deer-flow/backend/packages/harness/deerflow/config/model_config.py
greatmengqi edf345cd72 refactor(config): eliminate global mutable state, wire DeerFlowContext into runtime
- Freeze all config models (AppConfig + 15 sub-configs) with frozen=True
- Purify from_file() — remove 9 load_*_from_dict() side-effect calls
- Replace mtime/reload/push/pop machinery with single ContextVar + init_app_config()
- Delete 10 sub-module globals and their getters/setters/loaders
- Migrate 50+ consumers from get_*_config() to get_app_config().xxx

- Expand DeerFlowContext: app_config + thread_id + agent_name (frozen dataclass)
- Wire into Gateway runtime (worker.py) and DeerFlowClient via context= parameter
- Remove sandbox_id from runtime.context — flows through ThreadState.sandbox only
- Middleware/tools access runtime.context directly via Runtime[DeerFlowContext] generic
- resolve_context() retained at server entry points for LangGraph Server fallback
2026-04-14 01:18:19 +08:00


from pydantic import BaseModel, ConfigDict, Field


class ModelConfig(BaseModel):
    """Config section for a model."""

    model_config = ConfigDict(extra="allow", frozen=True)

    name: str = Field(..., description="Unique name for the model")
    display_name: str | None = Field(default=None, description="Display name for the model")
    description: str | None = Field(default=None, description="Description for the model")
    use: str = Field(
        ...,
        description="Class path of the model provider (e.g. langchain_openai.ChatOpenAI)",
    )
    model: str = Field(..., description="Model name")
    use_responses_api: bool | None = Field(
        default=None,
        description="Whether to route OpenAI ChatOpenAI calls through the /v1/responses API",
    )
    output_version: str | None = Field(
        default=None,
        description="Structured output version for OpenAI responses content, e.g. responses/v1",
    )
    supports_thinking: bool = Field(default=False, description="Whether the model supports thinking")
    supports_reasoning_effort: bool = Field(default=False, description="Whether the model supports reasoning effort")
    when_thinking_enabled: dict | None = Field(
        default=None,
        description="Extra settings to be passed to the model when thinking is enabled",
    )
    when_thinking_disabled: dict | None = Field(
        default=None,
        description="Extra settings to be passed to the model when thinking is disabled",
    )
    supports_vision: bool = Field(default=False, description="Whether the model supports vision/image inputs")
    thinking: dict | None = Field(
        default=None,
        description=(
            "Thinking settings for the model. If provided, these settings will be passed to the model when thinking is enabled. "
            "This is a shortcut for `when_thinking_enabled` and will be merged with `when_thinking_enabled` if both are provided."
        ),
    )
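A short usage sketch of the `extra="allow", frozen=True` combination, using a trimmed-down copy of the class above (only a few fields reproduced; the `temperature` extra is an invented example of a provider-specific kwarg):

```python
# Sketch: extra="allow" keeps unknown provider kwargs, frozen=True blocks mutation.
from pydantic import BaseModel, ConfigDict, Field, ValidationError


class ModelConfig(BaseModel):
    """Trimmed copy of the full ModelConfig for illustration."""

    model_config = ConfigDict(extra="allow", frozen=True)

    name: str = Field(..., description="Unique name for the model")
    use: str = Field(..., description="Class path of the model provider")
    model: str = Field(..., description="Model name")
    supports_thinking: bool = Field(default=False)


cfg = ModelConfig(
    name="main",
    use="langchain_openai.ChatOpenAI",
    model="gpt-4o",
    temperature=0.2,  # not declared on the class; kept because extra="allow"
)
print(cfg.temperature)  # 0.2 — extras are readable as attributes

try:
    cfg.name = "other"  # frozen=True: any assignment raises ValidationError
except ValidationError:
    print("instance is frozen")
```

Freezing the model is what makes it safe to share through a ContextVar: consumers can read fields concurrently, but none of them can mutate shared state.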