Compare commits

...

62 Commits

Author SHA1 Message Date
Henry Li fa20cc9a98 chore: update mock data 2025-07-23 09:02:04 +08:00
DanielWalnut c7edaf3e84 refine the research prompt (#459) 2025-07-22 14:13:10 +08:00
orifake e6ba1fcd82 fix: JSON parse error in link.tsx (#448)
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
2025-07-20 14:14:18 +08:00
Willem Jiang 4d65d20f01 fix: keep applying quick fix for #446 (#450)
* fix: the Backend returns 400 error

* fix: keep applying quick fix

* fix the lint error

* fixed .env.example settings
2025-07-20 14:10:46 +08:00
Willem Jiang ff67366c5c fix: the Backend returns 400 error (#449) 2025-07-20 11:38:18 +08:00
Willem Jiang d34f48819d feat: polish the mcp-server configure feature (#447)
* feat: disable the MCP server configuation by default

* Fixed the lint and test errors

* fix the lint error

* feat:update the mcp config documents and tests

* fixed the lint errors
2025-07-19 09:33:32 +08:00
Willem Jiang 75ad3e0dc6 feat: disable the MCP server configuation by default (#444)
* feat: disable the MCP server configuation by default

* Fixed the lint and test errors

* fix the lint error
2025-07-19 08:39:42 +08:00
DanielWalnut dbb24d7d14 fix: fix the bug introduced by coordinator messages update (#445) 2025-07-18 21:36:13 +08:00
Willem Jiang 933f3bb83a feat: add CORS setting for the backend application (#443)
* feat: add CORS setting for the backend application

* fix the formate issue
2025-07-18 18:04:03 +08:00
道心坚定韩道友 f17b06f206 fix:planner AttributeError 'list' object has no attribute 'get' (#436) 2025-07-18 09:27:15 +08:00
xiaofeng c14c548e0c fix:The console UI directly throws an error when user input is empty (#438) 2025-07-17 18:15:13 +08:00
Kuro Akuta c89b35805d fix: fix the coordinator's forgetting of its own messages. (#433) 2025-07-17 08:36:31 +08:00
DanielWalnut 774473cc18 fix: fix unit test cases for prompt enhancer (#431) 2025-07-16 11:41:43 +08:00
Affan Shaikhsurab b04225b7c8 fix: handle empty agent tuple in streaming workflow (#427)
Prevents IndexError when agent[0] is accessed on empty tuple,
resolving display issues with Gemini 2.0 Flash model.

Fixes #425
2025-07-16 08:59:11 +08:00
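The guard described in #427 can be sketched as follows; `agent_display_name` and its default value are illustrative names, not the project's actual code:

```python
# Hypothetical sketch of the #427 fix: check the tuple before indexing it,
# so an empty agent tuple no longer raises IndexError.
def agent_display_name(agent: tuple, default: str = "unknown") -> str:
    """Return agent[0], or a fallback when the tuple is empty."""
    if not agent:  # empty tuple -> would otherwise raise IndexError on agent[0]
        return default
    return agent[0]
```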
DanielWalnut b155e1eca6 refactor: refine the prompt enhancer pipeline (#426) 2025-07-15 19:21:59 +08:00
DanielWalnut 448001f532 refactor: human feedback doesn't need to check enough context (#423) 2025-07-15 18:51:41 +08:00
Willem Jiang 0f118fda92 fix: clean up the builder code (#417)
* fix: clean up the builder code

* fix:reformat the code
2025-07-15 17:22:50 +08:00
Affan Shaikhsurab ae30517f48 Update configuration_guide.md (#416)
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
2025-07-14 19:05:07 +08:00
orifake 8bdc6bfa2d fix: missing i18n message (#410)
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
2025-07-14 18:56:17 +08:00
HagonChan afbcdd68d8 fix: add missing translation for chat.page (#409)
* fix: Error: MISSING_MESSAGE: Could not resolve chat.page` in messages for locale 'en'

Fixed a `MISSING_MESSAGE` error that was occurring on the chat page due to missing translation keys for `chat.page` in the internationalization messages.

* Update en.json
2025-07-14 18:54:01 +08:00
Willem Jiang bf3bcee8e3 fix: main build fix for the merge #237 (#407) 2025-07-13 09:44:28 +08:00
Shiwen Cheng 0c46f8361b feat: support AzureChatOpenAI under configuring azure_endpoint or AZURE_OPENAI_ENDPOINT (#237)
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
2025-07-13 09:27:57 +08:00
Willem Jiang 86a89acac3 fix: update the reasoning model url in conf.yaml.example (#406) 2025-07-13 08:22:37 +08:00
Willem Jiang 2121510f63 fix:catch toolCalls doesn't return validate json (#405)
Co-authored-by: Willem Jiang <143703838+willem-bd@users.noreply.github.com>
2025-07-12 23:31:43 +08:00
cmq2525 0dc6c16c42 fix: repair_json_output cannot process msgs that do not starts with {, [ or ``` (#384)
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
2025-07-12 23:29:22 +08:00
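A rough sketch of the behavior #384 adds (the real `repair_json_output` lives in the repository's JSON utilities; this standalone version only illustrates the prefix handling):

```python
import re

# Illustrative only: strip a ``` fence if present, otherwise skip any prose
# before the first '{' or '[' so the remainder can be handed to a JSON parser.
def extract_json_payload(msg: str) -> str:
    msg = msg.strip()
    fence = re.match(r"^```(?:json|js)?\s*(.*?)\s*```$", msg, re.DOTALL)
    if fence:
        return fence.group(1)
    brackets = [i for i in (msg.find("{"), msg.find("[")) if i != -1]
    return msg[min(brackets):] if brackets else msg
```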
vvky 5abf8c1f5e fix: correctly remove outermost code block markers in model responses (fix markdown rendering issue) (#386)
* fix: correctly remove outermost code block markers in frontend

* fix: correctly remove outermost quote block markers in 'dropMarkdownQuote'

* fix: correctly remove outermost quote block markers in 'dropMarkdownQuote'

* fix: correctly remove outermost quote block markers in 'dropMarkdownQuote'

---------

Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
2025-07-12 22:19:30 +08:00
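The actual change in #386 is in the web frontend (TypeScript); this Python sketch only shows the idea of removing the outermost fence while leaving nested fences intact:

```python
# Remove only the OUTERMOST code-fence pair; fenced blocks inside the
# content must survive, which is what the markdown-rendering fix is about.
def drop_outer_code_fence(text: str) -> str:
    lines = text.strip().splitlines()
    if len(lines) >= 2 and lines[0].startswith("```") and lines[-1].strip() == "```":
        return "\n".join(lines[1:-1])
    return text
```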
Willem Jiang 70b86d8464 feat: add the Chinese i8n support on the setting table (#404)
* feat: Added i8n to the mcp table

* feat: Added i8n to the about table
2025-07-12 21:28:08 +08:00
johnny0120 e1187d7d02 feat: add i18n support and add Chinese (#372)
* feat: add i18n support and add Chinese

* fix: resolve conflicts

* Update en.json with cancle settings

* Update zh.json with settngs cancle

---------

Co-authored-by: johnny0120 <15564476+johnny0120@users.noreply.github.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Co-authored-by: Willem Jiang <143703838+willem-bd@users.noreply.github.com>
2025-07-12 15:18:28 +08:00
Willem Jiang 136f7eaa4e fix:upgrade uv version to avoid the big change of uv.lock (#402)
Co-authored-by: Willem Jiang <143703838+willem-bd@users.noreply.github.com>
2025-07-12 14:46:17 +08:00
Willem Jiang 3c46201ff0 fix: fix the lint check errors of the main branch (#403) 2025-07-12 14:43:25 +08:00
yihong 2363b21447 fix: some lint fix using tools (#98)
* fix: some lint fix using tools

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: md lint

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: some lint fix using tools

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: tests

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
2025-07-12 13:59:02 +08:00
Willem Jiang 0d3255cdae feat: add the vscode unit test debug settings (#346) 2025-07-12 13:27:47 +08:00
Kirk Lin 9f8f060506 feat(llm): Add retry mechanism for LLM API calls (#400)
* feat(llm): Add retry mechanism for LLM API calls

* feat: configure max_retries for LLM calls via conf.yaml

---------

Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
2025-07-12 10:12:07 +08:00
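A minimal sketch of what #400 describes; the real implementation wraps the LLM client and reads `max_retries` from `conf.yaml`, and the backoff policy shown here is an assumption:

```python
import time

def call_with_retry(fn, max_retries: int = 3, base_delay: float = 0.0):
    """Call fn(), retrying transient failures up to max_retries attempts."""
    last_exc = None
    for attempt in range(1, max_retries + 1):
        try:
            return fn()
        except Exception as exc:  # real code would catch provider-specific errors
            last_exc = exc
            time.sleep(base_delay * attempt)  # linear backoff between attempts
    raise last_exc
```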
HagonChan dfd4712d9f feat: add Domain Control Features for Tavily Search Engine (#401)
* feat: add Domain Control Features for Tavily Search Engine

* fixed

* chore: update config.md
2025-07-12 08:53:51 +08:00
MaojiaSheng 859c6e3c5d doc: add knowledgebase rag examples in readme (#383)
* doc: add private knowledgebase examples in readme

* doc: add private knowledgebase examples in readme

* doc: add private knowledgebase examples in readme

---------

Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
2025-07-07 13:05:01 +08:00
Willem Jiang d8016809b2 fix: the typo of setup-uv action (#393)
* fix: spine the github hash on the third party actions

* fix: the typo of action

* fix: try to fix the build by specify the action version
2025-07-07 08:43:11 +08:00
Willem Jiang 6c254c0783 fix: spine the github hash on the third party actions (#392) 2025-07-07 08:18:17 +08:00
殷逸维 d4fbc86b28 fix: docker build (#385)
* fix: docker build

* modify base docker image
2025-07-05 11:07:52 +08:00
Abeautifulsnow 7ad11bf86c refactor: simplify style mapping by using upper case only (#378)
* improve: add abort btn to abort the mcp add request.

* refactor: simplify style mapping by using upper case only

* format: execute uv run black --preview . to format python files.
2025-07-04 08:27:20 +08:00
殷逸维 be893eae2b feat: integrate VikingDB Knowledge Base into rag retrieving tool (#381)
Co-authored-by: Henry Li <henry1943@163.com>
2025-07-03 10:06:42 +08:00
Johannes Maron 5977b4a03e Publish containers to GitHub (#375)
This workflow creates two offical container images:

* `ghcr.io/codingjoe/deer-flow:main`
* `ghcr.io/codingjoe/deer-flow-web:main`
2025-06-29 20:55:51 +08:00
JeffJiang 52dfdd83ae fix: next server fetch error (#374) 2025-06-27 14:23:04 +08:00
Willem Jiang f27c96e692 fix: the lint error of llm.py (#369) 2025-06-26 10:36:26 +08:00
Tony M b7373fbe70 Add support for self-signed certs from model providers (#276)
* Add support for self-signed certs from model providers

* cleanup

---------

Co-authored-by: tonydoesathing <tmastromarino@cpacket.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
2025-06-26 10:17:26 +08:00
Abeautifulsnow 9c2d4724e3 improve: add abort btn to abort the mcp add request. (#284) 2025-06-26 08:51:46 +08:00
johnny0120 aa06cd6fb6 fix: replace json before js fence (#344) 2025-06-26 08:40:32 +08:00
Young 82e1b65792 fix: settings tab display name (#250) 2025-06-19 14:33:00 +08:00
Willem Jiang dcdd7288ed test: add unit tests of the app (#305)
* test: add unit tests in server

* test: add unit tests of app.py in server

* test: reformat the codes

* test: add more tests to cover the exception part

* test: add more tests on the server app part

* fix: don't show the detail exception to the client

* test: try to fix the CI test

* fix: keep the TTS API call without exposure information

* Fixed the unit test errors

* Fixed the lint error
2025-06-18 14:13:05 +08:00
Willem Jiang 89f3d731c9 Fix: the test errors of test_nodes (#345) 2025-06-18 11:59:33 +08:00
Willem Jiang c0b04aaba2 test: add unit tests for graph (#296)
* test: added unit test of builder

* test: Add unit tests for nodes.py

* test: add more unit tests in test_nodes

* test: try to fix the unit test error on GitHub

* test: reformate the code of test_nodes.py

* Fix the test error of reset the local argument

* Fixed the test error by setup args

* reformat the code
2025-06-18 10:05:02 +08:00
Willem Jiang 4048ca67dd test: add test of json_utils (#309)
* test: add test of json_utils

* reformat the code
2025-06-18 10:04:46 +08:00
lelili2021 30a189cf26 fix: update several links related to volcengine in Readme (#333) 2025-06-17 08:49:29 +08:00
Ryan Guo e03b12b97f doc: provide a workable guideline update for ollama user (#323) 2025-06-17 08:47:54 +08:00
Luludle 8823ffdb6a fix: add line breaks to mcp edit dialog (#313) 2025-06-17 08:31:35 +08:00
3Spiders 4fe43153b1 fix(web): priority displayName for settings name error (#336) 2025-06-17 08:26:13 +08:00
Willem Jiang 4fb053b6d2 Revert "fix: solves the malformed json output and pydantic validation error p…" (#325)
This reverts commit a7315b46df.
2025-06-14 22:04:03 +08:00
DanielWalnut 19fa1e97c3 feat: add deep think feature (#311)
* feat: implement backend logic

* feat: implement api/config endpoint

* rename the symbol

* feat: re-implement configuration at client-side

* feat: add client-side of deep thinking

* fix backend bug

* feat: add reasoning block

* docs: update readme

* fix: translate into English

* fix: change icon to lightbulb

* feat: ignore more bad cases

* feat: adjust thinking layout, and implement auto scrolling

* docs: add comments

---------

Co-authored-by: Henry Li <henry1943@163.com>
2025-06-14 13:12:43 +08:00
Tax a7315b46df fix: solves the malformed json output and pydantic validation error produced by the 'planner' node by forcing the llm response to strictly comply with the pydantic 'Plan' model (#322) 2025-06-14 10:13:30 +08:00
JeffJiang 03e6a1a6e7 fix: mcp config styles (#320) 2025-06-13 18:01:19 +08:00
Lan 7d38e5f900 feat: append try catch (#280) 2025-06-12 20:43:50 +08:00
Willem Jiang 4c2fe2e7f5 test: add more unit tests of tools (#315)
* test: add more test on test_tts.py

* test: add unit test of search and retriever in tools

* test: remove the main code of search.py

* test: add the travily_search unit test

* reformate the codes

* test: add unit tests of tools

* Added the pytest-asyncio dependency

* added the license header of test_tavily_search_api_wrapper.py
2025-06-12 20:43:32 +08:00
Henry Li bb7dc6e98c docs: add VolcEngine introduction. (#314) 2025-06-12 13:36:29 +08:00
136 changed files with 37291 additions and 2126 deletions
+17
@@ -7,6 +7,17 @@ NEXT_PUBLIC_API_URL="http://localhost:8000/api"
AGENT_RECURSION_LIMIT=30
# CORS settings
# Comma-separated list of allowed origins for CORS requests
# Example: ALLOWED_ORIGINS=http://localhost:3000,http://example.com
ALLOWED_ORIGINS=http://localhost:3000
# Enable or disable MCP server configuration, the default is false.
# Please enable this feature before securing your front-end and back-end in a managed environment.
# Otherwise, your system could be compromised.
ENABLE_MCP_SERVER_CONFIGURATION=false
# Search Engine, Supported values: tavily (recommended), duckduckgo, brave_search, arxiv
SEARCH_API=tavily
TAVILY_API_KEY=tvly-xxx
@@ -14,6 +25,12 @@ TAVILY_API_KEY=tvly-xxx
# JINA_API_KEY=jina_xxx # Optional, default is None
# Optional, RAG provider
# RAG_PROVIDER=vikingdb_knowledge_base
# VIKINGDB_KNOWLEDGE_BASE_API_URL="api-knowledgebase.mlp.cn-beijing.volces.com"
# VIKINGDB_KNOWLEDGE_BASE_API_AK="AKxxx"
# VIKINGDB_KNOWLEDGE_BASE_API_SK=""
# VIKINGDB_KNOWLEDGE_BASE_RETRIEVAL_SIZE=15
# RAG_PROVIDER=ragflow
# RAGFLOW_API_URL="http://localhost:9388"
# RAGFLOW_API_KEY="ragflow-xxx"
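The `ALLOWED_ORIGINS` value added above is a comma-separated list; a backend would typically parse it along these lines before handing it to a CORS middleware (the helper below is illustrative, not the project's actual code):

```python
import os

def parse_allowed_origins(default: str = "http://localhost:3000") -> list[str]:
    """Split ALLOWED_ORIGINS on commas, trimming whitespace and empty entries."""
    raw = os.environ.get("ALLOWED_ORIGINS", default)
    return [origin.strip() for origin in raw.split(",") if origin.strip()]
```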
+95
@@ -0,0 +1,95 @@
name: Publish Containers
on:
push:
branches:
- main
release:
types: [published]
workflow_dispatch:
jobs:
backend-container:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
attestations: write
id-token: write
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Log in to the Container registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
- name: Build and push Docker image
id: push
uses: docker/build-push-action@v6
with:
context: .
file: Dockerfile
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
- name: Generate artifact attestation
uses: actions/attest-build-provenance@v2
with:
subject-name: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME}}
subject-digest: ${{ steps.push.outputs.digest }}
push-to-registry: true
frontend-container:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
attestations: write
id-token: write
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}-web
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Log in to the Container registry
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 #v3.4.0
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 #v5.7.0
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
- name: Build and push Docker image
id: push
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 #v6.18.0
with:
context: web
file: web/Dockerfile
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
- name: Generate artifact attestation
uses: actions/attest-build-provenance@v2
with:
subject-name: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME}}
subject-digest: ${{ steps.push.outputs.digest }}
push-to-registry: true
+1 -1
@@ -16,7 +16,7 @@ jobs:
- uses: actions/checkout@v3
- name: Install the latest version of uv
uses: astral-sh/setup-uv@v5
uses: astral-sh/setup-uv@v6.3.1
with:
version: "latest"
+1 -1
@@ -16,7 +16,7 @@ jobs:
- uses: actions/checkout@v3
- name: Install the latest version of uv
uses: astral-sh/setup-uv@v5
uses: astral-sh/setup-uv@v6.3.1
with:
version: "latest"
+1
@@ -6,6 +6,7 @@ dist/
wheels/
*.egg-info
.coverage
.coverage.*
agent_history.gif
static/browser_history/*.gif
+30
@@ -1,6 +1,36 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "Debug Tests",
"type": "debugpy",
"request": "launch",
"module": "pytest",
"args": [
"${workspaceFolder}/tests",
"-v",
"-s"
],
"console": "integratedTerminal",
"justMyCode": false,
"env": {
"PYTHONPATH": "${workspaceFolder}"
}
},
{
"name": "Debug Current Test File",
"type": "debugpy",
"request": "launch",
"module": "pytest",
"args": [
"${file}",
"-v",
"-s"
],
"console": "integratedTerminal",
"justMyCode": false
},
{
"name": "Python: 当前文件",
"type": "debugpy",
+7
@@ -0,0 +1,7 @@
{
"python.testing.pytestArgs": [
"tests"
],
"python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true
}
+11
@@ -17,11 +17,14 @@ There are many ways you can contribute to DeerFlow:
1. Fork the repository
2. Clone your fork:
```bash
git clone https://github.com/bytedance/deer-flow.git
cd deer-flow
```
3. Set up your development environment:
```bash
# Install dependencies, uv will take care of the python interpreter and venv creation
uv sync
@@ -30,7 +33,9 @@ There are many ways you can contribute to DeerFlow:
uv pip install -e ".[dev]"
uv pip install -e ".[test]"
```
4. Configure pre-commit hooks:
```bash
chmod +x pre-commit
ln -s ../../pre-commit .git/hooks/pre-commit
@@ -39,6 +44,7 @@ There are many ways you can contribute to DeerFlow:
## Development Process
1. Create a new branch:
```bash
git checkout -b feature/amazing-feature
```
@@ -50,6 +56,7 @@ There are many ways you can contribute to DeerFlow:
- Update documentation as needed
3. Run tests and checks:
```bash
make test # Run tests
make lint # Run linting
@@ -58,11 +65,13 @@ There are many ways you can contribute to DeerFlow:
```
4. Commit your changes:
```bash
git commit -m 'Add some amazing feature'
```
5. Push to your fork:
```bash
git push origin feature/amazing-feature
```
@@ -90,6 +99,7 @@ There are many ways you can contribute to DeerFlow:
## Testing
Run the test suite:
```bash
# Run all tests
make test
@@ -122,6 +132,7 @@ make format
## Need Help?
If you need help with anything:
- Check existing issues and discussions
- Join our community channels
- Ask questions in discussions
+1 -1
@@ -1,4 +1,4 @@
FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim
FROM ghcr.io/astral-sh/uv:python3.12-bookworm
# Install uv.
COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv
+1
@@ -8,6 +8,7 @@ format:
lint:
uv run black --check .
uv run ruff check .
serve:
uv run server.py --reload
+21 -14
@@ -12,13 +12,16 @@
**DeerFlow** (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) is a community-driven Deep Research framework that builds upon the incredible work of the open source community. Our goal is to combine language models with specialized tools for tasks like web search, crawling, and Python code execution, while giving back to the community that made this possible.
Currently, DeerFlow has officially entered the [FaaS Application Center of Volcengine](https://console.volcengine.com/vefaas/region:vefaas+cn-beijing/market). Users can experience it online through the [experience link](https://console.volcengine.com/vefaas/region:vefaas+cn-beijing/market/deerflow/?channel=github&source=deerflow) to intuitively feel its powerful functions and convenient operations. At the same time, to meet the deployment needs of different users, DeerFlow supports one-click deployment based on Volcengine. Click the [deployment link](https://console.volcengine.com/vefaas/region:vefaas+cn-beijing/application/create?templateId=683adf9e372daa0008aaed5c&channel=github&source=deerflow) to quickly complete the deployment process and start an efficient research journey.
Please visit [our official website](https://deerflow.tech/) for more details.
## Demo
### Video
https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e
<https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e>
In this demo, we showcase how to use DeerFlow to:
@@ -144,19 +147,18 @@ Explore more details in the [`web`](./web/) directory.
## Supported Search Engines
### Web Search
DeerFlow supports multiple search engines that can be configured in your `.env` file using the `SEARCH_API` variable:
- **Tavily** (default): A specialized search API for AI applications
- Requires `TAVILY_API_KEY` in your `.env` file
- Sign up at: https://app.tavily.com/home
- **DuckDuckGo**: Privacy-focused search engine
- No API key required
- **Brave Search**: Privacy-focused search engine with advanced features
- Requires `BRAVE_SEARCH_API_KEY` in your `.env` file
- Sign up at: https://brave.com/search/api/
@@ -171,6 +173,19 @@ To configure your preferred search engine, set the `SEARCH_API` variable in your
SEARCH_API=tavily
```
### Private Knowledgebase
DeerFlow supports private knowledgebases such as RAGFlow and VikingDB, so that you can use your private documents to answer questions.
- **[RAGFlow](https://ragflow.io/docs/dev/)**: open source RAG engine
```
# examples in .env.example
RAG_PROVIDER=ragflow
RAGFLOW_API_URL="http://localhost:9388"
RAGFLOW_API_KEY="ragflow-xxx"
RAGFLOW_RETRIEVAL_SIZE=10
```
## Features
### Core Capabilities
@@ -184,23 +199,15 @@ SEARCH_API=tavily
### Tools and MCP Integrations
- 🔍 **Search and Retrieval**
- Web search via Tavily, Brave Search and more
- Crawling with Jina
- Advanced content extraction
- Support for private knowledgebase
- 📃 **RAG Integration**
- Supports mentioning files from [RAGFlow](https://github.com/infiniflow/ragflow) within the input box. [Start up RAGFlow server](https://ragflow.io/docs/dev/).
```bash
# .env
RAG_PROVIDER=ragflow
RAGFLOW_API_URL="http://localhost:9388"
RAGFLOW_API_KEY="ragflow-xxx"
RAGFLOW_RETRIEVAL_SIZE=10
```
- 🔗 **MCP Seamless Integration**
- Expand capabilities for private domain access, knowledge graph, web browsing and more
- Facilitates integration of diverse research tools and methodologies
@@ -208,7 +215,6 @@ SEARCH_API=tavily
### Human Collaboration
- 🧠 **Human-in-the-loop**
- Supports interactive modification of research plans using natural language
- Supports auto-acceptance of research plans
@@ -516,6 +522,7 @@ DeerFlow includes a human in the loop mechanism that allows you to review, edit,
- Via API: Set `auto_accepted_plan: true` in your request
4. **API Integration**: When using the API, you can provide feedback through the `feedback` parameter:
```json
{
"messages": [{ "role": "user", "content": "What is quantum computing?" }],
+43 -34
@@ -11,15 +11,18 @@
**DeerFlow** (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) ist ein Community-getriebenes Framework für tiefgehende Recherche, das auf der großartigen Arbeit der Open-Source-Community aufbaut. Unser Ziel ist es, Sprachmodelle mit spezialisierten Werkzeugen für Aufgaben wie Websuche, Crawling und Python-Code-Ausführung zu kombinieren und gleichzeitig der Community, die dies möglich gemacht hat, etwas zurückzugeben.
Derzeit ist DeerFlow offiziell in das FaaS-Anwendungszentrum von Volcengine eingezogen. Benutzer können es über den Erfahrungslink online erleben, um seine leistungsstarken Funktionen und bequemen Operationen intuitiv zu spüren. Gleichzeitig unterstützt DeerFlow zur Erfüllung der Bereitstellungsanforderungen verschiedener Benutzer die Ein-Klick-Bereitstellung basierend auf Volcengine. Klicken Sie auf den Bereitstellungslink, um den Bereitstellungsprozess schnell abzuschließen und eine effiziente Forschungsreise zu beginnen.
Besuchen Sie [unsere offizielle Website](https://deerflow.tech/) für weitere Details.
## Demo
### Video
https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e
<https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e>
In dieser Demo zeigen wir, wie man DeerFlow nutzt, um:
- Nahtlos mit MCP-Diensten zu integrieren
- Den Prozess der tiefgehenden Recherche durchzuführen und einen umfassenden Bericht mit Bildern zu erstellen
- Podcast-Audio basierend auf dem generierten Bericht zu erstellen
@@ -34,7 +37,6 @@ In dieser Demo zeigen wir, wie man DeerFlow nutzt, um:
---
## 📑 Inhaltsverzeichnis
- [🚀 Schnellstart](#schnellstart)
@@ -48,12 +50,12 @@ In dieser Demo zeigen wir, wie man DeerFlow nutzt, um:
- [💖 Danksagungen](#danksagungen)
- [⭐ Star-Verlauf](#star-verlauf)
## Schnellstart
DeerFlow ist in Python entwickelt und kommt mit einer in Node.js geschriebenen Web-UI. Um einen reibungslosen Einrichtungsprozess zu gewährleisten, empfehlen wir die Verwendung der folgenden Tools:
### Empfohlene Tools
- **[`uv`](https://docs.astral.sh/uv/getting-started/installation/):**
Vereinfacht die Verwaltung von Python-Umgebungen und Abhängigkeiten. `uv` erstellt automatisch eine virtuelle Umgebung im Stammverzeichnis und installiert alle erforderlichen Pakete für Sie—keine manuelle Installation von Python-Umgebungen notwendig.
@@ -64,11 +66,14 @@ DeerFlow ist in Python entwickelt und kommt mit einer in Node.js geschriebenen W
Installieren und verwalten Sie Abhängigkeiten des Node.js-Projekts.
### Umgebungsanforderungen
Stellen Sie sicher, dass Ihr System die folgenden Mindestanforderungen erfüllt:
- **[Python](https://www.python.org/downloads/):** Version `3.12+`
- **[Node.js](https://nodejs.org/en/download/):** Version `22+`
### Installation
```bash
# Repository klonen
git clone https://github.com/bytedance/deer-flow.git
@@ -136,25 +141,24 @@ bootstrap.bat -d
Weitere Details finden Sie im Verzeichnis [`web`](./web/).
## Unterstützte Suchmaschinen
DeerFlow unterstützt mehrere Suchmaschinen, die in Ihrer `.env`-Datei über die Variable `SEARCH_API` konfiguriert werden können:
- **Tavily** (Standard): Eine spezialisierte Such-API für KI-Anwendungen
- Erfordert `TAVILY_API_KEY` in Ihrer `.env`-Datei
- Registrieren Sie sich unter: https://app.tavily.com/home
- Erfordert `TAVILY_API_KEY` in Ihrer `.env`-Datei
- Registrieren Sie sich unter: <https://app.tavily.com/home>
- **DuckDuckGo**: Datenschutzorientierte Suchmaschine
- Kein API-Schlüssel erforderlich
- Kein API-Schlüssel erforderlich
- **Brave Search**: Datenschutzorientierte Suchmaschine mit erweiterten Funktionen
- Erfordert `BRAVE_SEARCH_API_KEY` in Ihrer `.env`-Datei
- Registrieren Sie sich unter: https://brave.com/search/api/
- Erfordert `BRAVE_SEARCH_API_KEY` in Ihrer `.env`-Datei
- Registrieren Sie sich unter: <https://brave.com/search/api/>
- **Arxiv**: Wissenschaftliche Papiersuche für akademische Forschung
- Kein API-Schlüssel erforderlich
- Spezialisiert auf wissenschaftliche und akademische Papiere
- Kein API-Schlüssel erforderlich
- Spezialisiert auf wissenschaftliche und akademische Papiere
Um Ihre bevorzugte Suchmaschine zu konfigurieren, setzen Sie die Variable `SEARCH_API` in Ihrer `.env`-Datei:
@@ -168,40 +172,39 @@ SEARCH_API=tavily
### Kernfähigkeiten
- 🤖 **LLM-Integration**
- Unterstützt die Integration der meisten Modelle über [litellm](https://docs.litellm.ai/docs/providers).
- Unterstützung für Open-Source-Modelle wie Qwen
- OpenAI-kompatible API-Schnittstelle
- Mehrstufiges LLM-System für unterschiedliche Aufgabenkomplexitäten
- Unterstützt die Integration der meisten Modelle über [litellm](https://docs.litellm.ai/docs/providers).
- Unterstützung für Open-Source-Modelle wie Qwen
- OpenAI-kompatible API-Schnittstelle
- Mehrstufiges LLM-System für unterschiedliche Aufgabenkomplexitäten
### Tools und MCP-Integrationen
- 🔍 **Suche und Abruf**
- Websuche über Tavily, Brave Search und mehr
- Crawling mit Jina
- Fortgeschrittene Inhaltsextraktion
- Websuche über Tavily, Brave Search und mehr
- Crawling mit Jina
- Fortgeschrittene Inhaltsextraktion
- 🔗 **MCP Nahtlose Integration**
- Erweiterte Fähigkeiten für privaten Domänenzugriff, Wissensgraphen, Webbrowsing und mehr
- Erleichtert die Integration verschiedener Forschungswerkzeuge und -methoden
- Erweiterte Fähigkeiten für privaten Domänenzugriff, Wissensgraphen, Webbrowsing und mehr
- Erleichtert die Integration verschiedener Forschungswerkzeuge und -methoden
### Menschliche Zusammenarbeit
- 🧠 **Mensch-in-der-Schleife**
- Unterstützt interaktive Modifikation von Forschungsplänen mit natürlicher Sprache
- Unterstützt automatische Akzeptanz von Forschungsplänen
- Unterstützt interaktive Modifikation von Forschungsplänen mit natürlicher Sprache
- Unterstützt automatische Akzeptanz von Forschungsplänen
- 📝 **Bericht-Nachbearbeitung**
- Unterstützt Notion-ähnliche Blockbearbeitung
- Ermöglicht KI-Verfeinerungen, einschließlich KI-unterstützter Polierung, Satzkürzung und -erweiterung
- Angetrieben von [tiptap](https://tiptap.dev/)
- Unterstützt Notion-ähnliche Blockbearbeitung
- Ermöglicht KI-Verfeinerungen, einschließlich KI-unterstützter Polierung, Satzkürzung und -erweiterung
- Angetrieben von [tiptap](https://tiptap.dev/)
### Inhaltserstellung
- 🎙️ **Podcast- und Präsentationserstellung**
- KI-gestützte Podcast-Skripterstellung und Audiosynthese
- Automatisierte Erstellung einfacher PowerPoint-Präsentationen
- Anpassbare Vorlagen für maßgeschneiderte Inhalte
- KI-gestützte Podcast-Skripterstellung und Audiosynthese
- Automatisierte Erstellung einfacher PowerPoint-Präsentationen
- Anpassbare Vorlagen für maßgeschneiderte Inhalte
## Architektur
@@ -253,7 +256,6 @@ curl --location 'http://localhost:8000/api/tts' \
--output speech.mp3
```
## Entwicklung
### Testen
@@ -311,9 +313,10 @@ langgraph dev
```
Nach dem Start des LangGraph-Servers sehen Sie mehrere URLs im Terminal:
- API: http://127.0.0.1:2024
- Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
- API-Dokumentation: http://127.0.0.1:2024/docs
- API: <http://127.0.0.1:2024>
- Studio UI: <https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024>
- API-Dokumentation: <http://127.0.0.1:2024/docs>
Öffnen Sie den Studio UI-Link in Ihrem Browser, um auf die Debugging-Schnittstelle zuzugreifen.
@@ -328,6 +331,7 @@ In der Studio UI können Sie:
5. Feedback während der Planungsphase geben, um Forschungspläne zu verfeinern
Wenn Sie ein Forschungsthema in der Studio UI einreichen, können Sie die gesamte Workflow-Ausführung sehen, einschließlich:
- Die Planungsphase, in der der Forschungsplan erstellt wird
- Die Feedback-Schleife, in der Sie den Plan ändern können
- Die Forschungs- und Schreibphasen für jeden Abschnitt
@@ -338,6 +342,7 @@ Wenn Sie ein Forschungsthema in der Studio UI einreichen, können Sie die gesamt
DeerFlow unterstützt LangSmith-Tracing, um Ihnen beim Debuggen und Überwachen Ihrer Workflows zu helfen. Um LangSmith-Tracing zu aktivieren:
1. Stellen Sie sicher, dass Ihre `.env`-Datei die folgenden Konfigurationen enthält (siehe `.env.example`):
```bash
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
@@ -346,6 +351,7 @@ DeerFlow unterstützt LangSmith-Tracing, um Ihnen beim Debuggen und Überwachen
```
2. Starten Sie das Tracing mit LangSmith lokal, indem Sie folgenden Befehl ausführen:
```bash
langgraph dev
```
@@ -419,6 +425,7 @@ uv run main.py --help
Die Anwendung unterstützt jetzt einen interaktiven Modus mit eingebauten Fragen in Englisch und Chinesisch:
1. Starten Sie den interaktiven Modus:
```bash
uv run main.py --interactive
```
@@ -444,6 +451,7 @@ DeerFlow enthält einen Mensch-in-der-Schleife-Mechanismus, der es Ihnen ermögl
- Über API: Setzen Sie `auto_accepted_plan: true` in Ihrer Anfrage
4. **API-Integration**: Bei Verwendung der API können Sie Feedback über den Parameter `feedback` geben:
```json
{
"messages": [{"role": "user", "content": "Was ist Quantencomputing?"}],
@@ -483,6 +491,7 @@ We would like to express our sincere appreciation to the following projects for their
These projects exemplify the transformative power of open-source collaboration, and we are proud to build on their foundations.
### Core Contributors
A heartfelt thank-you goes to the core authors of `DeerFlow`, whose vision, passion, and dedication brought this project to life:
- **[Daniel Walnut](https://github.com/hetaoBackend/)**
@@ -492,4 +501,4 @@ Their unwavering commitment and expertise have been the driving force behind
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=bytedance/deer-flow&type=Date)](https://star-history.com/#bytedance/deer-flow&Date)
@@ -11,13 +11,15 @@
**DeerFlow** (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) is a community-driven Deep Research framework built on the incredible work of the open-source community. Our goal is to combine language models with specialized tools for tasks such as web search, crawling, and Python code execution, while giving back to the community that made this possible.
Currently, DeerFlow has officially joined Volcengine's FaaS Application Center. Users can try it online through the experience link to get an intuitive feel for its powerful features and convenient operation. To meet the deployment needs of different users, DeerFlow also supports one-click deployment based on Volcengine. Click the deployment link to quickly complete the deployment process and start an efficient research journey.
Please visit [our official website](https://deerflow.tech/) for more details.
## Demo
### Video
https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e
<https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e>
In this demo, we showcase how to use DeerFlow to:
@@ -148,7 +150,7 @@ DeerFlow supports multiple search engines that can be configured in your
- **Tavily** (default): A specialized search API for AI applications
- Requires `TAVILY_API_KEY` in your `.env` file
- Sign up at: https://app.tavily.com/home
- Sign up at: <https://app.tavily.com/home>
- **DuckDuckGo**: Privacy-focused search engine
@@ -157,7 +159,7 @@ DeerFlow supports multiple search engines that can be configured in your
- **Brave Search**: Privacy-focused search engine with advanced features
- Requires `BRAVE_SEARCH_API_KEY` in your `.env` file
- Sign up at: https://brave.com/search/api/
- Sign up at: <https://brave.com/search/api/>
- **Arxiv**: Scientific paper search for academic research
- No API key required
@@ -323,9 +325,9 @@ langgraph dev
After starting the LangGraph server, you will see several URLs in the terminal:
- API: http://127.0.0.1:2024
- Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
- API Docs: http://127.0.0.1:2024/docs
- API: <http://127.0.0.1:2024>
- Studio UI: <https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024>
- API Docs: <http://127.0.0.1:2024/docs>
Open the Studio UI link in your browser to access the debugging interface.
@@ -351,6 +353,7 @@ When you submit a research topic in the Studio UI, you can see the entire
DeerFlow supports LangSmith tracing to help you debug and monitor your workflows. To enable LangSmith tracing:
1. Make sure your `.env` file has the following settings (see `.env.example`):
```bash
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
@@ -359,6 +362,7 @@ DeerFlow supports LangSmith tracing to help you debug and monitor your
```
2. Start tracing and visualize the graph locally with LangSmith by running:
```bash
langgraph dev
```
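Before running `langgraph dev` with tracing enabled, it can help to sanity-check the environment first. A minimal sketch: `LANGSMITH_API_KEY` is an assumed variable name beyond the two shown in the snippet above, so check your `.env.example` for the exact set.

```python
import os

# Required LangSmith settings; the first two appear in the .env example above,
# LANGSMITH_API_KEY is an assumed additional variable.
REQUIRED_VARS = ["LANGSMITH_TRACING", "LANGSMITH_ENDPOINT", "LANGSMITH_API_KEY"]

def missing_langsmith_vars(env):
    """Return the required LangSmith variables that are absent or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_langsmith_vars(os.environ)
    if missing:
        print("Missing LangSmith settings:", ", ".join(missing))
```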
@@ -502,6 +506,7 @@ DeerFlow includes a human-in-the-loop mechanism that lets you review, edit
- Via API: Set `auto_accepted_plan: true` in your request
4. **API Integration**: When using the API, you can provide feedback via the `feedback` parameter:
```json
{
"messages": [{ "role": "user", "content": "What is quantum computing?" }],
@@ -551,4 +556,4 @@ Their unwavering commitment and expertise have been the driving force behind
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=bytedance/deer-flow&type=Date)](https://star-history.com/#bytedance/deer-flow&Date)
@@ -9,17 +9,19 @@
**DeerFlow** (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) is a community-driven deep research framework built on the outstanding work of the open-source community. Our goal is to combine language models with specialized tools such as web search, crawling, and Python code execution, while giving back to the community that made this possible.
Currently, DeerFlow has officially joined Volcengine's FaaS Application Center. Users can try it online through the experience link to get an intuitive feel for its powerful features and convenient operation. To meet the deployment needs of different users, DeerFlow also supports one-click deployment based on Volcengine. Click the deployment link to quickly complete the deployment process and start an efficient research journey.
Please visit [DeerFlow's official website](https://deerflow.tech/) for more details.
## Demo
### Video
https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e
<https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e>
In this demo, we show how to use DeerFlow to:
- Seamlessly integrate with MCP services
- Conduct a deep research process and produce a comprehensive report with images
- Create podcast audio based on the generated report
@@ -143,21 +145,18 @@ bootstrap.bat -d
DeerFlow supports multiple search engines, configurable via the `SEARCH_API` variable in your `.env` file:
- **Tavily** (default): A specialized search API for AI applications
- Requires `TAVILY_API_KEY` in your `.env` file
- Sign up at: https://app.tavily.com/home
- Sign up at: <https://app.tavily.com/home>
- **DuckDuckGo**: Privacy-focused search engine
- No API key required
- **Brave Search**: Privacy-focused search engine with advanced features
- Requires `BRAVE_SEARCH_API_KEY` in your `.env` file
- Sign up at: https://brave.com/search/api/
- Sign up at: <https://brave.com/search/api/>
- **Arxiv**: Scientific paper search for academic research
- No API key required
- Dedicated to scientific and academic papers
To set your preferred search engine, configure the `SEARCH_API` variable in your `.env` file:
@@ -171,41 +170,39 @@ SEARCH_API=tavily
### Core Features
- 🤖 **LLM Integration**
- Supports integration with most models through [litellm](https://docs.litellm.ai/docs/providers)
- Supports open-source models such as Qwen
- OpenAI-compatible API interface
- Multi-tier LLM system for tasks of varying complexity
### Tools and MCP Integration
- 🔍 **Search and Retrieval**
- Web search via Tavily, Brave Search, and more
- Crawling with Jina
- Advanced content extraction
- 🔗 **Seamless MCP Integration**
- Extends capabilities such as private-domain access, knowledge graphs, and web browsing
- Facilitates the integration of diverse research tools and methodologies
### Human Collaboration
- 🧠 **Human-in-the-Loop**
- Supports interactive modification of research plans using natural language
- Supports automatic acceptance of research plans
- 📝 **Post-Editing of Reports**
- Supports Notion-like block editing
- Enables AI-assisted refinements such as polishing, sentence shortening, and expansion
- Powered by [tiptap](https://tiptap.dev/)
### Content Creation
- 🎙️ **Podcast and Presentation Generation**
- AI-powered podcast script generation and speech synthesis
- Automatic creation of simple PowerPoint presentations
- Customizable templates for personalized content
## Architecture
@@ -241,6 +238,27 @@ DeerFlow implements a modular multi-agent system for automated research and code analysis
- Processes and structures the collected information
- Generates comprehensive research reports
## Text-to-Speech Integration
DeerFlow now includes a text-to-speech (TTS) feature that lets you convert research reports to speech. It uses the Volcengine TTS API to generate high-quality audio from text; characteristics such as speed, volume, and pitch are customizable.
### Using the TTS API
You can access the TTS feature through the `/api/tts` endpoint:
```bash
# Example API call using curl
curl --location 'http://localhost:8000/api/tts' \
--header 'Content-Type: application/json' \
--data '{
"text": "This is a test of the text-to-speech feature.",
"speed_ratio": 1.0,
"volume_ratio": 1.0,
"pitch_ratio": 1.0
}' \
--output speech.mp3
```
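The same call can be made from Python with only the standard library. A minimal sketch, assuming the backend is running locally on port 8000 as in the curl example above:

```python
import json
import urllib.request

def build_tts_payload(text, speed_ratio=1.0, volume_ratio=1.0, pitch_ratio=1.0):
    """Build the JSON payload documented for the /api/tts endpoint."""
    return {
        "text": text,
        "speed_ratio": speed_ratio,
        "volume_ratio": volume_ratio,
        "pitch_ratio": pitch_ratio,
    }

def synthesize(text, out_path="speech.mp3", base_url="http://localhost:8000"):
    """POST the payload to /api/tts and save the MP3 response body."""
    data = json.dumps(build_tts_payload(text)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/api/tts",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp, open(out_path, "wb") as f:
        f.write(resp.read())
```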
## Development
### Testing
@@ -297,11 +315,15 @@ pip install -U "langgraph-cli[inmem]"
langgraph dev
```
After starting the LangGraph server, you will see several URLs in the terminal:
- API: http://127.0.0.1:2024
- Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
- API Docs: http://127.0.0.1:2024/docs
- API: <http://127.0.0.1:2024>
- Studio UI: <https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024>
- API Docs: <http://127.0.0.1:2024/docs>
Open the Studio UI link in your browser to access the debugging interface.
@@ -315,7 +337,7 @@ In the Studio UI, you can:
4. Inspect the inputs and outputs of each component to debug issues
5. Provide feedback during the planning stage to refine the research plan
When you submit a research topic in the Studio UI, you can see the entire workflow execution, including:
- The planning stage, where the research plan is created
- The feedback loop, where you can revise the plan
@@ -327,6 +349,7 @@ When you submit a research topic in the Studio UI, you can see the entire workflow
DeerFlow supports LangSmith tracing to help you debug and monitor your workflows. To enable LangSmith tracing:
1. Make sure your `.env` file contains the following settings (see `.env.example`):
```bash
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
@@ -335,6 +358,7 @@ DeerFlow supports LangSmith tracing to help you debug and monitor your
```
2. Start LangSmith tracing by running the following command:
```bash
langgraph dev
```
@@ -496,9 +520,8 @@ DeerFlow includes a human-in-the-loop mechanism that allows you to
3. **Auto-acceptance**: You can enable auto-acceptance to skip the review process:
- Via API: Set `auto_accepted_plan: true` in your request
4. **API Integration**: When using the API, you can provide feedback via the `feedback` parameter:
```json
{
"messages": [
@@ -12,13 +12,15 @@
**DeerFlow** (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) is a community-driven Deep Research framework built on the incredible work of the open-source community. Our goal is to combine language models with specialized tools for tasks such as web search, crawling, and Python code execution, while giving back to the community that made this possible.
Currently, DeerFlow has officially joined Volcengine's FaaS Application Center. Users can try it online through the experience link to get an intuitive feel for its powerful features and convenient operation. To meet the deployment needs of different users, DeerFlow also supports one-click deployment based on Volcengine. Click the deployment link to quickly complete the deployment process and start an efficient research journey.
Please visit [our official website](https://deerflow.tech/) for more details.
## Demo
### Video
https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e
<https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e>
In this demo, we showcase how to use DeerFlow to:
@@ -146,13 +148,12 @@ Explore more details in the [`web`](./web/) directory.
## Supported Search Engines
DeerFlow supports multiple search engines that can be configured in your `.env` file using the `SEARCH_API` variable:
- **Tavily** (default): A specialized search API for AI applications
- Requires `TAVILY_API_KEY` in your `.env` file
- Sign up at: https://app.tavily.com/home
- Sign up at: <https://app.tavily.com/home>
- **DuckDuckGo**: Privacy-focused search engine
@@ -161,7 +162,7 @@ DeerFlow supports multiple search engines that can be configured in your
- **Brave Search**: Privacy-focused search engine with advanced features
- Requires `BRAVE_SEARCH_API_KEY` in your `.env` file
- Sign up at: https://brave.com/search/api/
- Sign up at: <https://brave.com/search/api/>
- **Arxiv**: Scientific paper search for academic research
- No API key required
@@ -202,7 +203,6 @@ SEARCH_API=tavily
- 🧠 **Human-in-the-Loop**
- Supports interactive modification of research plans using natural language
- Supports auto-acceptance of research plans
@@ -331,9 +331,9 @@ langgraph dev
After starting the LangGraph server, you will see several URLs in your terminal:
- API: http://127.0.0.1:2024
- Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
- API Docs: http://127.0.0.1:2024/docs
- API: <http://127.0.0.1:2024>
- Studio UI: <https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024>
- API Docs: <http://127.0.0.1:2024/docs>
Open the Studio UI link in your browser to access the debugging interface.
@@ -341,7 +341,6 @@ Open the Studio UI link in your browser to access the debugging
In the Studio UI, you can:
1. Visualize the workflow graph and see how its components connect
2. Trace execution in real time and see how data flows through the system
3. Inspect the state at each step of the workflow
@@ -389,7 +388,7 @@ docker compose build
docker compose up
```
## Examples
The following examples demonstrate DeerFlow's capabilities:
@@ -492,7 +491,8 @@ DeerFlow includes a human-in-the-loop mechanism that allows you to review,
- Via API: Set `auto_accepted_plan: true` in your request
4. **API Integration**: When using the API, you can provide feedback via the `feedback` parameter:
```json
{
"messages": [{ "role": "user", "content": "What is quantum computing?" }],
@@ -11,13 +11,15 @@
**DeerFlow** (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) is a community-driven Deep Research framework built on the impressive work of the open-source community. Our goal is to combine language models with specialized tools for tasks such as web search, crawling, and Python code execution, while giving back to the community that made this possible.
Currently, DeerFlow has officially joined Volcengine's FaaS Application Center. Users can try it online through the experience link to get an intuitive feel for its powerful features and convenient operation. To meet the deployment needs of different users, DeerFlow also supports one-click deployment based on Volcengine. Click the deployment link to quickly complete the deployment process and start an efficient research journey.
Please visit [our official website](https://deerflow.tech/) for more details.
## Demo
### Video
https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e
<https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e>
In this demo, we show how to use DeerFlow to:
@@ -148,7 +150,7 @@ DeerFlow supports multiple search engines that can be configured
- **Tavily** (default): A specialized search API for AI applications
- Requires `TAVILY_API_KEY` in your `.env` file
- Sign up at: https://app.tavily.com/home
- Sign up at: <https://app.tavily.com/home>
- **DuckDuckGo**: Privacy-focused search engine
@@ -157,7 +159,7 @@ DeerFlow supports multiple search engines that can be configured
- **Brave Search**: Privacy-focused search engine with advanced features
- Requires `BRAVE_SEARCH_API_KEY` in your `.env` file
- Sign up at: https://brave.com/search/api/
- Sign up at: <https://brave.com/search/api/>
- **Arxiv**: Scientific paper search for academic research
- No API key required
@@ -323,9 +325,9 @@ langgraph dev
After starting the LangGraph server, you will see several URLs in the terminal:
- API: http://127.0.0.1:2024
- Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
- API Docs: http://127.0.0.1:2024/docs
- API: <http://127.0.0.1:2024>
- Studio UI: <https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024>
- API Docs: <http://127.0.0.1:2024/docs>
Open the Studio UI link in your browser to access the debugging interface.
@@ -351,6 +353,7 @@ langgraph dev
DeerFlow supports LangSmith tracing to help you debug and monitor your workflows. To enable LangSmith tracing:
1. Make sure your `.env` file has the following settings (see `.env.example`):
```bash
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
@@ -359,6 +362,7 @@ DeerFlow supports LangSmith tracing to help you debug and monitor your
```
2. Start tracing and visualize the graph locally with LangSmith by running:
```bash
langgraph dev
```
@@ -502,6 +506,7 @@ DeerFlow includes a human-in-the-loop mechanism that
- Via API: Set `auto_accepted_plan: true` in your request
4. **API Integration**: When using the API, you can provide feedback via the `feedback` parameter:
```json
{
"messages": [{ "role": "user", "content": "What is quantum computing?" }],
@@ -9,13 +9,15 @@
**DeerFlow** (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) is a community-driven deep research framework built on the outstanding work of the open-source community. Our goal is to combine language models with specialized tools such as web search, crawling, and Python code execution, while giving back to the community that made this possible.
Currently, DeerFlow has officially joined [Volcengine's FaaS Application Center](https://console.volcengine.com/vefaas/region:vefaas+cn-beijing/market). Users can try it online via the [experience link](https://console.volcengine.com/vefaas/region:vefaas+cn-beijing/market/deerflow/?channel=github&source=deerflow) to get an intuitive feel for its powerful features and convenient operation. To meet different users' deployment needs, DeerFlow also supports one-click deployment on Volcengine; click the [deployment link](https://console.volcengine.com/vefaas/region:vefaas+cn-beijing/application/create?templateId=683adf9e372daa0008aaed5c&channel=github&source=deerflow) to quickly complete the deployment process and start an efficient research journey.
Please visit [DeerFlow's official website](https://deerflow.tech/) for more details.
## Demo
### Video
https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e
<https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e>
In this demo, we show how to use DeerFlow to:
@@ -44,7 +46,7 @@ https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e
- [❓ FAQ](#常见问题)
- [📜 License](#许可证)
- [💖 Acknowledgments](#致谢)
- [⭐ Star History](#star-history)
## Quick Start
@@ -106,7 +108,7 @@ pnpm install
See the [Configuration Guide](docs/configuration_guide.md) for more details.
> [!NOTE]
> Before starting the project, read the guide carefully and update the configurations to match your specific settings and requirements.
### Console UI
@@ -121,8 +123,7 @@ uv run main.py
### Web UI
This project also includes a Web UI, offering a more dynamic and engaging interactive experience.
> [!NOTE]
> You need to install the Web UI's dependencies first.
```bash
@@ -140,21 +141,20 @@ bootstrap.bat -d
## Supported Search Engines
### Public Search Engines
DeerFlow supports multiple search engines, configurable via the `SEARCH_API` variable in the `.env` file:
- **Tavily** (default): A professional search API designed for AI applications
- Requires `TAVILY_API_KEY` in your `.env` file
- Sign up at: https://app.tavily.com/home
- Sign up at: <https://app.tavily.com/home>
- **DuckDuckGo**: Privacy-focused search engine
- No API key required
- **Brave Search**: Privacy-focused search engine with advanced features
- Requires `BRAVE_SEARCH_API_KEY` in your `.env` file
- Sign up at: https://brave.com/search/api/
- Sign up at: <https://brave.com/search/api/>
- **Arxiv**: Scientific paper search for academic research
- No API key required
@@ -167,6 +167,30 @@ DeerFlow supports multiple search engines, configurable via the `SEARCH_API` variable
SEARCH_API=tavily
```
### Private Knowledge Base Engines
DeerFlow supports retrieval over private-domain knowledge. You can upload documents to several private knowledge bases for use during research. Currently supported private knowledge bases:
- **[RAGFlow](https://ragflow.io/docs/dev/)**: An open-source knowledge base engine based on retrieval-augmented generation
```
# Configure following the example in .env.example
RAG_PROVIDER=ragflow
RAGFLOW_API_URL="http://localhost:9388"
RAGFLOW_API_KEY="ragflow-xxx"
RAGFLOW_RETRIEVAL_SIZE=10
```
- **[VikingDB Knowledge Base](https://www.volcengine.com/docs/84313/1254457)**: A public-cloud knowledge base engine provided by Volcengine
> Note: obtain your account AK/SK from [Volcengine](https://www.volcengine.com/docs/84313/1254485) first
```
# Configure following the example in .env.example
RAG_PROVIDER=vikingdb_knowledge_base
VIKINGDB_KNOWLEDGE_BASE_API_URL="api-knowledgebase.mlp.cn-beijing.volces.com"
VIKINGDB_KNOWLEDGE_BASE_API_AK="volcengine-ak-xxx"
VIKINGDB_KNOWLEDGE_BASE_API_SK="volcengine-sk-xxx"
VIKINGDB_KNOWLEDGE_BASE_RETRIEVAL_SIZE=15
```
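Both knowledge bases above are selected through the single `RAG_PROVIDER` variable. A simplified sketch of that dispatch: the provider names match the documented values, but the function itself is illustrative, not DeerFlow's actual code.

```python
# Documented RAG_PROVIDER values
SUPPORTED_PROVIDERS = {"ragflow", "vikingdb_knowledge_base"}

def select_rag_provider(env):
    """Return the configured RAG provider, or None when retrieval is disabled."""
    provider = env.get("RAG_PROVIDER")
    if provider is None:
        return None  # private knowledge base retrieval is off by default
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(f"Unsupported RAG provider: {provider}")
    return provider
```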
## Features
### Core Capabilities
@@ -180,10 +204,14 @@ SEARCH_API=tavily
### Tools and MCP Integration
- 🔍 **Search and Retrieval**
- Web search via Tavily, Brave Search, and more
- Crawling with Jina
- Advanced content extraction
- Supports retrieval from designated private knowledge bases
- 📃 **RAG Integration**
- Supports [RAGFlow](https://github.com/infiniflow/ragflow) knowledge bases
- Supports the [VikingDB](https://www.volcengine.com/docs/84313/1254457) Volcengine knowledge base
- 🔗 **Seamless MCP Integration**
- Extends capabilities such as private-domain access, knowledge graphs, and web browsing
@@ -192,7 +220,6 @@ SEARCH_API=tavily
### Human Collaboration
- 🧠 **Human-in-the-Loop**
- Supports interactive modification of research plans using natural language
- Supports automatic acceptance of research plans
@@ -231,16 +258,36 @@ DeerFlow implements a modular multi-agent system architecture designed for automated
- Manages the research flow and decides when to generate the final report
3. **Research Team**: A collection of specialized agents that execute the plan:
- **Researcher**: Performs web searches and information gathering using tools such as search engines, crawlers, and even MCP services.
- **Coder**: Handles code analysis, execution, and technical tasks using the Python REPL tool.
Each agent has access to specific tools optimized for its role and operates within the LangGraph framework
4. **Reporter**: The final-stage processor for research output
- Aggregates the research team's findings
- Processes and organizes the collected information
- Generates comprehensive research reports
## Text-to-Speech Integration
DeerFlow now includes a text-to-speech (TTS) feature that lets you convert research reports to speech. It uses the Volcengine TTS API to generate high-quality audio from text; characteristics such as speed, volume, and pitch are customizable.
### Using the TTS API
You can access the TTS feature through the `/api/tts` endpoint:
```bash
# Example API call using curl
curl --location 'http://localhost:8000/api/tts' \
--header 'Content-Type: application/json' \
--data '{
"text": "This is a test of the text-to-speech feature.",
"speed_ratio": 1.0,
"volume_ratio": 1.0,
"pitch_ratio": 1.0
}' \
--output speech.mp3
```
## Development
### Testing
@@ -299,9 +346,9 @@ langgraph dev
After starting the LangGraph server, you will see several URLs in the terminal:
- API: http://127.0.0.1:2024
- Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
- API Docs: http://127.0.0.1:2024/docs
- API: <http://127.0.0.1:2024>
- Studio UI: <https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024>
- API Docs: <http://127.0.0.1:2024/docs>
Open the Studio UI link in your browser to access the debugging interface.
@@ -327,6 +374,7 @@ langgraph dev
DeerFlow supports LangSmith tracing to help you debug and monitor workflows. To enable LangSmith tracing:
1. Make sure your `.env` file contains the following settings (see `.env.example`):
```bash
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
@@ -335,6 +383,7 @@ DeerFlow supports LangSmith tracing to help you debug and monitor workflows
```
2. Start LangSmith tracing locally by running the following command:
```bash
langgraph dev
```
@@ -377,7 +426,7 @@ docker compose up
## Text-to-Speech Integration
DeerFlow now includes a text-to-speech (TTS) feature that lets you convert research reports to speech. It uses the Volcengine TTS API to generate high-quality audio from text; characteristics such as speed, volume, and pitch are customizable.
### Using the TTS API
@@ -403,17 +452,14 @@ curl --location 'http://localhost:8000/api/tts' \
### Research Reports
1. **OpenAI Sora Report** - Analysis of OpenAI's Sora AI tool
- Discusses features, access, prompt engineering, limitations, and ethical considerations
- [View the full report](examples/openai_sora_report.md)
2. **Google's Agent to Agent Protocol Report** - Overview of Google's Agent to Agent (A2A) protocol
- Discusses its role in AI agent communication and its relationship to Anthropic's Model Context Protocol (MCP)
- [View the full report](examples/what_is_agent_to_agent_protocol.md)
3. **What is MCP** - A comprehensive analysis of the term "MCP" across multiple contexts
- Explores Model Context Protocol in AI, Monocalcium Phosphate in chemistry, and Micro-channel Plate in electronics
- [View the full report](examples/what_is_mcp.md)
@@ -424,17 +470,14 @@ curl --location 'http://localhost:8000/api/tts' \
- [View the full report](examples/bitcoin_price_fluctuation.md)
5. **What is an LLM?** - An in-depth exploration of large language models
- Discusses architecture, training, applications, and ethical considerations
- [View the full report](examples/what_is_llm.md)
6. **How to use Claude for deep research?** - Best practices and workflows for using Claude in deep research
- Covers prompt engineering, data analysis, and integration with other tools
- [View the full report](examples/how_to_use_claude_deep_research.md)
7. **AI Adoption in Healthcare: Influencing Factors** - Analysis of the factors influencing AI adoption in healthcare
- Discusses AI technologies, data quality, ethical considerations, economic evaluation, organizational readiness, and digital infrastructure
- [View the full report](examples/AI_adoption_in_healthcare.md)
@@ -495,10 +538,10 @@ DeerFlow includes a human-in-the-loop mechanism that lets you review the research plan before executing it
- The system will incorporate your feedback and generate a revised plan
3. **Auto-acceptance**: You can enable auto-acceptance to skip the review process:
- Via API: Set `auto_accepted_plan: true` in your request
4. **API Integration**: When using the API, you can provide feedback via the `feedback` parameter:
```json
{
"messages": [{ "role": "user", "content": "What is quantum computing?" }],
@@ -1,9 +1,40 @@
# [!NOTE]
# Read the `docs/configuration_guide.md` carefully, and update the configurations to match your specific settings and requirements.
# - Replace `api_key` with your own credentials
# - Replace `base_url` and `model` name if you want to use a custom model
# Read the `docs/configuration_guide.md` carefully, and update the
# configurations to match your specific settings and requirements.
# - Replace `api_key` with your own credentials.
# - Replace `base_url` and `model` name if you want to use a custom model.
# - Set `verify_ssl` to `false` if your LLM server uses self-signed certificates
# - A restart is required every time you change the `config.yaml` file.
BASIC_MODEL:
base_url: https://ark.cn-beijing.volces.com/api/v3
model: "doubao-1-5-pro-32k-250115"
api_key: xxxx
# max_retries: 3 # Maximum number of retries for LLM calls
# verify_ssl: false # Uncomment this line to disable SSL certificate verification for self-signed certificates
# Reasoning model is optional.
# Uncomment the following settings if you want to use reasoning model
# for planning.
# REASONING_MODEL:
# base_url: https://ark.cn-beijing.volces.com/api/v3
# model: "doubao-1-5-thinking-pro-m-250428"
# api_key: xxxx
# max_retries: 3 # Maximum number of retries for LLM calls
# OTHER SETTINGS:
# Search engine configuration (Only supports Tavily currently)
# SEARCH_ENGINE:
# engine: tavily
# # Only include results from these domains
# include_domains:
# - example.com
# - trusted-news.com
# - reliable-source.org
# - gov.cn
# - edu.cn
# # Exclude results from these domains
# exclude_domains:
# - example.com
@@ -9,7 +9,7 @@ services:
env_file:
- .env
volumes:
- ./conf.yaml:/app/conf.yaml
- ./conf.yaml:/app/conf.yaml:ro
restart: unless-stopped
networks:
- deer-flow-network
@@ -11,7 +11,7 @@ cp conf.yaml.example conf.yaml
## Which models does DeerFlow support?
In DeerFlow, currently we only support non-reasoning models, which means models like OpenAI's o1/o3 or DeepSeek's R1 are not supported yet, but we will add support for them in the future.
In DeerFlow, we currently only support non-reasoning models. This means models like OpenAI's o1/o3 or DeepSeek's R1 are not supported yet, but we plan to add support for them in the future. Additionally, all Gemma-3 models are currently unsupported due to the lack of tool usage capabilities.
### Supported Models
@@ -58,15 +58,31 @@ BASIC_MODEL:
api_key: YOUR_API_KEY
```
### How to use Ollama models?
### How to use models with self-signed SSL certificates?
DeerFlow supports the integration of Ollama models. You can refer to [litellm Ollama](https://docs.litellm.ai/docs/providers/ollama). <br>
The following is a configuration example of `conf.yaml` for using Ollama models:
If your LLM server uses self-signed SSL certificates, you can disable SSL certificate verification by adding the `verify_ssl: false` parameter to your model configuration:
```yaml
BASIC_MODEL:
model: "ollama/ollama-model-name"
base_url: "http://localhost:11434" # Local service address of Ollama, which can be started/viewed via ollama serve
base_url: "https://your-llm-server.com/api/v1"
model: "your-model-name"
api_key: YOUR_API_KEY
verify_ssl: false # Disable SSL certificate verification for self-signed certificates
```
> [!WARNING]
> Disabling SSL certificate verification reduces security and should only be used in development environments or when you trust the LLM server. In production environments, it's recommended to use properly signed SSL certificates.
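How such a flag typically reaches the HTTP layer can be sketched as follows. `make_client_kwargs` is a hypothetical helper, not part of DeerFlow's code; it only illustrates that `verify_ssl` defaults to on and must be set to `false` explicitly:

```python
def make_client_kwargs(model_conf):
    """Translate a model config dict into HTTP-client keyword arguments."""
    kwargs = {"base_url": model_conf["base_url"]}
    # verify_ssl defaults to True; only an explicit false disables verification
    if model_conf.get("verify_ssl", True) is False:
        kwargs["verify"] = False
    return kwargs
```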
### How to use Ollama models?
DeerFlow supports the integration of Ollama models. You can refer to [litellm Ollama](https://docs.litellm.ai/docs/providers/ollama). <br>
The following is a configuration example of `conf.yaml` for using Ollama models (you may need to run `ollama serve` first):
```yaml
BASIC_MODEL:
model: "model-name" # Model name; it must support the completions API (important), e.g. qwen3:8b, mistral-small3.1:24b, qwen2.5:3b
base_url: "http://localhost:11434/v1" # Local service address of Ollama, which can be started/viewed via ollama serve
api_key: "whatever" # Mandatory; Ollama ignores the value, so any placeholder string works
```
### How to use OpenRouter models?
@@ -89,13 +105,38 @@ BASIC_MODEL:
Note: The available models and their exact names may change over time. Please verify the currently available models and their correct identifiers in [OpenRouter's official documentation](https://openrouter.ai/docs).
### How to use Azure models?
DeerFlow supports the integration of Azure models. You can refer to [litellm Azure](https://docs.litellm.ai/docs/providers/azure). Configuration example of `conf.yaml`:
### How to use Azure OpenAI chat models?
DeerFlow supports the integration of Azure OpenAI chat models. You can refer to [AzureChatOpenAI](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.azure.AzureChatOpenAI.html). Configuration example of `conf.yaml`:
```yaml
BASIC_MODEL:
model: "azure/gpt-4o-2024-08-06"
api_base: $AZURE_API_BASE
api_version: $AZURE_API_VERSION
api_key: $AZURE_API_KEY
azure_endpoint: $AZURE_OPENAI_ENDPOINT
api_version: $OPENAI_API_VERSION
api_key: $AZURE_OPENAI_API_KEY
```
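Resolving the three `$…` placeholders above from the environment before constructing the chat model can be sketched like this; only the variable names come from the example, the resolver itself is illustrative:

```python
def resolve_azure_settings(env):
    """Fill the conf.yaml placeholders from environment variables."""
    settings = {
        "model": "azure/gpt-4o-2024-08-06",
        "azure_endpoint": env.get("AZURE_OPENAI_ENDPOINT"),
        "api_version": env.get("OPENAI_API_VERSION"),
        "api_key": env.get("AZURE_OPENAI_API_KEY"),
    }
    missing = [k for k, v in settings.items() if v is None]
    if missing:
        raise KeyError(f"Missing Azure settings: {missing}")
    return settings
```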
## About Search Engine
### How to control search domains for Tavily?
DeerFlow allows you to control which domains are included or excluded in Tavily search results through the configuration file. This helps improve search result quality and reduce hallucinations by focusing on trusted sources.
Tip: only Tavily is currently supported.
You can configure domain filtering in your `conf.yaml` file as follows:
```yaml
SEARCH_ENGINE:
engine: tavily
# Only include results from these domains (whitelist)
include_domains:
- trusted-news.com
- gov.org
- reliable-source.edu
# Exclude results from these domains (blacklist)
exclude_domains:
- unreliable-site.com
- spam-domain.net
@@ -1,4 +1,13 @@
# MCP Integrations (Beta)
This feature is disabled by default. You can enable it by setting the environment variable ENABLE_MCP_SERVER_CONFIGURATION to true.
> [!WARNING]
> Make sure your front-end and back-end are secured in a managed environment before enabling this feature.
> Otherwise, your system could be compromised.
## Example of MCP Server Configuration
@@ -140,7 +140,11 @@ if __name__ == "__main__":
if args.query:
user_query = " ".join(args.query)
else:
user_query = input("Enter your query: ")
# Loop until user provides non-empty input
while True:
user_query = input("Enter your query: ")
if user_query.strip():  # reject empty or whitespace-only input
break
# Run the agent workflow with the provided parameters
ask(
@@ -32,18 +32,25 @@ dependencies = [
"arxiv>=2.2.0",
"mcp>=1.6.0",
"langchain-mcp-adapters>=0.0.9",
"langchain-deepseek>=0.1.3",
"volcengine>=1.0.191",
]
[project.optional-dependencies]
dev = [
"ruff",
"black>=24.2.0",
"langgraph-cli[inmem]>=0.2.10",
]
test = [
"pytest>=7.4.0",
"pytest-cov>=4.1.0",
"pytest-asyncio>=1.0.0",
]
[tool.uv]
required-version = ">=0.6.15"
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
@@ -1,8 +1,8 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
from .tools import SELECTED_SEARCH_ENGINE, SearchEngine
from .loader import load_yaml_config
from .tools import SELECTED_SEARCH_ENGINE, SearchEngine
from .questions import BUILT_IN_QUESTIONS, BUILT_IN_QUESTIONS_ZH_CN
from dotenv import load_dotenv
@@ -11,7 +11,7 @@ from dotenv import load_dotenv
load_dotenv()
# Team configuration
TEAM_MEMBER_CONFIGRATIONS = {
TEAM_MEMBER_CONFIGURATIONS = {
"researcher": {
"name": "researcher",
"desc": (
@@ -36,14 +36,15 @@ TEAM_MEMBER_CONFIGRATIONS = {
},
}
TEAM_MEMBERS = list(TEAM_MEMBER_CONFIGRATIONS.keys())
TEAM_MEMBERS = list(TEAM_MEMBER_CONFIGURATIONS.keys())
__all__ = [
# Other configurations
"TEAM_MEMBERS",
"TEAM_MEMBER_CONFIGRATIONS",
"TEAM_MEMBER_CONFIGURATIONS",
"SELECTED_SEARCH_ENGINE",
"SearchEngine",
"BUILT_IN_QUESTIONS",
"BUILT_IN_QUESTIONS_ZH_CN",
"load_yaml_config",
]
@@ -23,6 +23,7 @@ class Configuration:
max_search_results: int = 3 # Maximum number of search results
mcp_settings: dict = None # MCP settings, including dynamic loaded tools
report_style: str = ReportStyle.ACADEMIC.value # Report style
enable_deep_thinking: bool = False # Whether to enable deep thinking
@classmethod
def from_runnable_config(
@@ -21,6 +21,7 @@ SELECTED_SEARCH_ENGINE = os.getenv("SEARCH_API", SearchEngine.TAVILY.value)
class RAGProvider(enum.Enum):
RAGFLOW = "ragflow"
VIKINGDB_KNOWLEDGE_BASE = "vikingdb_knowledge_base"
SELECTED_RAG_PROVIDER = os.getenv("RAG_PROVIDER")
@@ -1,7 +1,6 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import sys
from .article import Article
from .jina_client import JinaClient
@@ -22,14 +22,23 @@ def continue_to_running_research_team(state: State):
current_plan = state.get("current_plan")
if not current_plan or not current_plan.steps:
return "planner"
if all(step.execution_res for step in current_plan.steps):
return "planner"
# Find first incomplete step
incomplete_step = None
for step in current_plan.steps:
if not step.execution_res:
incomplete_step = step
break
if step.step_type and step.step_type == StepType.RESEARCH:
if not incomplete_step:
return "planner"
if incomplete_step.step_type == StepType.RESEARCH:
return "researcher"
if step.step_type and step.step_type == StepType.PROCESSING:
if incomplete_step.step_type == StepType.PROCESSING:
return "coder"
return "planner"
+15 -11
@@ -101,8 +101,10 @@ def planner_node(
}
]
if AGENT_LLM_MAP["planner"] == "basic":
llm = get_llm_by_type(AGENT_LLM_MAP["planner"]).with_structured_output(
if configurable.enable_deep_thinking:
llm = get_llm_by_type("reasoning")
elif AGENT_LLM_MAP["planner"] == "basic":
llm = get_llm_by_type("basic").with_structured_output(
Plan,
method="json_mode",
)
@@ -114,7 +116,7 @@ def planner_node(
return Command(goto="reporter")
full_response = ""
if AGENT_LLM_MAP["planner"] == "basic":
if AGENT_LLM_MAP["planner"] == "basic" and not configurable.enable_deep_thinking:
response = llm.invoke(messages)
full_response = response.model_dump_json(indent=4, exclude_none=True)
else:
@@ -132,7 +134,7 @@ def planner_node(
return Command(goto="reporter")
else:
return Command(goto="__end__")
if curr_plan.get("has_enough_context"):
if isinstance(curr_plan, dict) and curr_plan.get("has_enough_context"):
logger.info("Planner response has enough context.")
new_plan = Plan.model_validate(curr_plan)
return Command(
@@ -184,11 +186,9 @@ def human_feedback_node(
plan_iterations += 1
# parse the plan
new_plan = json.loads(current_plan)
if new_plan["has_enough_context"]:
goto = "reporter"
except json.JSONDecodeError:
logger.warning("Planner response is not a valid JSON")
if plan_iterations > 0:
if plan_iterations > 1: # the plan_iterations is increased before this check
return Command(goto="reporter")
else:
return Command(goto="__end__")
@@ -243,9 +243,12 @@ def coordinator_node(
"Coordinator response contains no tool calls. Terminating workflow execution."
)
logger.debug(f"Coordinator response: {response}")
messages = state.get("messages", [])
if response.content:
messages.append(HumanMessage(content=response.content, name="coordinator"))
return Command(
update={
"messages": messages,
"locale": locale,
"research_topic": research_topic,
"resources": configurable.resources,
@@ -304,6 +307,7 @@ async def _execute_agent_step(
) -> Command[Literal["research_team"]]:
"""Helper function to execute a step using the specified agent."""
current_plan = state.get("current_plan")
plan_title = current_plan.title
observations = state.get("observations", [])
# Find the first unexecuted step
@@ -325,16 +329,16 @@ async def _execute_agent_step(
# Format completed steps information
completed_steps_info = ""
if completed_steps:
completed_steps_info = "# Existing Research Findings\n\n"
completed_steps_info = "# Completed Research Steps\n\n"
for i, step in enumerate(completed_steps):
completed_steps_info += f"## Existing Finding {i + 1}: {step.title}\n\n"
completed_steps_info += f"## Completed Step {i + 1}: {step.title}\n\n"
completed_steps_info += f"<finding>\n{step.execution_res}\n</finding>\n\n"
# Prepare the input for the agent with completed steps info
agent_input = {
"messages": [
HumanMessage(
content=f"{completed_steps_info}# Current Task\n\n## Title\n\n{current_step.title}\n\n## Description\n\n{current_step.description}\n\n## Locale\n\n{state.get('locale', 'en-US')}"
content=f"# Research Topic\n\n{plan_title}\n\n{completed_steps_info}# Current Step\n\n## Title\n\n{current_step.title}\n\n## Description\n\n{current_step.description}\n\n## Locale\n\n{state.get('locale', 'en-US')}"
)
]
}
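The reworded agent input now labels prior work as "Completed Research Steps" and leads with the plan title as the research topic. A small sketch of the completed-steps formatting (a hypothetical helper; the diff builds the string inline):

```python
def format_completed_steps(steps: list[tuple[str, str]]) -> str:
    """Format (title, execution_res) pairs in the layout used by the diff."""
    if not steps:
        return ""
    info = "# Completed Research Steps\n\n"
    for i, (title, result) in enumerate(steps):
        info += f"## Completed Step {i + 1}: {title}\n\n"
        info += f"<finding>\n{result}\n</finding>\n\n"
    return info
```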
+1
@@ -1,6 +1,7 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
from langgraph.graph import MessagesState
from src.prompts.planner_model import Plan
+94 -16
@@ -4,14 +4,32 @@
from pathlib import Path
from typing import Any, Dict
import os
import httpx
from langchain_openai import ChatOpenAI
from langchain_core.language_models import BaseChatModel
from langchain_openai import ChatOpenAI, AzureChatOpenAI
from langchain_deepseek import ChatDeepSeek
from typing import get_args
from src.config import load_yaml_config
from src.config.agents import LLMType
# Cache for LLM instances
_llm_cache: dict[LLMType, ChatOpenAI] = {}
_llm_cache: dict[LLMType, BaseChatModel] = {}
def _get_config_file_path() -> str:
"""Get the path to the configuration file."""
return str((Path(__file__).parent.parent.parent / "conf.yaml").resolve())
def _get_llm_type_config_keys() -> dict[str, str]:
"""Get mapping of LLM types to their configuration keys."""
return {
"reasoning": "REASONING_MODEL",
"basic": "BASIC_MODEL",
"vision": "VISION_MODEL",
}
def _get_env_llm_conf(llm_type: str) -> Dict[str, Any]:
@@ -29,15 +47,18 @@ def _get_env_llm_conf(llm_type: str) -> Dict[str, Any]:
return conf
def _create_llm_use_conf(llm_type: LLMType, conf: Dict[str, Any]) -> ChatOpenAI:
llm_type_map = {
"reasoning": conf.get("REASONING_MODEL", {}),
"basic": conf.get("BASIC_MODEL", {}),
"vision": conf.get("VISION_MODEL", {}),
}
llm_conf = llm_type_map.get(llm_type)
def _create_llm_use_conf(llm_type: LLMType, conf: Dict[str, Any]) -> BaseChatModel:
"""Create LLM instance using configuration."""
llm_type_config_keys = _get_llm_type_config_keys()
config_key = llm_type_config_keys.get(llm_type)
if not config_key:
raise ValueError(f"Unknown LLM type: {llm_type}")
llm_conf = conf.get(config_key, {})
if not isinstance(llm_conf, dict):
raise ValueError(f"Invalid LLM Conf: {llm_type}")
raise ValueError(f"Invalid LLM configuration for {llm_type}: {llm_conf}")
# Get configuration from environment variables
env_conf = _get_env_llm_conf(llm_type)
@@ -45,28 +66,85 @@ def _create_llm_use_conf(llm_type: LLMType, conf: Dict[str, Any]) -> ChatOpenAI:
merged_conf = {**llm_conf, **env_conf}
if not merged_conf:
raise ValueError(f"Unknown LLM Conf: {llm_type}")
raise ValueError(f"No configuration found for LLM type: {llm_type}")
return ChatOpenAI(**merged_conf)
# Add max_retries to handle rate limit errors
if "max_retries" not in merged_conf:
merged_conf["max_retries"] = 3
if llm_type == "reasoning":
merged_conf["api_base"] = merged_conf.pop("base_url", None)
# Handle SSL verification settings
verify_ssl = merged_conf.pop("verify_ssl", True)
# Create custom HTTP client if SSL verification is disabled
if not verify_ssl:
http_client = httpx.Client(verify=False)
http_async_client = httpx.AsyncClient(verify=False)
merged_conf["http_client"] = http_client
merged_conf["http_async_client"] = http_async_client
if "azure_endpoint" in merged_conf or os.getenv("AZURE_OPENAI_ENDPOINT"):
return AzureChatOpenAI(**merged_conf)
if llm_type == "reasoning":
return ChatDeepSeek(**merged_conf)
else:
return ChatOpenAI(**merged_conf)
def get_llm_by_type(
llm_type: LLMType,
) -> ChatOpenAI:
) -> BaseChatModel:
"""
Get LLM instance by type. Returns cached instance if available.
"""
if llm_type in _llm_cache:
return _llm_cache[llm_type]
conf = load_yaml_config(
str((Path(__file__).parent.parent.parent / "conf.yaml").resolve())
)
conf = load_yaml_config(_get_config_file_path())
llm = _create_llm_use_conf(llm_type, conf)
_llm_cache[llm_type] = llm
return llm
def get_configured_llm_models() -> dict[str, list[str]]:
"""
Get all configured LLM models grouped by type.
Returns:
Dictionary mapping LLM type to list of configured model names.
"""
try:
conf = load_yaml_config(_get_config_file_path())
llm_type_config_keys = _get_llm_type_config_keys()
configured_models: dict[str, list[str]] = {}
for llm_type in get_args(LLMType):
# Get configuration from YAML file
config_key = llm_type_config_keys.get(llm_type, "")
yaml_conf = conf.get(config_key, {}) if config_key else {}
# Get configuration from environment variables
env_conf = _get_env_llm_conf(llm_type)
# Merge configurations, with environment variables taking precedence
merged_conf = {**yaml_conf, **env_conf}
# Check if model is configured
model_name = merged_conf.get("model")
if model_name:
configured_models.setdefault(llm_type, []).append(model_name)
return configured_models
except Exception as e:
# Log error and return empty dict to avoid breaking the application
print(f"Warning: Failed to load LLM configuration: {e}")
return {}
# In the future, we will use reasoning_llm and vl_llm for different purposes
# reasoning_llm = get_llm_by_type("reasoning")
# vl_llm = get_llm_by_type("vision")
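The merge step above gives environment variables precedence over `conf.yaml` and then backfills `max_retries`. A minimal sketch of just that precedence logic (a hypothetical helper name; the diff inlines this in `_create_llm_use_conf`):

```python
from typing import Any


def merge_llm_conf(yaml_conf: dict[str, Any], env_conf: dict[str, Any]) -> dict[str, Any]:
    """Merge YAML and environment configuration; environment values win."""
    merged = {**yaml_conf, **env_conf}
    # Add max_retries to handle rate limit errors, unless already configured
    merged.setdefault("max_retries", 3)
    return merged
```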
-1
@@ -1,7 +1,6 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
from typing import Optional
from langgraph.graph import MessagesState
+33 -17
@@ -2,12 +2,13 @@
# SPDX-License-Identifier: MIT
import logging
import re
from langchain.schema import HumanMessage, SystemMessage
from langchain.schema import HumanMessage
from src.config.agents import AGENT_LLM_MAP
from src.llms.llm import get_llm_by_type
from src.prompts.template import env, apply_prompt_template
from src.prompts.template import apply_prompt_template
from src.prompt_enhancer.graph.state import PromptEnhancerState
logger = logging.getLogger(__name__)
@@ -41,23 +42,38 @@ def prompt_enhancer_node(state: PromptEnhancerState):
# Get the response from the model
response = model.invoke(messages)
# Clean up the response - remove any extra formatting or comments
enhanced_prompt = response.content.strip()
# Extract content from response
response_content = response.content.strip()
logger.debug(f"Response content: {response_content}")
# Remove common prefixes that might be added by the model
prefixes_to_remove = [
"Enhanced Prompt:",
"Enhanced prompt:",
"Here's the enhanced prompt:",
"Here is the enhanced prompt:",
"**Enhanced Prompt**:",
"**Enhanced prompt**:",
]
# Try to extract content from XML tags first
xml_match = re.search(
r"<enhanced_prompt>(.*?)</enhanced_prompt>", response_content, re.DOTALL
)
for prefix in prefixes_to_remove:
if enhanced_prompt.startswith(prefix):
enhanced_prompt = enhanced_prompt[len(prefix) :].strip()
break
if xml_match:
# Extract content from XML tags and clean it up
enhanced_prompt = xml_match.group(1).strip()
logger.debug("Successfully extracted enhanced prompt from XML tags")
else:
# Fallback to original logic if no XML tags found
enhanced_prompt = response_content
logger.warning("No XML tags found in response, using fallback parsing")
# Remove common prefixes that might be added by the model
prefixes_to_remove = [
"Enhanced Prompt:",
"Enhanced prompt:",
"Here's the enhanced prompt:",
"Here is the enhanced prompt:",
"**Enhanced Prompt**:",
"**Enhanced prompt**:",
]
for prefix in prefixes_to_remove:
if enhanced_prompt.startswith(prefix):
enhanced_prompt = enhanced_prompt[len(prefix) :].strip()
break
logger.info("Prompt enhancement completed successfully")
logger.debug(f"Enhanced prompt: {enhanced_prompt}")
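The extraction logic above tries the `<enhanced_prompt>` tags first and only falls back to prefix stripping when no tags are found. Condensed into a standalone function (a sketch under that assumption, with an abbreviated prefix list):

```python
import re


def extract_enhanced_prompt(response_content: str) -> str:
    """Prefer content inside <enhanced_prompt> tags; otherwise strip
    common lead-in phrases the model may prepend."""
    xml_match = re.search(
        r"<enhanced_prompt>(.*?)</enhanced_prompt>", response_content, re.DOTALL
    )
    if xml_match:
        return xml_match.group(1).strip()
    # Fallback parsing when no XML tags are present
    enhanced = response_content.strip()
    for prefix in ("Enhanced Prompt:", "Enhanced prompt:"):
        if enhanced.startswith(prefix):
            enhanced = enhanced[len(prefix):].strip()
            break
    return enhanced
```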
+45 -14
@@ -52,53 +52,84 @@ You are an expert prompt engineer. Your task is to enhance user prompts to make
{% endif %}
# Output Requirements
- Output ONLY the enhanced prompt
- Do NOT include any explanations, comments, or meta-text
- Do NOT use phrases like "Enhanced Prompt:" or "Here's the enhanced version:"
- The output should be ready to use directly as a prompt
- You may include thoughts or reasoning before your final answer
- Wrap the final enhanced prompt in XML tags: <enhanced_prompt></enhanced_prompt>
- Do NOT include any explanations, comments, or meta-text within the XML tags
- Do NOT use phrases like "Enhanced Prompt:" or "Here's the enhanced version:" within the XML tags
- The content within the XML tags should be ready to use directly as a prompt
{% if report_style == "academic" %}
# Academic Style Examples
**Original**: "Write about AI"
**Enhanced**: "Conduct a comprehensive academic analysis of artificial intelligence applications across three key sectors: healthcare, education, and business. Employ a systematic literature review methodology to examine peer-reviewed sources from the past five years. Structure your analysis with: (1) theoretical framework defining AI and its taxonomies, (2) sector-specific case studies with quantitative performance metrics, (3) critical evaluation of implementation challenges and ethical considerations, (4) comparative analysis across sectors, and (5) evidence-based recommendations for future research directions. Maintain academic rigor with proper citations, acknowledge methodological limitations, and present findings with appropriate hedging language. Target length: 3000-4000 words with APA formatting."
**Enhanced**:
<enhanced_prompt>
Conduct a comprehensive academic analysis of artificial intelligence applications across three key sectors: healthcare, education, and business. Employ a systematic literature review methodology to examine peer-reviewed sources from the past five years. Structure your analysis with: (1) theoretical framework defining AI and its taxonomies, (2) sector-specific case studies with quantitative performance metrics, (3) critical evaluation of implementation challenges and ethical considerations, (4) comparative analysis across sectors, and (5) evidence-based recommendations for future research directions. Maintain academic rigor with proper citations, acknowledge methodological limitations, and present findings with appropriate hedging language. Target length: 3000-4000 words with APA formatting.
</enhanced_prompt>
**Original**: "Explain climate change"
**Enhanced**: "Provide a rigorous academic examination of anthropogenic climate change, synthesizing current scientific consensus and recent research developments. Structure your analysis as follows: (1) theoretical foundations of greenhouse effect and radiative forcing mechanisms, (2) systematic review of empirical evidence from paleoclimatic, observational, and modeling studies, (3) critical analysis of attribution studies linking human activities to observed warming, (4) evaluation of climate sensitivity estimates and uncertainty ranges, (5) assessment of projected impacts under different emission scenarios, and (6) discussion of research gaps and methodological limitations. Include quantitative data, statistical significance levels, and confidence intervals where appropriate. Cite peer-reviewed sources extensively and maintain objective, third-person academic voice throughout."
**Enhanced**:
<enhanced_prompt>
Provide a rigorous academic examination of anthropogenic climate change, synthesizing current scientific consensus and recent research developments. Structure your analysis as follows: (1) theoretical foundations of greenhouse effect and radiative forcing mechanisms, (2) systematic review of empirical evidence from paleoclimatic, observational, and modeling studies, (3) critical analysis of attribution studies linking human activities to observed warming, (4) evaluation of climate sensitivity estimates and uncertainty ranges, (5) assessment of projected impacts under different emission scenarios, and (6) discussion of research gaps and methodological limitations. Include quantitative data, statistical significance levels, and confidence intervals where appropriate. Cite peer-reviewed sources extensively and maintain objective, third-person academic voice throughout.
</enhanced_prompt>
{% elif report_style == "popular_science" %}
# Popular Science Style Examples
**Original**: "Write about AI"
**Enhanced**: "Tell the fascinating story of how artificial intelligence is quietly revolutionizing our daily lives in ways most people never realize. Take readers on an engaging journey through three surprising realms: the hospital where AI helps doctors spot diseases faster than ever before, the classroom where intelligent tutors adapt to each student's learning style, and the boardroom where algorithms are making million-dollar decisions. Use vivid analogies (like comparing neural networks to how our brains work) and real-world examples that readers can relate to. Include 'wow factor' moments that showcase AI's incredible capabilities, but also honest discussions about current limitations. Write with infectious enthusiasm while maintaining scientific accuracy, and conclude with exciting possibilities that await us in the near future. Aim for 1500-2000 words that feel like a captivating conversation with a brilliant friend."
**Enhanced**:
<enhanced_prompt>
Tell the fascinating story of how artificial intelligence is quietly revolutionizing our daily lives in ways most people never realize. Take readers on an engaging journey through three surprising realms: the hospital where AI helps doctors spot diseases faster than ever before, the classroom where intelligent tutors adapt to each student's learning style, and the boardroom where algorithms are making million-dollar decisions. Use vivid analogies (like comparing neural networks to how our brains work) and real-world examples that readers can relate to. Include 'wow factor' moments that showcase AI's incredible capabilities, but also honest discussions about current limitations. Write with infectious enthusiasm while maintaining scientific accuracy, and conclude with exciting possibilities that await us in the near future. Aim for 1500-2000 words that feel like a captivating conversation with a brilliant friend.
</enhanced_prompt>
**Original**: "Explain climate change"
**Enhanced**: "Craft a compelling narrative that transforms the complex science of climate change into an accessible and engaging story for curious readers. Begin with a relatable scenario (like why your hometown weather feels different than when you were a kid) and use this as a gateway to explore the fascinating science behind our changing planet. Employ vivid analogies - compare Earth's atmosphere to a blanket, greenhouse gases to invisible heat-trapping molecules, and climate feedback loops to a snowball rolling downhill. Include surprising facts and 'aha moments' that will make readers think differently about the world around them. Weave in human stories of scientists making discoveries, communities adapting to change, and innovative solutions being developed. Balance the serious implications with hope and actionable insights, concluding with empowering steps readers can take. Write with wonder and curiosity, making complex concepts feel approachable and personally relevant."
**Enhanced**:
<enhanced_prompt>
Craft a compelling narrative that transforms the complex science of climate change into an accessible and engaging story for curious readers. Begin with a relatable scenario (like why your hometown weather feels different than when you were a kid) and use this as a gateway to explore the fascinating science behind our changing planet. Employ vivid analogies - compare Earth's atmosphere to a blanket, greenhouse gases to invisible heat-trapping molecules, and climate feedback loops to a snowball rolling downhill. Include surprising facts and 'aha moments' that will make readers think differently about the world around them. Weave in human stories of scientists making discoveries, communities adapting to change, and innovative solutions being developed. Balance the serious implications with hope and actionable insights, concluding with empowering steps readers can take. Write with wonder and curiosity, making complex concepts feel approachable and personally relevant.
</enhanced_prompt>
{% elif report_style == "news" %}
# News Style Examples
**Original**: "Write about AI"
**Enhanced**: "Report on the current state and immediate impact of artificial intelligence across three critical sectors: healthcare, education, and business. Lead with the most newsworthy developments and recent breakthroughs that are affecting people today. Structure using inverted pyramid format: start with key findings and immediate implications, then provide essential background context, followed by detailed analysis and expert perspectives. Include specific, verifiable data points, recent statistics, and quotes from credible sources including industry leaders, researchers, and affected stakeholders. Address both benefits and concerns with balanced reporting, fact-check all claims, and provide proper attribution for all information. Focus on timeliness and relevance to current events, highlighting what's happening now and what readers need to know. Maintain journalistic objectivity while making the significance clear to a general news audience. Target 800-1200 words following AP style guidelines."
**Enhanced**:
<enhanced_prompt>
Report on the current state and immediate impact of artificial intelligence across three critical sectors: healthcare, education, and business. Lead with the most newsworthy developments and recent breakthroughs that are affecting people today. Structure using inverted pyramid format: start with key findings and immediate implications, then provide essential background context, followed by detailed analysis and expert perspectives. Include specific, verifiable data points, recent statistics, and quotes from credible sources including industry leaders, researchers, and affected stakeholders. Address both benefits and concerns with balanced reporting, fact-check all claims, and provide proper attribution for all information. Focus on timeliness and relevance to current events, highlighting what's happening now and what readers need to know. Maintain journalistic objectivity while making the significance clear to a general news audience. Target 800-1200 words following AP style guidelines.
</enhanced_prompt>
**Original**: "Explain climate change"
**Enhanced**: "Provide comprehensive news coverage of climate change that explains the current scientific understanding and immediate implications for readers. Lead with the most recent and significant developments in climate science, policy, or impacts that are making headlines today. Structure the report with: breaking developments first, essential background for understanding the issue, current scientific consensus with specific data and timeframes, real-world impacts already being observed, policy responses and debates, and what experts say comes next. Include quotes from credible climate scientists, policy makers, and affected communities. Present information objectively while clearly communicating the scientific consensus, fact-check all claims, and provide proper source attribution. Address common misconceptions with factual corrections. Focus on what's happening now, why it matters to readers, and what they can expect in the near future. Follow journalistic standards for accuracy, balance, and timeliness."
**Enhanced**:
<enhanced_prompt>
Provide comprehensive news coverage of climate change that explains the current scientific understanding and immediate implications for readers. Lead with the most recent and significant developments in climate science, policy, or impacts that are making headlines today. Structure the report with: breaking developments first, essential background for understanding the issue, current scientific consensus with specific data and timeframes, real-world impacts already being observed, policy responses and debates, and what experts say comes next. Include quotes from credible climate scientists, policy makers, and affected communities. Present information objectively while clearly communicating the scientific consensus, fact-check all claims, and provide proper source attribution. Address common misconceptions with factual corrections. Focus on what's happening now, why it matters to readers, and what they can expect in the near future. Follow journalistic standards for accuracy, balance, and timeliness.
</enhanced_prompt>
{% elif report_style == "social_media" %}
# Social Media Style Examples
**Original**: "Write about AI"
**Enhanced**: "Create engaging social media content about AI that will stop the scroll and spark conversations! Start with an attention-grabbing hook like 'You won't believe what AI just did in hospitals this week 🤯' and structure as a compelling thread or post series. Include surprising facts, relatable examples (like AI helping doctors spot diseases or personalizing your Netflix recommendations), and interactive elements that encourage sharing and comments. Use strategic hashtags (#AI #Technology #Future), incorporate relevant emojis for visual appeal, and include questions that prompt audience engagement ('Have you noticed AI in your daily life? Drop examples below! 👇'). Make complex concepts digestible with bite-sized explanations, trending analogies, and shareable quotes. Include a clear call-to-action and optimize for the specific platform (Twitter threads, Instagram carousel, LinkedIn professional insights, or TikTok-style quick facts). Aim for high shareability with content that feels both informative and entertaining."
**Enhanced**:
<enhanced_prompt>
Create engaging social media content about AI that will stop the scroll and spark conversations! Start with an attention-grabbing hook like 'You won't believe what AI just did in hospitals this week 🤯' and structure as a compelling thread or post series. Include surprising facts, relatable examples (like AI helping doctors spot diseases or personalizing your Netflix recommendations), and interactive elements that encourage sharing and comments. Use strategic hashtags (#AI #Technology #Future), incorporate relevant emojis for visual appeal, and include questions that prompt audience engagement ('Have you noticed AI in your daily life? Drop examples below! 👇'). Make complex concepts digestible with bite-sized explanations, trending analogies, and shareable quotes. Include a clear call-to-action and optimize for the specific platform (Twitter threads, Instagram carousel, LinkedIn professional insights, or TikTok-style quick facts). Aim for high shareability with content that feels both informative and entertaining.
</enhanced_prompt>
**Original**: "Explain climate change"
**Enhanced**: "Develop viral-worthy social media content that makes climate change accessible and shareable without being preachy. Open with a scroll-stopping hook like 'The weather app on your phone is telling a bigger story than you think 📱🌡️' and break down complex science into digestible, engaging chunks. Use relatable comparisons (Earth's fever, atmosphere as a blanket), trending formats (before/after visuals, myth-busting series, quick facts), and interactive elements (polls, questions, challenges). Include strategic hashtags (#ClimateChange #Science #Environment), eye-catching emojis, and shareable graphics or infographics. Address common questions and misconceptions with clear, factual responses. Create content that encourages positive action rather than climate anxiety, ending with empowering steps followers can take. Optimize for platform-specific features (Instagram Stories, TikTok trends, Twitter threads) and include calls-to-action that drive engagement and sharing."
**Enhanced**:
<enhanced_prompt>
Develop viral-worthy social media content that makes climate change accessible and shareable without being preachy. Open with a scroll-stopping hook like 'The weather app on your phone is telling a bigger story than you think 📱🌡️' and break down complex science into digestible, engaging chunks. Use relatable comparisons (Earth's fever, atmosphere as a blanket), trending formats (before/after visuals, myth-busting series, quick facts), and interactive elements (polls, questions, challenges). Include strategic hashtags (#ClimateChange #Science #Environment), eye-catching emojis, and shareable graphics or infographics. Address common questions and misconceptions with clear, factual responses. Create content that encourages positive action rather than climate anxiety, ending with empowering steps followers can take. Optimize for platform-specific features (Instagram Stories, TikTok trends, Twitter threads) and include calls-to-action that drive engagement and sharing.
</enhanced_prompt>
{% else %}
# General Examples
**Original**: "Write about AI"
**Enhanced**: "Write a comprehensive 1000-word analysis of artificial intelligence's current applications in healthcare, education, and business. Include specific examples of AI tools being used in each sector, discuss both benefits and challenges, and provide insights into future trends. Structure the response with clear sections for each industry and conclude with key takeaways."
**Enhanced**:
<enhanced_prompt>
Write a comprehensive 1000-word analysis of artificial intelligence's current applications in healthcare, education, and business. Include specific examples of AI tools being used in each sector, discuss both benefits and challenges, and provide insights into future trends. Structure the response with clear sections for each industry and conclude with key takeaways.
</enhanced_prompt>
**Original**: "Explain climate change"
**Enhanced**: "Provide a detailed explanation of climate change suitable for a general audience. Cover the scientific mechanisms behind global warming, major causes including greenhouse gas emissions, observable effects we're seeing today, and projected future impacts. Include specific data and examples, and explain the difference between weather and climate. Organize the response with clear headings and conclude with actionable steps individuals can take."
**Enhanced**:
<enhanced_prompt>
Provide a detailed explanation of climate change suitable for a general audience. Cover the scientific mechanisms behind global warming, major causes including greenhouse gas emissions, observable effects we're seeing today, and projected future impacts. Include specific data and examples, and explain the difference between weather and climate. Organize the response with clear headings and conclude with actionable steps individuals can take.
</enhanced_prompt>
{% endif %}
+11 -2
@@ -1,8 +1,17 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
from .retriever import Retriever, Document, Resource
from .retriever import Retriever, Document, Resource, Chunk
from .ragflow import RAGFlowProvider
from .vikingdb_knowledge_base import VikingDBKnowledgeBaseProvider
from .builder import build_retriever
__all__ = [Retriever, Document, Resource, RAGFlowProvider, build_retriever]
__all__ = [
Retriever,
Document,
Resource,
RAGFlowProvider,
VikingDBKnowledgeBaseProvider,
Chunk,
build_retriever,
]
+3
@@ -3,12 +3,15 @@
from src.config.tools import SELECTED_RAG_PROVIDER, RAGProvider
from src.rag.ragflow import RAGFlowProvider
from src.rag.vikingdb_knowledge_base import VikingDBKnowledgeBaseProvider
from src.rag.retriever import Retriever
def build_retriever() -> Retriever | None:
if SELECTED_RAG_PROVIDER == RAGProvider.RAGFLOW.value:
return RAGFlowProvider()
elif SELECTED_RAG_PROVIDER == RAGProvider.VIKINGDB_KNOWLEDGE_BASE.value:
return VikingDBKnowledgeBaseProvider()
elif SELECTED_RAG_PROVIDER:
raise ValueError(f"Unsupported RAG provider: {SELECTED_RAG_PROVIDER}")
return None
+208
@@ -0,0 +1,208 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import os
import requests
import json
from src.rag.retriever import Chunk, Document, Resource, Retriever
from urllib.parse import urlparse
from volcengine.auth.SignerV4 import SignerV4
from volcengine.base.Request import Request
from volcengine.Credentials import Credentials
class VikingDBKnowledgeBaseProvider(Retriever):
"""
VikingDBKnowledgeBaseProvider is a provider that uses VikingDB Knowledge base API to retrieve documents.
"""
api_url: str
api_ak: str
api_sk: str
retrieval_size: int = 10
def __init__(self):
api_url = os.getenv("VIKINGDB_KNOWLEDGE_BASE_API_URL")
if not api_url:
raise ValueError("VIKINGDB_KNOWLEDGE_BASE_API_URL is not set")
self.api_url = api_url
api_ak = os.getenv("VIKINGDB_KNOWLEDGE_BASE_API_AK")
if not api_ak:
raise ValueError("VIKINGDB_KNOWLEDGE_BASE_API_AK is not set")
self.api_ak = api_ak
api_sk = os.getenv("VIKINGDB_KNOWLEDGE_BASE_API_SK")
if not api_sk:
raise ValueError("VIKINGDB_KNOWLEDGE_BASE_API_SK is not set")
self.api_sk = api_sk
retrieval_size = os.getenv("VIKINGDB_KNOWLEDGE_BASE_RETRIEVAL_SIZE")
if retrieval_size:
self.retrieval_size = int(retrieval_size)
def prepare_request(self, method, path, params=None, data=None, doseq=0):
"""
Prepare signed request using volcengine auth
"""
if params:
for key in params:
if (
type(params[key]) is int
or type(params[key]) is float
or type(params[key]) is bool
):
params[key] = str(params[key])
elif type(params[key]) is list:
if not doseq:
params[key] = ",".join(params[key])
r = Request()
r.set_shema("https")
r.set_method(method)
r.set_connection_timeout(10)
r.set_socket_timeout(10)
mheaders = {
"Accept": "application/json",
"Content-Type": "application/json",
}
r.set_headers(mheaders)
if params:
r.set_query(params)
r.set_path(path)
if data is not None:
r.set_body(json.dumps(data))
credentials = Credentials(self.api_ak, self.api_sk, "air", "cn-north-1")
SignerV4.sign(r, credentials)
return r
def query_relevant_documents(
self, query: str, resources: list[Resource] = []
) -> list[Document]:
"""
Query relevant documents from the knowledge base
"""
if not resources:
return []
all_documents = {}
for resource in resources:
resource_id, document_id = parse_uri(resource.uri)
request_params = {
"resource_id": resource_id,
"query": query,
"limit": self.retrieval_size,
"dense_weight": 0.5,
"pre_processing": {
"need_instruction": True,
"rewrite": False,
"return_token_usage": True,
},
"post_processing": {
"rerank_switch": True,
"chunk_diffusion_count": 0,
"chunk_group": True,
"get_attachment_link": True,
},
}
if document_id:
doc_filter = {"op": "must", "field": "doc_id", "conds": [document_id]}
query_param = {"doc_filter": doc_filter}
request_params["query_param"] = query_param
method = "POST"
path = "/api/knowledge/collection/search_knowledge"
info_req = self.prepare_request(
method=method, path=path, data=request_params
)
rsp = requests.request(
method=info_req.method,
url="http://{}{}".format(self.api_url, info_req.path),
headers=info_req.headers,
data=info_req.body,
)
try:
response = json.loads(rsp.text)
except json.JSONDecodeError as e:
raise ValueError(f"Failed to parse JSON response: {e}")
if response["code"] != 0:
raise ValueError(
f"Failed to query documents from resource: {response['message']}"
)
rsp_data = response.get("data", {})
if "result_list" not in rsp_data:
continue
result_list = rsp_data["result_list"]
for item in result_list:
doc_info = item.get("doc_info", {})
doc_id = doc_info.get("doc_id")
if not doc_id:
continue
if doc_id not in all_documents:
all_documents[doc_id] = Document(
id=doc_id, title=doc_info.get("doc_name"), chunks=[]
)
chunk = Chunk(
content=item.get("content", ""), similarity=item.get("score", 0.0)
)
all_documents[doc_id].chunks.append(chunk)
return list(all_documents.values())
def list_resources(self, query: str | None = None) -> list[Resource]:
"""
List resources (knowledge bases) from the knowledge base service
"""
method = "POST"
path = "/api/knowledge/collection/list"
info_req = self.prepare_request(method=method, path=path)
rsp = requests.request(
method=info_req.method,
url="http://{}{}".format(self.api_url, info_req.path),
headers=info_req.headers,
data=info_req.body,
)
try:
response = json.loads(rsp.text)
except json.JSONDecodeError as e:
raise ValueError(f"Failed to parse JSON response: {e}")
if response["code"] != 0:
raise Exception(f"Failed to list resources: {response['message']}")
resources = []
rsp_data = response.get("data", {})
collection_list = rsp_data.get("collection_list", [])
for item in collection_list:
collection_name = item.get("collection_name", "")
description = item.get("description", "")
if query and query.lower() not in collection_name.lower():
continue
resource_id = item.get("resource_id", "")
resource = Resource(
uri=f"rag://dataset/{resource_id}",
title=collection_name,
description=description,
)
resources.append(resource)
return resources
def parse_uri(uri: str) -> tuple[str, str]:
parsed = urlparse(uri)
if parsed.scheme != "rag":
raise ValueError(f"Invalid URI: {uri}")
return parsed.path.split("/")[1], parsed.fragment
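The parse_uri helper above resolves a resource id and an optional document id from the rag:// URI scheme. A self-contained sketch of its behavior, assuming URIs of the form `rag://dataset/<resource_id>#<document_id>`:

```python
from urllib.parse import urlparse

# Standalone sketch of parse_uri: the path carries the resource id,
# the fragment carries the optional document id (empty when absent).
def parse_uri(uri: str) -> tuple[str, str]:
    parsed = urlparse(uri)
    if parsed.scheme != "rag":
        raise ValueError(f"Invalid URI: {uri}")
    # parsed.path is "/<resource_id>", so index 1 after splitting on "/"
    return parsed.path.split("/")[1], parsed.fragment
```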
+74 -27
@@ -11,20 +11,20 @@ from uuid import uuid4
from fastapi import FastAPI, HTTPException, Query
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import Response, StreamingResponse
from langchain_core.messages import AIMessageChunk, ToolMessage, BaseMessage
from langchain_core.messages import AIMessageChunk, BaseMessage, ToolMessage
from langgraph.types import Command
from src.config.report_style import ReportStyle
from src.config.tools import SELECTED_RAG_PROVIDER
from src.graph.builder import build_graph_with_memory
from src.llms.llm import get_configured_llm_models
from src.podcast.graph.builder import build_graph as build_podcast_graph
from src.ppt.graph.builder import build_graph as build_ppt_graph
from src.prose.graph.builder import build_graph as build_prose_graph
from src.prompt_enhancer.graph.builder import build_graph as build_prompt_enhancer_graph
from src.prose.graph.builder import build_graph as build_prose_graph
from src.rag.builder import build_retriever
from src.rag.retriever import Resource
from src.server.chat_request import (
ChatMessage,
ChatRequest,
EnhancePromptRequest,
GeneratePodcastRequest,
@@ -32,6 +32,7 @@ from src.server.chat_request import (
GenerateProseRequest,
TTSRequest,
)
from src.server.config_request import ConfigResponse
from src.server.mcp_request import MCPServerMetadataRequest, MCPServerMetadataResponse
from src.server.mcp_utils import load_mcp_tools
from src.server.rag_request import (
@@ -52,12 +53,19 @@ app = FastAPI(
)
# Add CORS middleware
# It's recommended to load the allowed origins from an environment variable
# for better security and flexibility across different environments.
allowed_origins_str = os.getenv("ALLOWED_ORIGINS", "http://localhost:3000")
allowed_origins = [origin.strip() for origin in allowed_origins_str.split(",")]
logger.info(f"Allowed origins: {allowed_origins}")
app.add_middleware(
CORSMiddleware,
allow_origins=["*"], # Allows all origins
allow_origins=allowed_origins, # Restrict to specific origins
allow_credentials=True,
allow_methods=["*"], # Allows all methods
allow_headers=["*"], # Allows all headers
allow_methods=["GET", "POST", "OPTIONS"],  # Restrict to the required methods
allow_headers=["*"],  # Allow all headers for now; can be restricted further
)
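The ALLOWED_ORIGINS handling above can be sketched in isolation: a comma-separated environment variable is split into a clean list, falling back to the dev frontend origin when unset.

```python
import os

# Sketch of the ALLOWED_ORIGINS parsing used for the CORS middleware.
def parse_allowed_origins(default: str = "http://localhost:3000") -> list[str]:
    raw = os.getenv("ALLOWED_ORIGINS", default)
    return [origin.strip() for origin in raw.split(",")]
```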
graph = build_graph_with_memory()
@@ -65,6 +73,20 @@ graph = build_graph_with_memory()
@app.post("/api/chat/stream")
async def chat_stream(request: ChatRequest):
# Check if MCP server configuration is enabled
mcp_enabled = os.getenv("ENABLE_MCP_SERVER_CONFIGURATION", "false").lower() in [
"true",
"1",
"yes",
]
# Validate MCP settings if provided
if request.mcp_settings and not mcp_enabled:
raise HTTPException(
status_code=403,
detail="MCP server configuration is disabled. Set ENABLE_MCP_SERVER_CONFIGURATION=true to enable MCP features.",
)
thread_id = request.thread_id
if thread_id == "__default__":
thread_id = str(uuid4())
@@ -78,9 +100,10 @@ async def chat_stream(request: ChatRequest):
request.max_search_results,
request.auto_accepted_plan,
request.interrupt_feedback,
request.mcp_settings,
request.mcp_settings if mcp_enabled else {},
request.enable_background_investigation,
request.report_style,
request.enable_deep_thinking,
),
media_type="text/event-stream",
)
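The environment-flag gate at the top of chat_stream only treats the literal strings "true", "1", and "yes" (case-insensitive) as enabled; anything else, including an unset variable, disables MCP configuration. A standalone sketch:

```python
import os

# Sketch of the ENABLE_MCP_SERVER_CONFIGURATION gate used by chat_stream.
def mcp_enabled() -> bool:
    return os.getenv("ENABLE_MCP_SERVER_CONFIGURATION", "false").lower() in [
        "true",
        "1",
        "yes",
    ]
```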
@@ -98,6 +121,7 @@ async def _astream_workflow_generator(
mcp_settings: dict,
enable_background_investigation: bool,
report_style: ReportStyle,
enable_deep_thinking: bool,
):
input_ = {
"messages": messages,
@@ -125,6 +149,7 @@ async def _astream_workflow_generator(
"max_search_results": max_search_results,
"mcp_settings": mcp_settings,
"report_style": report_style.value,
"enable_deep_thinking": enable_deep_thinking,
},
stream_mode=["messages", "updates"],
subgraphs=True,
@@ -149,13 +174,21 @@ async def _astream_workflow_generator(
message_chunk, message_metadata = cast(
tuple[BaseMessage, dict[str, any]], event_data
)
# Handle empty agent tuple gracefully
agent_name = "unknown"
if agent and len(agent) > 0:
agent_name = agent[0].split(":")[0] if ":" in agent[0] else agent[0]
event_stream_message: dict[str, any] = {
"thread_id": thread_id,
"agent": agent[0].split(":")[0],
"agent": agent_name,
"id": message_chunk.id,
"role": "assistant",
"content": message_chunk.content,
}
if message_chunk.additional_kwargs.get("reasoning_content"):
event_stream_message["reasoning_content"] = message_chunk.additional_kwargs[
"reasoning_content"
]
if message_chunk.response_metadata.get("finish_reason"):
event_stream_message["finish_reason"] = message_chunk.response_metadata.get(
"finish_reason"
@@ -193,17 +226,16 @@ def _make_event(event_type: str, data: dict[str, any]):
@app.post("/api/tts")
async def text_to_speech(request: TTSRequest):
"""Convert text to speech using volcengine TTS API."""
app_id = os.getenv("VOLCENGINE_TTS_APPID", "")
if not app_id:
raise HTTPException(status_code=400, detail="VOLCENGINE_TTS_APPID is not set")
access_token = os.getenv("VOLCENGINE_TTS_ACCESS_TOKEN", "")
if not access_token:
raise HTTPException(
status_code=400, detail="VOLCENGINE_TTS_ACCESS_TOKEN is not set"
)
try:
app_id = os.getenv("VOLCENGINE_TTS_APPID", "")
if not app_id:
raise HTTPException(
status_code=400, detail="VOLCENGINE_TTS_APPID is not set"
)
access_token = os.getenv("VOLCENGINE_TTS_ACCESS_TOKEN", "")
if not access_token:
raise HTTPException(
status_code=400, detail="VOLCENGINE_TTS_ACCESS_TOKEN is not set"
)
cluster = os.getenv("VOLCENGINE_TTS_CLUSTER", "volcano_tts")
voice_type = os.getenv("VOLCENGINE_TTS_VOICE_TYPE", "BV700_V2_streaming")
@@ -241,6 +273,7 @@ async def text_to_speech(request: TTSRequest):
)
},
)
except Exception as e:
logger.exception(f"Error in TTS endpoint: {str(e)}")
raise HTTPException(status_code=500, detail=INTERNAL_SERVER_ERROR_DETAIL)
@@ -319,13 +352,9 @@ async def enhance_prompt(request: EnhancePromptRequest):
"POPULAR_SCIENCE": ReportStyle.POPULAR_SCIENCE,
"NEWS": ReportStyle.NEWS,
"SOCIAL_MEDIA": ReportStyle.SOCIAL_MEDIA,
"academic": ReportStyle.ACADEMIC,
"popular_science": ReportStyle.POPULAR_SCIENCE,
"news": ReportStyle.NEWS,
"social_media": ReportStyle.SOCIAL_MEDIA,
}
report_style = style_mapping.get(
request.report_style, ReportStyle.ACADEMIC
request.report_style.upper(), ReportStyle.ACADEMIC
)
except Exception:
# If invalid style, default to ACADEMIC
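The `.upper()` normalization above makes the style lookup case-insensitive, with ACADEMIC as the fallback. A minimal sketch using a two-member stand-in enum (the real ReportStyle has more members):

```python
from enum import Enum

# Sketch of the case-insensitive report-style lookup: the request value
# is upper-cased before the mapping, so "news" and "NEWS" resolve alike.
class ReportStyle(Enum):
    ACADEMIC = "academic"
    NEWS = "news"

_STYLE_MAPPING = {style.name: style for style in ReportStyle}

def resolve_style(value: str) -> ReportStyle:
    return _STYLE_MAPPING.get(value.upper(), ReportStyle.ACADEMIC)
```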
@@ -350,6 +379,17 @@ async def enhance_prompt(request: EnhancePromptRequest):
@app.post("/api/mcp/server/metadata", response_model=MCPServerMetadataResponse)
async def mcp_server_metadata(request: MCPServerMetadataRequest):
"""Get information about an MCP server."""
# Check if MCP server configuration is enabled
if os.getenv("ENABLE_MCP_SERVER_CONFIGURATION", "false").lower() not in [
"true",
"1",
"yes",
]:
raise HTTPException(
status_code=403,
detail="MCP server configuration is disabled. Set ENABLE_MCP_SERVER_CONFIGURATION=true to enable MCP features.",
)
try:
# Set default timeout with a longer value for this endpoint
timeout = 300 # Default to 300 seconds for this endpoint
@@ -380,10 +420,8 @@ async def mcp_server_metadata(request: MCPServerMetadataRequest):
return response
except Exception as e:
if not isinstance(e, HTTPException):
logger.exception(f"Error in MCP server metadata endpoint: {str(e)}")
raise HTTPException(status_code=500, detail=INTERNAL_SERVER_ERROR_DETAIL)
raise
logger.exception(f"Error in MCP server metadata endpoint: {str(e)}")
raise HTTPException(status_code=500, detail=INTERNAL_SERVER_ERROR_DETAIL)
@app.get("/api/rag/config", response_model=RAGConfigResponse)
@@ -399,3 +437,12 @@ async def rag_resources(request: Annotated[RAGResourceRequest, Query()]):
if retriever:
return RAGResourcesResponse(resources=retriever.list_resources(request.query))
return RAGResourcesResponse(resources=[])
@app.get("/api/config", response_model=ConfigResponse)
async def config():
"""Get the config of the server."""
return ConfigResponse(
rag=RAGConfigResponse(provider=SELECTED_RAG_PROVIDER),
models=get_configured_llm_models(),
)
+3
@@ -62,6 +62,9 @@ class ChatRequest(BaseModel):
report_style: Optional[ReportStyle] = Field(
ReportStyle.ACADEMIC, description="The style of the report"
)
enable_deep_thinking: Optional[bool] = Field(
False, description="Whether to enable deep thinking"
)
class TTSRequest(BaseModel):
+13
@@ -0,0 +1,13 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
from pydantic import BaseModel, Field
from src.server.rag_request import RAGConfigResponse
class ConfigResponse(BaseModel):
"""Response model for server config."""
rag: RAGConfigResponse = Field(..., description="The config of the RAG")
models: dict[str, list[str]] = Field(..., description="The configured models")
+1 -1
@@ -3,7 +3,7 @@
import logging
from datetime import timedelta
from typing import Any, Dict, List, Optional, Tuple
from typing import Any, Dict, List, Optional
from fastapi import HTTPException
from mcp import ClientSession, StdioServerParameters
-2
@@ -1,8 +1,6 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import os
from .crawl import crawl_tool
from .python_repl import python_repl_tool
from .retriever import get_retriever_tool
-15
@@ -60,18 +60,3 @@ def get_retriever_tool(resources: List[Resource]) -> RetrieverTool | None:
if not retriever:
return None
return RetrieverTool(retriever=retriever, resources=resources)
if __name__ == "__main__":
resources = [
Resource(
uri="rag://dataset/1c7e2ea4362911f09a41c290d4b6a7f0",
title="西游记",
description="西游记是中国古代四大名著之一,讲述了唐僧师徒四人西天取经的故事。",
)
]
retriever_tool = get_retriever_tool(resources)
print(retriever_tool.name)
print(retriever_tool.description)
print(retriever_tool.args)
print(retriever_tool.invoke("三打白骨精"))
+24 -13
@@ -1,15 +1,16 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import json
import logging
import os
from typing import List, Optional
from langchain_community.tools import BraveSearch, DuckDuckGoSearchResults
from langchain_community.tools.arxiv import ArxivQueryRun
from langchain_community.utilities import ArxivAPIWrapper, BraveSearchWrapper
from src.config import SearchEngine, SELECTED_SEARCH_ENGINE
from src.config import load_yaml_config
from src.tools.tavily_search.tavily_search_results_with_images import (
TavilySearchResultsWithImages,
)
@@ -25,18 +26,39 @@ LoggedBraveSearch = create_logged_tool(BraveSearch)
LoggedArxivSearch = create_logged_tool(ArxivQueryRun)
def get_search_config():
config = load_yaml_config("conf.yaml")
search_config = config.get("SEARCH_ENGINE", {})
return search_config
# Get the selected search tool
def get_web_search_tool(max_search_results: int):
search_config = get_search_config()
if SELECTED_SEARCH_ENGINE == SearchEngine.TAVILY.value:
# Only get and apply include/exclude domains for Tavily
include_domains: Optional[List[str]] = search_config.get("include_domains", [])
exclude_domains: Optional[List[str]] = search_config.get("exclude_domains", [])
logger.info(
f"Tavily search configuration loaded: include_domains={include_domains}, exclude_domains={exclude_domains}"
)
return LoggedTavilySearch(
name="web_search",
max_results=max_search_results,
include_raw_content=True,
include_images=True,
include_image_descriptions=True,
include_domains=include_domains,
exclude_domains=exclude_domains,
)
elif SELECTED_SEARCH_ENGINE == SearchEngine.DUCKDUCKGO.value:
return LoggedDuckDuckGoSearch(name="web_search", max_results=max_search_results)
return LoggedDuckDuckGoSearch(
name="web_search",
num_results=max_search_results,
)
elif SELECTED_SEARCH_ENGINE == SearchEngine.BRAVE_SEARCH.value:
return LoggedBraveSearch(
name="web_search",
@@ -56,14 +78,3 @@ def get_web_search_tool(max_search_results: int):
)
else:
raise ValueError(f"Unsupported search engine: {SELECTED_SEARCH_ENGINE}")
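The Tavily branch above reads optional domain filters from the YAML search config, defaulting to empty lists when the keys are absent. A small sketch of that lookup:

```python
# Sketch of the include/exclude-domain lookup for the Tavily branch:
# missing keys fall back to empty lists, so the search tool always
# receives valid (if empty) domain filters.
def get_domain_filters(search_config: dict) -> tuple[list, list]:
    include = search_config.get("include_domains", [])
    exclude = search_config.get("exclude_domains", [])
    return include, exclude
```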
if __name__ == "__main__":
results = LoggedDuckDuckGoSearch(
name="web_search", max_results=3, output_format="list"
)
print(results.name)
print(results.description)
print(results.args)
# .invoke("cute panda")
# print(json.dumps(results, indent=2, ensure_ascii=False))
@@ -1,3 +1,7 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import json
from typing import Dict, List, Optional
@@ -107,9 +111,3 @@ class EnhancedTavilySearchAPIWrapper(OriginalTavilySearchAPIWrapper):
}
clean_results.append(clean_result)
return clean_results
if __name__ == "__main__":
wrapper = EnhancedTavilySearchAPIWrapper()
results = wrapper.raw_results("cute panda", include_images=True)
print(json.dumps(results, indent=2, ensure_ascii=False))
@@ -1,3 +1,6 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import json
from typing import Dict, List, Optional, Tuple, Union
+1 -1
@@ -129,4 +129,4 @@ class VolcengineTTS:
except Exception as e:
logger.exception(f"Error in TTS API call: {str(e)}")
return {"success": False, "error": str(e), "audio_data": None}
return {"success": False, "error": "TTS API call error", "audio_data": None}
+11 -15
@@ -19,21 +19,17 @@ def repair_json_output(content: str) -> str:
str: Repaired JSON string, or original content if not JSON
"""
content = content.strip()
if content.startswith(("{", "[")) or "```json" in content or "```ts" in content:
try:
# If content is wrapped in ```json code block, extract the JSON part
if content.startswith("```json"):
content = content.removeprefix("```json")
if content.startswith("```ts"):
content = content.removeprefix("```ts")
try:
# Try to repair and parse JSON
repaired_content = json_repair.loads(content)
if not isinstance(repaired_content, dict) and not isinstance(
repaired_content, list
):
logger.warning("Repaired content is not a valid JSON object or array.")
return content
content = json.dumps(repaired_content, ensure_ascii=False)
except Exception as e:
logger.warning(f"JSON repair failed: {e}")
if content.endswith("```"):
content = content.removesuffix("```")
# Try to repair and parse JSON
repaired_content = json_repair.loads(content)
return json.dumps(repaired_content, ensure_ascii=False)
except Exception as e:
logger.warning(f"JSON repair failed: {e}")
return content
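The reworked repair_json_output above first strips any ```json or ```ts fences, then hands the remainder to json_repair.loads. The fence-stripping step can be sketched stdlib-only (plain json.loads stands in here for the third-party json_repair dependency):

```python
import json

# Simplified sketch of the fence-stripping step in repair_json_output.
def strip_json_fences(content: str) -> str:
    content = content.strip()
    if content.startswith("```json"):
        content = content.removeprefix("```json")
    if content.startswith("```ts"):
        content = content.removeprefix("```ts")
    if content.endswith("```"):
        content = content.removesuffix("```")
    return content.strip()
```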
-1
@@ -1,7 +1,6 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import asyncio
import logging
from src.graph import build_graph
-1
@@ -1,7 +1,6 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import pytest
from src.crawler import Crawler
File diff suppressed because it is too large
@@ -1,7 +1,6 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import pytest
from src.tools.python_repl import python_repl_tool
-2
@@ -101,8 +101,6 @@ def test_current_time_format():
messages = apply_prompt_template("coder", test_state)
system_content = messages[0]["content"]
# Time format should be like: Mon Jan 01 2024 12:34:56 +0000
time_format = r"\w{3} \w{3} \d{2} \d{4} \d{2}:\d{2}:\d{2}"
assert any(
line.strip().startswith("CURRENT_TIME:") for line in system_content.split("\n")
)
+18 -2
@@ -2,9 +2,7 @@
# SPDX-License-Identifier: MIT
import json
import pytest
from unittest.mock import patch, MagicMock
import uuid
import base64
from src.tools.tts import VolcengineTTS
@@ -229,3 +227,21 @@ class TestVolcengineTTS:
args, kwargs = mock_post.call_args
request_json = json.loads(args[1])
assert request_json["user"]["uid"] == str(mock_uuid_value)
@patch("src.tools.tts.requests.post")
def test_text_to_speech_request_exception(self, mock_post):
"""Test error handling when requests.post raises an exception."""
# Mock requests.post to raise an exception
mock_post.side_effect = Exception("Network error")
# Create TTS client
tts = VolcengineTTS(
appid="test_appid",
access_token="test_token",
)
# Call the method
result = tts.text_to_speech("Hello, world!")
# Verify the result
assert result["success"] is False
# The TTS error is caught and returned as a string
assert result["error"] == "TTS API call error"
assert result["audio_data"] is None
+1 -2
@@ -1,10 +1,9 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import pytest
import sys
import os
from typing import Annotated, List, Optional
from typing import Annotated
# Import MessagesState directly from langgraph rather than through our application
from langgraph.graph import MessagesState
-5
@@ -1,13 +1,8 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import os
import pytest
import sys
import types
from pathlib import Path
import builtins
import importlib
from src.config.configuration import Configuration
# Patch sys.path so relative import works
-2
@@ -3,8 +3,6 @@
import os
import tempfile
import yaml
import pytest
from src.config.loader import load_yaml_config, process_dict, replace_env_vars
-1
@@ -1,6 +1,5 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import pytest
from src.crawler.article import Article
-2
@@ -1,9 +1,7 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import pytest
import src.crawler as crawler_module
from src.crawler import Crawler
def test_crawler_sets_article_url(monkeypatch):
+132
@@ -0,0 +1,132 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import pytest
from unittest.mock import MagicMock, patch
import importlib
import sys
import src.graph.builder as builder_mod
@pytest.fixture
def mock_state():
class Step:
def __init__(self, execution_res=None, step_type=None):
self.execution_res = execution_res
self.step_type = step_type
class Plan:
def __init__(self, steps):
self.steps = steps
return {
"Step": Step,
"Plan": Plan,
}
def test_continue_to_running_research_team_no_plan(mock_state):
state = {"current_plan": None}
assert builder_mod.continue_to_running_research_team(state) == "planner"
def test_continue_to_running_research_team_no_steps(mock_state):
state = {"current_plan": mock_state["Plan"](steps=[])}
assert builder_mod.continue_to_running_research_team(state) == "planner"
def test_continue_to_running_research_team_all_executed(mock_state):
Step = mock_state["Step"]
Plan = mock_state["Plan"]
steps = [Step(execution_res=True), Step(execution_res=True)]
state = {"current_plan": Plan(steps=steps)}
assert builder_mod.continue_to_running_research_team(state) == "planner"
def test_continue_to_running_research_team_next_researcher(mock_state):
Step = mock_state["Step"]
Plan = mock_state["Plan"]
steps = [
Step(execution_res=True),
Step(execution_res=None, step_type=builder_mod.StepType.RESEARCH),
]
state = {"current_plan": Plan(steps=steps)}
assert builder_mod.continue_to_running_research_team(state) == "researcher"
def test_continue_to_running_research_team_next_coder(mock_state):
Step = mock_state["Step"]
Plan = mock_state["Plan"]
steps = [
Step(execution_res=True),
Step(execution_res=None, step_type=builder_mod.StepType.PROCESSING),
]
state = {"current_plan": Plan(steps=steps)}
assert builder_mod.continue_to_running_research_team(state) == "coder"
def test_continue_to_running_research_team_next_coder_withresult(mock_state):
Step = mock_state["Step"]
Plan = mock_state["Plan"]
steps = [
Step(execution_res=True),
Step(execution_res=True, step_type=builder_mod.StepType.PROCESSING),
]
state = {"current_plan": Plan(steps=steps)}
assert builder_mod.continue_to_running_research_team(state) == "planner"
def test_continue_to_running_research_team_default_planner(mock_state):
Step = mock_state["Step"]
Plan = mock_state["Plan"]
steps = [Step(execution_res=True), Step(execution_res=None, step_type=None)]
state = {"current_plan": Plan(steps=steps)}
assert builder_mod.continue_to_running_research_team(state) == "planner"
@patch("src.graph.builder.StateGraph")
def test_build_base_graph_adds_nodes_and_edges(MockStateGraph):
mock_builder = MagicMock()
MockStateGraph.return_value = mock_builder
builder_mod._build_base_graph()
# Check that all nodes and edges are added
assert mock_builder.add_edge.call_count >= 2
assert mock_builder.add_node.call_count >= 8
mock_builder.add_conditional_edges.assert_called_once()
@patch("src.graph.builder._build_base_graph")
@patch("src.graph.builder.MemorySaver")
def test_build_graph_with_memory_uses_memory(MockMemorySaver, mock_build_base_graph):
mock_builder = MagicMock()
mock_build_base_graph.return_value = mock_builder
mock_memory = MagicMock()
MockMemorySaver.return_value = mock_memory
builder_mod.build_graph_with_memory()
mock_builder.compile.assert_called_once_with(checkpointer=mock_memory)
@patch("src.graph.builder._build_base_graph")
def test_build_graph_without_memory(mock_build_base_graph):
mock_builder = MagicMock()
mock_build_base_graph.return_value = mock_builder
builder_mod.build_graph()
mock_builder.compile.assert_called_once_with()
def test_graph_is_compiled():
# The graph object should be the result of build_graph()
with patch("src.graph.builder._build_base_graph") as mock_base:
mock_builder = MagicMock()
mock_base.return_value = mock_builder
mock_builder.compile.return_value = "compiled_graph"
# reload the module to re-run the graph assignment
importlib.reload(sys.modules["src.graph.builder"])
assert builder_mod.graph is not None
+19 -3
@@ -1,8 +1,6 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import os
import types
import pytest
from src.llms import llm
@@ -30,6 +28,11 @@ def dummy_conf():
def test_get_env_llm_conf(monkeypatch):
# Clear any existing environment variables that might interfere
monkeypatch.delenv("BASIC_MODEL__API_KEY", raising=False)
monkeypatch.delenv("BASIC_MODEL__BASE_URL", raising=False)
monkeypatch.delenv("BASIC_MODEL__MODEL", raising=False)
monkeypatch.setenv("BASIC_MODEL__API_KEY", "env_key")
monkeypatch.setenv("BASIC_MODEL__BASE_URL", "http://env")
conf = llm._get_env_llm_conf("basic")
@@ -38,6 +41,9 @@ def test_get_env_llm_conf(monkeypatch):
def test_create_llm_use_conf_merges_env(monkeypatch, dummy_conf):
# Clear any existing environment variables that might interfere
monkeypatch.delenv("BASIC_MODEL__BASE_URL", raising=False)
monkeypatch.delenv("BASIC_MODEL__MODEL", raising=False)
monkeypatch.setenv("BASIC_MODEL__API_KEY", "env_key")
result = llm._create_llm_use_conf("basic", dummy_conf)
assert isinstance(result, DummyChatOpenAI)
@@ -45,12 +51,22 @@ def test_create_llm_use_conf_merges_env(monkeypatch, dummy_conf):
assert result.kwargs["base_url"] == "http://test"
def test_create_llm_use_conf_invalid_type(dummy_conf):
def test_create_llm_use_conf_invalid_type(monkeypatch, dummy_conf):
# Clear any existing environment variables that might interfere
monkeypatch.delenv("BASIC_MODEL__API_KEY", raising=False)
monkeypatch.delenv("BASIC_MODEL__BASE_URL", raising=False)
monkeypatch.delenv("BASIC_MODEL__MODEL", raising=False)
with pytest.raises(ValueError):
llm._create_llm_use_conf("unknown", dummy_conf)
def test_create_llm_use_conf_empty_conf(monkeypatch):
# Clear any existing environment variables that might interfere
monkeypatch.delenv("BASIC_MODEL__API_KEY", raising=False)
monkeypatch.delenv("BASIC_MODEL__BASE_URL", raising=False)
monkeypatch.delenv("BASIC_MODEL__MODEL", raising=False)
with pytest.raises(ValueError):
llm._create_llm_use_conf("basic", {})
@@ -6,7 +6,6 @@ from unittest.mock import patch, MagicMock
from src.prompt_enhancer.graph.builder import build_graph
from src.prompt_enhancer.graph.state import PromptEnhancerState
from src.config.report_style import ReportStyle
class TestBuildGraph:
@@ -48,7 +47,7 @@ class TestBuildGraph:
mock_state_graph.return_value = mock_builder
mock_builder.compile.return_value = mock_compiled_graph
result = build_graph()
build_graph()
# Verify the correct node function was added
mock_builder.add_node.assert_called_once_with("enhancer", mock_enhancer_node)
@@ -14,7 +14,75 @@ from src.config.report_style import ReportStyle
def mock_llm():
"""Mock LLM that returns a test response."""
llm = MagicMock()
llm.invoke.return_value = MagicMock(content="Enhanced test prompt")
llm.invoke.return_value = MagicMock(
content="""Thoughts: LLM thinks a lot
<enhanced_prompt>
Enhanced test prompt
</enhanced_prompt>
"""
)
return llm
@pytest.fixture
def mock_llm_xml_with_whitespace():
"""Mock LLM that returns XML response with extra whitespace."""
llm = MagicMock()
llm.invoke.return_value = MagicMock(
content="""
Some thoughts here...
<enhanced_prompt>
Enhanced prompt with whitespace
</enhanced_prompt>
Additional content after XML
"""
)
return llm
@pytest.fixture
def mock_llm_xml_multiline():
"""Mock LLM that returns XML response with multiline content."""
llm = MagicMock()
llm.invoke.return_value = MagicMock(
content="""
<enhanced_prompt>
This is a multiline enhanced prompt
that spans multiple lines
and includes various formatting.
It should preserve the structure.
</enhanced_prompt>
"""
)
return llm
@pytest.fixture
def mock_llm_no_xml():
"""Mock LLM that returns response without XML tags."""
llm = MagicMock()
llm.invoke.return_value = MagicMock(
content="Enhanced Prompt: This is an enhanced prompt without XML tags"
)
return llm
@pytest.fixture
def mock_llm_malformed_xml():
"""Mock LLM that returns response with malformed XML."""
llm = MagicMock()
llm.invoke.return_value = MagicMock(
content="""
<enhanced_prompt>
This XML tag is not properly closed
<enhanced_prompt>
"""
)
return llm
@@ -217,3 +285,241 @@ class TestPromptEnhancerNode:
result = prompt_enhancer_node(state)
assert result == {"output": "Enhanced prompt"}
@patch("src.prompt_enhancer.graph.enhancer_node.apply_prompt_template")
@patch("src.prompt_enhancer.graph.enhancer_node.get_llm_by_type")
@patch(
"src.prompt_enhancer.graph.enhancer_node.AGENT_LLM_MAP",
{"prompt_enhancer": "basic"},
)
def test_xml_with_whitespace_handling(
self,
mock_get_llm,
mock_apply_template,
mock_llm_xml_with_whitespace,
mock_messages,
):
"""Test XML extraction with extra whitespace inside tags."""
mock_get_llm.return_value = mock_llm_xml_with_whitespace
mock_apply_template.return_value = mock_messages
state = PromptEnhancerState(prompt="Test prompt")
result = prompt_enhancer_node(state)
assert result == {"output": "Enhanced prompt with whitespace"}
@patch("src.prompt_enhancer.graph.enhancer_node.apply_prompt_template")
@patch("src.prompt_enhancer.graph.enhancer_node.get_llm_by_type")
@patch(
"src.prompt_enhancer.graph.enhancer_node.AGENT_LLM_MAP",
{"prompt_enhancer": "basic"},
)
def test_xml_multiline_content(
self, mock_get_llm, mock_apply_template, mock_llm_xml_multiline, mock_messages
):
"""Test XML extraction with multiline content."""
mock_get_llm.return_value = mock_llm_xml_multiline
mock_apply_template.return_value = mock_messages
state = PromptEnhancerState(prompt="Test prompt")
result = prompt_enhancer_node(state)
expected_output = """This is a multiline enhanced prompt
that spans multiple lines
and includes various formatting.
It should preserve the structure."""
assert result == {"output": expected_output}
@patch("src.prompt_enhancer.graph.enhancer_node.apply_prompt_template")
@patch("src.prompt_enhancer.graph.enhancer_node.get_llm_by_type")
@patch(
"src.prompt_enhancer.graph.enhancer_node.AGENT_LLM_MAP",
{"prompt_enhancer": "basic"},
)
def test_fallback_to_prefix_removal(
self, mock_get_llm, mock_apply_template, mock_llm_no_xml, mock_messages
):
"""Test fallback to prefix removal when no XML tags are found."""
mock_get_llm.return_value = mock_llm_no_xml
mock_apply_template.return_value = mock_messages
state = PromptEnhancerState(prompt="Test prompt")
result = prompt_enhancer_node(state)
assert result == {"output": "This is an enhanced prompt without XML tags"}
@patch("src.prompt_enhancer.graph.enhancer_node.apply_prompt_template")
@patch("src.prompt_enhancer.graph.enhancer_node.get_llm_by_type")
@patch(
"src.prompt_enhancer.graph.enhancer_node.AGENT_LLM_MAP",
{"prompt_enhancer": "basic"},
)
def test_malformed_xml_fallback(
self, mock_get_llm, mock_apply_template, mock_llm_malformed_xml, mock_messages
):
"""Test handling of malformed XML tags."""
mock_get_llm.return_value = mock_llm_malformed_xml
mock_apply_template.return_value = mock_messages
state = PromptEnhancerState(prompt="Test prompt")
result = prompt_enhancer_node(state)
# Should fall back to using the entire content since XML is malformed
expected_content = """<enhanced_prompt>
This XML tag is not properly closed
<enhanced_prompt>"""
assert result == {"output": expected_content}
@patch("src.prompt_enhancer.graph.enhancer_node.apply_prompt_template")
@patch("src.prompt_enhancer.graph.enhancer_node.get_llm_by_type")
@patch(
"src.prompt_enhancer.graph.enhancer_node.AGENT_LLM_MAP",
{"prompt_enhancer": "basic"},
)
def test_case_sensitive_prefix_removal(
self, mock_get_llm, mock_apply_template, mock_llm, mock_messages
):
"""Test that prefix removal is case-sensitive."""
mock_get_llm.return_value = mock_llm
mock_apply_template.return_value = mock_messages
# Test case variations that should NOT be removed
test_cases = [
"ENHANCED PROMPT: This should not be removed",
"enhanced prompt: This should not be removed",
"Enhanced Prompt This should not be removed", # Missing colon
"Enhanced Prompt :: This should not be removed", # Double colon
]
for response_content in test_cases:
mock_llm.invoke.return_value = MagicMock(content=response_content)
state = PromptEnhancerState(prompt="Test prompt")
result = prompt_enhancer_node(state)
# Should return the full content since prefix doesn't match exactly
assert result == {"output": response_content}
@patch("src.prompt_enhancer.graph.enhancer_node.apply_prompt_template")
@patch("src.prompt_enhancer.graph.enhancer_node.get_llm_by_type")
@patch(
"src.prompt_enhancer.graph.enhancer_node.AGENT_LLM_MAP",
{"prompt_enhancer": "basic"},
)
def test_prefix_with_extra_whitespace(
self, mock_get_llm, mock_apply_template, mock_llm, mock_messages
):
"""Test prefix removal with extra whitespace after colon."""
mock_get_llm.return_value = mock_llm
mock_apply_template.return_value = mock_messages
test_cases = [
("Enhanced Prompt: This has extra spaces", "This has extra spaces"),
("Enhanced prompt:\t\tThis has tabs", "This has tabs"),
("Here's the enhanced prompt:\n\nThis has newlines", "This has newlines"),
]
for response_content, expected_output in test_cases:
mock_llm.invoke.return_value = MagicMock(content=response_content)
state = PromptEnhancerState(prompt="Test prompt")
result = prompt_enhancer_node(state)
assert result == {"output": expected_output}
@patch("src.prompt_enhancer.graph.enhancer_node.apply_prompt_template")
@patch("src.prompt_enhancer.graph.enhancer_node.get_llm_by_type")
@patch(
"src.prompt_enhancer.graph.enhancer_node.AGENT_LLM_MAP",
{"prompt_enhancer": "basic"},
)
def test_xml_with_special_characters(
self, mock_get_llm, mock_apply_template, mock_llm, mock_messages
):
"""Test XML extraction with special characters and symbols."""
mock_get_llm.return_value = mock_llm
mock_apply_template.return_value = mock_messages
special_content = """<enhanced_prompt>
Enhanced prompt with special chars: @#$%^&*()
Unicode: 🚀 💡
Quotes: "double" and 'single'
Backslashes: \\n \\t \\r
</enhanced_prompt>"""
mock_llm.invoke.return_value = MagicMock(content=special_content)
state = PromptEnhancerState(prompt="Test prompt")
result = prompt_enhancer_node(state)
expected_output = """Enhanced prompt with special chars: @#$%^&*()
Unicode: 🚀 💡
Quotes: "double" and 'single'
Backslashes: \\n \\t \\r"""
assert result == {"output": expected_output}
@patch("src.prompt_enhancer.graph.enhancer_node.apply_prompt_template")
@patch("src.prompt_enhancer.graph.enhancer_node.get_llm_by_type")
@patch(
"src.prompt_enhancer.graph.enhancer_node.AGENT_LLM_MAP",
{"prompt_enhancer": "basic"},
)
def test_very_long_response(
self, mock_get_llm, mock_apply_template, mock_llm, mock_messages
):
"""Test handling of very long LLM responses."""
mock_get_llm.return_value = mock_llm
mock_apply_template.return_value = mock_messages
# Create a very long response
long_content = "This is a very long enhanced prompt. " * 100
xml_response = f"<enhanced_prompt>\n{long_content}\n</enhanced_prompt>"
mock_llm.invoke.return_value = MagicMock(content=xml_response)
state = PromptEnhancerState(prompt="Test prompt")
result = prompt_enhancer_node(state)
assert result == {"output": long_content.strip()}
assert len(result["output"]) > 1000 # Verify it's actually long
@patch("src.prompt_enhancer.graph.enhancer_node.apply_prompt_template")
@patch("src.prompt_enhancer.graph.enhancer_node.get_llm_by_type")
@patch(
"src.prompt_enhancer.graph.enhancer_node.AGENT_LLM_MAP",
{"prompt_enhancer": "basic"},
)
def test_empty_response_content(
self, mock_get_llm, mock_apply_template, mock_llm, mock_messages
):
"""Test handling of empty response content."""
mock_get_llm.return_value = mock_llm
mock_apply_template.return_value = mock_messages
mock_llm.invoke.return_value = MagicMock(content="")
state = PromptEnhancerState(prompt="Test prompt")
result = prompt_enhancer_node(state)
assert result == {"output": ""}
@patch("src.prompt_enhancer.graph.enhancer_node.apply_prompt_template")
@patch("src.prompt_enhancer.graph.enhancer_node.get_llm_by_type")
@patch(
"src.prompt_enhancer.graph.enhancer_node.AGENT_LLM_MAP",
{"prompt_enhancer": "basic"},
)
def test_only_whitespace_response(
self, mock_get_llm, mock_apply_template, mock_llm, mock_messages
):
"""Test handling of response with only whitespace."""
mock_get_llm.return_value = mock_llm
mock_apply_template.return_value = mock_messages
mock_llm.invoke.return_value = MagicMock(content=" \n\n\t\t ")
state = PromptEnhancerState(prompt="Test prompt")
result = prompt_enhancer_node(state)
assert result == {"output": ""}
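The extraction behavior these tests pin down can be summarized in a small sketch. This is inferred from the assertions above, not copied from `src/prompt_enhancer/graph/enhancer_node.py`: the real node may order or name things differently, and the prefix list here is only the subset the tests exercise.

```python
import re

def extract_enhanced_prompt(content: str) -> str:
    """Sketch of the extraction order the tests above imply:
    1. content inside well-formed <enhanced_prompt>...</enhanced_prompt> tags,
    2. otherwise, removal of a known case-sensitive prefix,
    3. otherwise, the whole response, stripped (covers malformed XML too).
    """
    match = re.search(r"<enhanced_prompt>(.*?)</enhanced_prompt>", content, re.DOTALL)
    if match:
        return match.group(1).strip()
    stripped = content.strip()
    # Case-sensitive: "ENHANCED PROMPT:" and "enhanced prompt:" fall through.
    for prefix in ("Enhanced Prompt:", "Enhanced prompt:", "Here's the enhanced prompt:"):
        if stripped.startswith(prefix):
            return stripped[len(prefix):].strip()
    return stripped
```

Note how the malformed-XML case (`<enhanced_prompt>` never closed) naturally falls through to branch 3, matching `test_malformed_xml_fallback`.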
@@ -1,7 +1,6 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import pytest
from src.prompt_enhancer.graph.state import PromptEnhancerState
from src.config.report_style import ReportStyle
@@ -1,9 +1,7 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import os
import pytest
import requests
from unittest.mock import patch, MagicMock
from src.rag.ragflow import RAGFlowProvider, parse_uri
@@ -144,30 +142,6 @@ def test_list_resources_success(mock_get, monkeypatch):
assert resources[1].description == "desc2"
@patch("src.rag.ragflow.requests.get")
def test_list_resources_success(mock_get, monkeypatch):
monkeypatch.setenv("RAGFLOW_API_URL", "http://api")
monkeypatch.setenv("RAGFLOW_API_KEY", "key")
provider = RAGFlowProvider()
mock_response = MagicMock()
mock_response.status_code = 200
mock_response.json.return_value = {
"data": [
{"id": "123", "name": "Dataset1", "description": "desc1"},
{"id": "456", "name": "Dataset2", "description": "desc2"},
]
}
mock_get.return_value = mock_response
resources = provider.list_resources()
assert len(resources) == 2
assert resources[0].uri == "rag://dataset/123"
assert resources[0].title == "Dataset1"
assert resources[0].description == "desc1"
assert resources[1].uri == "rag://dataset/456"
assert resources[1].title == "Dataset2"
assert resources[1].description == "desc2"
@patch("src.rag.ragflow.requests.get")
def test_list_resources_error(mock_get, monkeypatch):
monkeypatch.setenv("RAGFLOW_API_URL", "http://api")
@@ -0,0 +1,503 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import os
import pytest
import json
from unittest.mock import patch, MagicMock
from src.rag.vikingdb_knowledge_base import VikingDBKnowledgeBaseProvider, parse_uri
# Dummy classes to mock dependencies
class MockResource:
def __init__(self, uri, title="", description=""):
self.uri = uri
self.title = title
self.description = description
class MockChunk:
def __init__(self, content, similarity):
self.content = content
self.similarity = similarity
class MockDocument:
def __init__(self, id, title, chunks=None):
self.id = id
self.title = title
self.chunks = chunks or []
# Patch the imports to use mock classes
@pytest.fixture(autouse=True)
def patch_imports():
with (
patch("src.rag.vikingdb_knowledge_base.Resource", MockResource),
patch("src.rag.vikingdb_knowledge_base.Chunk", MockChunk),
patch("src.rag.vikingdb_knowledge_base.Document", MockDocument),
):
yield
@pytest.fixture
def env_vars():
"""Fixture to set up environment variables"""
with patch.dict(
os.environ,
{
"VIKINGDB_KNOWLEDGE_BASE_API_URL": "api-test.example.com",
"VIKINGDB_KNOWLEDGE_BASE_API_AK": "test_ak",
"VIKINGDB_KNOWLEDGE_BASE_API_SK": "test_sk",
"VIKINGDB_KNOWLEDGE_BASE_RETRIEVAL_SIZE": "10",
},
):
yield
class TestParseUri:
def test_parse_uri_valid_with_fragment(self):
"""Test parsing valid URI with fragment"""
uri = "rag://dataset/123#doc456"
resource_id, document_id = parse_uri(uri)
assert resource_id == "123"
assert document_id == "doc456"
def test_parse_uri_valid_without_fragment(self):
"""Test parsing valid URI without fragment"""
uri = "rag://dataset/123"
resource_id, document_id = parse_uri(uri)
assert resource_id == "123"
assert document_id == ""
def test_parse_uri_invalid_scheme(self):
"""Test parsing URI with invalid scheme"""
with pytest.raises(ValueError, match="Invalid URI"):
parse_uri("http://dataset/123#abc")
def test_parse_uri_malformed(self):
"""Test parsing malformed URI"""
with pytest.raises(ValueError, match="Invalid URI"):
parse_uri("invalid_uri")
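A `parse_uri` consistent with the four cases above can be sketched with the standard library. This is an assumption-labeled reconstruction, not the code in `src/rag/vikingdb_knowledge_base.py`; it only has to agree with the tested behavior (scheme must be `rag`, the path segment is the resource id, the fragment is the optional document id).

```python
from urllib.parse import urlparse

def parse_uri(uri: str) -> tuple[str, str]:
    """Split "rag://dataset/<resource_id>#<document_id>" into its two ids."""
    parsed = urlparse(uri)
    if parsed.scheme != "rag":
        # Covers both the wrong-scheme and the no-scheme ("invalid_uri") cases.
        raise ValueError(f"Invalid URI: {uri}")
    resource_id = parsed.path.lstrip("/")
    return resource_id, parsed.fragment
```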
class TestVikingDBKnowledgeBaseProviderInit:
def test_init_success_with_all_env_vars(self, env_vars):
"""Test successful initialization with all environment variables"""
provider = VikingDBKnowledgeBaseProvider()
assert provider.api_url == "api-test.example.com"
assert provider.api_ak == "test_ak"
assert provider.api_sk == "test_sk"
assert provider.retrieval_size == 10
def test_init_success_without_retrieval_size(self):
"""Test initialization without VIKINGDB_KNOWLEDGE_BASE_RETRIEVAL_SIZE (should use default)"""
with patch.dict(
os.environ,
{
"VIKINGDB_KNOWLEDGE_BASE_API_URL": "api-test.example.com",
"VIKINGDB_KNOWLEDGE_BASE_API_AK": "test_ak",
"VIKINGDB_KNOWLEDGE_BASE_API_SK": "test_sk",
},
clear=True,
):
provider = VikingDBKnowledgeBaseProvider()
assert provider.retrieval_size == 10
def test_init_custom_retrieval_size(self):
"""Test initialization with custom retrieval size"""
with patch.dict(
os.environ,
{
"VIKINGDB_KNOWLEDGE_BASE_API_URL": "api-test.example.com",
"VIKINGDB_KNOWLEDGE_BASE_API_AK": "test_ak",
"VIKINGDB_KNOWLEDGE_BASE_API_SK": "test_sk",
"VIKINGDB_KNOWLEDGE_BASE_RETRIEVAL_SIZE": "5",
},
):
provider = VikingDBKnowledgeBaseProvider()
assert provider.retrieval_size == 5
def test_init_missing_api_url(self):
"""Test initialization fails when API URL is missing"""
with patch.dict(
os.environ,
{
"VIKINGDB_KNOWLEDGE_BASE_API_AK": "test_ak",
"VIKINGDB_KNOWLEDGE_BASE_API_SK": "test_sk",
},
clear=True,
):
with pytest.raises(
ValueError, match="VIKINGDB_KNOWLEDGE_BASE_API_URL is not set"
):
VikingDBKnowledgeBaseProvider()
def test_init_missing_api_ak(self):
"""Test initialization fails when API AK is missing"""
with patch.dict(
os.environ,
{
"VIKINGDB_KNOWLEDGE_BASE_API_URL": "api-test.example.com",
"VIKINGDB_KNOWLEDGE_BASE_API_SK": "test_sk",
},
clear=True,
):
with pytest.raises(
ValueError, match="VIKINGDB_KNOWLEDGE_BASE_API_AK is not set"
):
VikingDBKnowledgeBaseProvider()
def test_init_missing_api_sk(self):
"""Test initialization fails when API SK is missing"""
with patch.dict(
os.environ,
{
"VIKINGDB_KNOWLEDGE_BASE_API_URL": "api-test.example.com",
"VIKINGDB_KNOWLEDGE_BASE_API_AK": "test_ak",
},
clear=True,
):
with pytest.raises(
ValueError, match="VIKINGDB_KNOWLEDGE_BASE_API_SK is not set"
):
VikingDBKnowledgeBaseProvider()
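The init tests above fully determine the env-var contract, so a minimal sketch of the validation is possible. The class name and structure here are hypothetical; the real provider also builds a signed request client, which is omitted.

```python
import os

class KnowledgeBaseConfig:
    """Sketch of the VIKINGDB_KNOWLEDGE_BASE_* validation the init tests exercise."""

    def __init__(self):
        self.api_url = os.environ.get("VIKINGDB_KNOWLEDGE_BASE_API_URL")
        if not self.api_url:
            raise ValueError("VIKINGDB_KNOWLEDGE_BASE_API_URL is not set")
        self.api_ak = os.environ.get("VIKINGDB_KNOWLEDGE_BASE_API_AK")
        if not self.api_ak:
            raise ValueError("VIKINGDB_KNOWLEDGE_BASE_API_AK is not set")
        self.api_sk = os.environ.get("VIKINGDB_KNOWLEDGE_BASE_API_SK")
        if not self.api_sk:
            raise ValueError("VIKINGDB_KNOWLEDGE_BASE_API_SK is not set")
        # Defaults to 10 when VIKINGDB_KNOWLEDGE_BASE_RETRIEVAL_SIZE is unset.
        self.retrieval_size = int(
            os.environ.get("VIKINGDB_KNOWLEDGE_BASE_RETRIEVAL_SIZE", "10")
        )
```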
class TestVikingDBKnowledgeBaseProviderPrepareRequest:
@pytest.fixture
def provider(self, env_vars):
return VikingDBKnowledgeBaseProvider()
def test_prepare_request_basic(self, provider):
"""Test basic request preparation"""
with (
patch("src.rag.vikingdb_knowledge_base.Request") as mock_request,
patch("src.rag.vikingdb_knowledge_base.Credentials") as _mock_credentials,
patch("src.rag.vikingdb_knowledge_base.SignerV4.sign") as _mock_sign,
):
mock_req_instance = MagicMock()
mock_request.return_value = mock_req_instance
result = provider.prepare_request("POST", "/test/path")
assert result == mock_req_instance
mock_req_instance.set_shema.assert_called_once_with("https")
mock_req_instance.set_method.assert_called_once_with("POST")
mock_req_instance.set_path.assert_called_once_with("/test/path")
def test_prepare_request_with_params(self, provider):
"""Test request preparation with parameters"""
with (
patch("src.rag.vikingdb_knowledge_base.Request") as mock_request,
patch("src.rag.vikingdb_knowledge_base.Credentials"),
patch("src.rag.vikingdb_knowledge_base.SignerV4.sign"),
):
mock_req_instance = MagicMock()
mock_request.return_value = mock_req_instance
params = {"key": "value", "number": 123, "boolean": True}
provider.prepare_request("GET", "/test", params=params)
expected_params = {"key": "value", "number": "123", "boolean": "True"}
mock_req_instance.set_query.assert_called_once_with(expected_params)
def test_prepare_request_with_data(self, provider):
"""Test request preparation with data"""
with (
patch("src.rag.vikingdb_knowledge_base.Request") as mock_request,
patch("src.rag.vikingdb_knowledge_base.Credentials"),
patch("src.rag.vikingdb_knowledge_base.SignerV4.sign"),
):
mock_req_instance = MagicMock()
mock_request.return_value = mock_req_instance
data = {"test": "data"}
provider.prepare_request("POST", "/test", data=data)
mock_req_instance.set_body.assert_called_once_with(json.dumps(data))
class TestVikingDBKnowledgeBaseProviderQueryRelevantDocuments:
@pytest.fixture
def provider(self, env_vars):
return VikingDBKnowledgeBaseProvider()
def test_query_relevant_documents_empty_resources(self, provider):
"""Test querying with empty resources list"""
result = provider.query_relevant_documents("test query", [])
assert result == []
@patch("src.rag.vikingdb_knowledge_base.requests.request")
def test_query_relevant_documents_success(self, mock_request, provider):
"""Test successful document query"""
# Mock response
mock_response = MagicMock()
mock_response.text = json.dumps(
{
"code": 0,
"data": {
"result_list": [
{
"doc_info": {
"doc_id": "doc123",
"doc_name": "Test Document",
},
"content": "Test content",
"score": 0.95,
}
]
},
}
)
mock_request.return_value = mock_response
# Mock prepare_request
with patch.object(provider, "prepare_request") as mock_prepare:
mock_req = MagicMock()
mock_req.method = "POST"
mock_req.path = "/api/knowledge/collection/search_knowledge"
mock_req.headers = {}
mock_req.body = "{}"
mock_prepare.return_value = mock_req
resources = [MockResource("rag://dataset/123")]
result = provider.query_relevant_documents("test query", resources)
assert len(result) == 1
assert result[0].id == "doc123"
assert result[0].title == "Test Document"
assert len(result[0].chunks) == 1
assert result[0].chunks[0].content == "Test content"
assert result[0].chunks[0].similarity == 0.95
@patch("src.rag.vikingdb_knowledge_base.requests.request")
def test_query_relevant_documents_with_document_filter(
self, mock_request, provider
):
"""Test document query with document ID filter"""
mock_response = MagicMock()
mock_response.text = json.dumps({"code": 0, "data": {"result_list": []}})
mock_request.return_value = mock_response
with patch.object(provider, "prepare_request") as mock_prepare:
mock_req = MagicMock()
mock_prepare.return_value = mock_req
resources = [MockResource("rag://dataset/123#doc456")]
provider.query_relevant_documents("test query", resources)
# Verify that query_param with doc_filter was included in the request
call_args = mock_prepare.call_args
request_data = call_args[1]["data"]
assert "query_param" in request_data
assert "doc_filter" in request_data["query_param"]
doc_filter = request_data["query_param"]["doc_filter"]
assert doc_filter["op"] == "must"
assert doc_filter["field"] == "doc_id"
assert doc_filter["conds"] == ["doc456"]
@patch("src.rag.vikingdb_knowledge_base.requests.request")
def test_query_relevant_documents_api_error(self, mock_request, provider):
"""Test handling of API error response"""
mock_response = MagicMock()
mock_response.text = json.dumps({"code": 1, "message": "API Error"})
mock_request.return_value = mock_response
with patch.object(provider, "prepare_request"):
resources = [MockResource("rag://dataset/123")]
with pytest.raises(
ValueError, match="Failed to query documents from resource: API Error"
):
provider.query_relevant_documents("test query", resources)
@patch("src.rag.vikingdb_knowledge_base.requests.request")
def test_query_relevant_documents_json_decode_error(self, mock_request, provider):
"""Test handling of JSON decode error"""
mock_response = MagicMock()
mock_response.text = "invalid json"
mock_request.return_value = mock_response
with patch.object(provider, "prepare_request"):
resources = [MockResource("rag://dataset/123")]
with pytest.raises(ValueError, match="Failed to parse JSON response"):
provider.query_relevant_documents("test query", resources)
@patch("src.rag.vikingdb_knowledge_base.requests.request")
def test_query_relevant_documents_multiple_resources(self, mock_request, provider):
"""Test querying multiple resources and merging results"""
# Mock responses for different resources
responses = [
json.dumps(
{
"code": 0,
"data": {
"result_list": [
{
"doc_info": {
"doc_id": "doc1",
"doc_name": "Document 1",
},
"content": "Content 1",
"score": 0.9,
}
]
},
}
),
json.dumps(
{
"code": 0,
"data": {
"result_list": [
{
"doc_info": {
"doc_id": "doc1",
"doc_name": "Document 1",
},
"content": "Content 2",
"score": 0.8,
},
{
"doc_info": {
"doc_id": "doc2",
"doc_name": "Document 2",
},
"content": "Content 3",
"score": 0.7,
},
]
},
}
),
]
mock_request.side_effect = [MagicMock(text=resp) for resp in responses]
with patch.object(provider, "prepare_request"):
resources = [
MockResource("rag://dataset/123"),
MockResource("rag://dataset/456"),
]
result = provider.query_relevant_documents("test query", resources)
# Should have 2 documents: doc1 (with 2 chunks) and doc2 (with 1 chunk)
assert len(result) == 2
doc1 = next(doc for doc in result if doc.id == "doc1")
doc2 = next(doc for doc in result if doc.id == "doc2")
assert len(doc1.chunks) == 2
assert len(doc2.chunks) == 1
class TestVikingDBKnowledgeBaseProviderListResources:
@pytest.fixture
def provider(self, env_vars):
return VikingDBKnowledgeBaseProvider()
@patch("src.rag.vikingdb_knowledge_base.requests.request")
def test_list_resources_success(self, mock_request, provider):
"""Test successful resource listing"""
mock_response = MagicMock()
mock_response.text = json.dumps(
{
"code": 0,
"data": {
"collection_list": [
{
"resource_id": "123",
"collection_name": "Dataset 1",
"description": "Description 1",
},
{
"resource_id": "456",
"collection_name": "Dataset 2",
"description": "Description 2",
},
]
},
}
)
mock_request.return_value = mock_response
with patch.object(provider, "prepare_request") as mock_prepare:
mock_req = MagicMock()
mock_prepare.return_value = mock_req
result = provider.list_resources()
assert len(result) == 2
assert result[0].uri == "rag://dataset/123"
assert result[0].title == "Dataset 1"
assert result[0].description == "Description 1"
assert result[1].uri == "rag://dataset/456"
assert result[1].title == "Dataset 2"
assert result[1].description == "Description 2"
@patch("src.rag.vikingdb_knowledge_base.requests.request")
def test_list_resources_with_query_filter(self, mock_request, provider):
"""Test resource listing with query filter"""
mock_response = MagicMock()
mock_response.text = json.dumps(
{
"code": 0,
"data": {
"collection_list": [
{
"resource_id": "123",
"collection_name": "Test Dataset",
"description": "Description",
},
{
"resource_id": "456",
"collection_name": "Other Dataset",
"description": "Description",
},
]
},
}
)
mock_request.return_value = mock_response
with patch.object(provider, "prepare_request"):
result = provider.list_resources("test")
# Should only return the dataset with "test" in the name
assert len(result) == 1
assert result[0].title == "Test Dataset"
@patch("src.rag.vikingdb_knowledge_base.requests.request")
def test_list_resources_api_error(self, mock_request, provider):
"""Test handling of API error in list_resources"""
mock_response = MagicMock()
mock_response.text = json.dumps({"code": 1, "message": "API Error"})
mock_request.return_value = mock_response
with patch.object(provider, "prepare_request"):
with pytest.raises(Exception, match="Failed to list resources: API Error"):
provider.list_resources()
@patch("src.rag.vikingdb_knowledge_base.requests.request")
def test_list_resources_json_decode_error(self, mock_request, provider):
"""Test handling of JSON decode error in list_resources"""
mock_response = MagicMock()
mock_response.text = "invalid json"
mock_request.return_value = mock_response
with patch.object(provider, "prepare_request"):
with pytest.raises(ValueError, match="Failed to parse JSON response"):
provider.list_resources()
@patch("src.rag.vikingdb_knowledge_base.requests.request")
def test_list_resources_empty_response(self, mock_request, provider):
"""Test handling of empty response"""
mock_response = MagicMock()
mock_response.text = json.dumps({"code": 0, "data": {"collection_list": []}})
mock_request.return_value = mock_response
with patch.object(provider, "prepare_request"):
result = provider.list_resources()
assert result == []
@@ -0,0 +1,841 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import base64
import os
from unittest.mock import MagicMock, patch, mock_open
import pytest
from fastapi.testclient import TestClient
from fastapi import HTTPException
from src.server.app import app, _make_event, _astream_workflow_generator
from src.config.report_style import ReportStyle
from langgraph.types import Command
from langchain_core.messages import ToolMessage
from langchain_core.messages import AIMessageChunk
@pytest.fixture
def client():
return TestClient(app)
class TestMakeEvent:
def test_make_event_with_content(self):
event_type = "message_chunk"
data = {"content": "Hello", "role": "assistant"}
result = _make_event(event_type, data)
expected = (
'event: message_chunk\ndata: {"content": "Hello", "role": "assistant"}\n\n'
)
assert result == expected
def test_make_event_with_empty_content(self):
event_type = "message_chunk"
data = {"content": "", "role": "assistant"}
result = _make_event(event_type, data)
expected = 'event: message_chunk\ndata: {"role": "assistant"}\n\n'
assert result == expected
def test_make_event_without_content(self):
event_type = "tool_calls"
data = {"role": "assistant", "tool_calls": []}
result = _make_event(event_type, data)
expected = (
'event: tool_calls\ndata: {"role": "assistant", "tool_calls": []}\n\n'
)
assert result == expected
class TestTTSEndpoint:
@patch.dict(
os.environ,
{
"VOLCENGINE_TTS_APPID": "test_app_id",
"VOLCENGINE_TTS_ACCESS_TOKEN": "test_token",
"VOLCENGINE_TTS_CLUSTER": "test_cluster",
"VOLCENGINE_TTS_VOICE_TYPE": "test_voice",
},
)
@patch("src.server.app.VolcengineTTS")
def test_tts_success(self, mock_tts_class, client):
mock_tts_instance = MagicMock()
mock_tts_class.return_value = mock_tts_instance
# Mock successful TTS response
audio_data_b64 = base64.b64encode(b"fake_audio_data").decode()
mock_tts_instance.text_to_speech.return_value = {
"success": True,
"audio_data": audio_data_b64,
}
request_data = {
"text": "Hello world",
"encoding": "mp3",
"speed_ratio": 1.0,
"volume_ratio": 1.0,
"pitch_ratio": 1.0,
"text_type": "plain",
"with_frontend": True,
"frontend_type": "unitTson",
}
response = client.post("/api/tts", json=request_data)
assert response.status_code == 200
assert response.headers["content-type"] == "audio/mp3"
assert b"fake_audio_data" in response.content
@patch.dict(os.environ, {}, clear=True)
def test_tts_missing_app_id(self, client):
request_data = {"text": "Hello world", "encoding": "mp3"}
response = client.post("/api/tts", json=request_data)
assert response.status_code == 400
assert "VOLCENGINE_TTS_APPID is not set" in response.json()["detail"]
@patch.dict(
os.environ,
{"VOLCENGINE_TTS_APPID": "test_app_id", "VOLCENGINE_TTS_ACCESS_TOKEN": ""},
)
def test_tts_missing_access_token(self, client):
request_data = {"text": "Hello world", "encoding": "mp3"}
response = client.post("/api/tts", json=request_data)
assert response.status_code == 400
assert "VOLCENGINE_TTS_ACCESS_TOKEN is not set" in response.json()["detail"]
@patch.dict(
os.environ,
{
"VOLCENGINE_TTS_APPID": "test_app_id",
"VOLCENGINE_TTS_ACCESS_TOKEN": "test_token",
},
)
@patch("src.server.app.VolcengineTTS")
def test_tts_api_error(self, mock_tts_class, client):
mock_tts_instance = MagicMock()
mock_tts_class.return_value = mock_tts_instance
# Mock TTS error response
mock_tts_instance.text_to_speech.return_value = {
"success": False,
"error": "TTS API error",
}
request_data = {"text": "Hello world", "encoding": "mp3"}
response = client.post("/api/tts", json=request_data)
assert response.status_code == 500
assert "Internal Server Error" in response.json()["detail"]
@pytest.mark.skip(reason="TTS server exception is caught internally")
@patch("src.server.app.VolcengineTTS")
def test_tts_api_exception(self, mock_tts_class, client):
mock_tts_instance = MagicMock()
mock_tts_class.return_value = mock_tts_instance
# Make the TTS client raise; side_effect must go on the method, not the instance
mock_tts_instance.text_to_speech.side_effect = Exception("TTS API error")
request_data = {"text": "Hello world", "encoding": "mp3"}
response = client.post("/api/tts", json=request_data)
assert response.status_code == 500
assert "Internal Server Error" in response.json()["detail"]
class TestPodcastEndpoint:
@patch("src.server.app.build_podcast_graph")
def test_generate_podcast_success(self, mock_build_graph, client):
mock_workflow = MagicMock()
mock_build_graph.return_value = mock_workflow
mock_workflow.invoke.return_value = {"output": b"fake_audio_data"}
request_data = {"content": "Test content for podcast"}
response = client.post("/api/podcast/generate", json=request_data)
assert response.status_code == 200
assert response.headers["content-type"] == "audio/mp3"
assert response.content == b"fake_audio_data"
@patch("src.server.app.build_podcast_graph")
def test_generate_podcast_error(self, mock_build_graph, client):
mock_build_graph.side_effect = Exception("Podcast generation failed")
request_data = {"content": "Test content"}
response = client.post("/api/podcast/generate", json=request_data)
assert response.status_code == 500
assert response.json()["detail"] == "Internal Server Error"
class TestPPTEndpoint:
@patch("src.server.app.build_ppt_graph")
@patch("builtins.open", new_callable=mock_open, read_data=b"fake_ppt_data")
def test_generate_ppt_success(self, mock_file, mock_build_graph, client):
mock_workflow = MagicMock()
mock_build_graph.return_value = mock_workflow
mock_workflow.invoke.return_value = {
"generated_file_path": "/fake/path/test.pptx"
}
request_data = {"content": "Test content for PPT"}
response = client.post("/api/ppt/generate", json=request_data)
assert response.status_code == 200
assert (
"application/vnd.openxmlformats-officedocument.presentationml.presentation"
in response.headers["content-type"]
)
assert response.content == b"fake_ppt_data"
@patch("src.server.app.build_ppt_graph")
def test_generate_ppt_error(self, mock_build_graph, client):
mock_build_graph.side_effect = Exception("PPT generation failed")
request_data = {"content": "Test content"}
response = client.post("/api/ppt/generate", json=request_data)
assert response.status_code == 500
assert response.json()["detail"] == "Internal Server Error"
class TestEnhancePromptEndpoint:
@patch("src.server.app.build_prompt_enhancer_graph")
def test_enhance_prompt_success(self, mock_build_graph, client):
mock_workflow = MagicMock()
mock_build_graph.return_value = mock_workflow
mock_workflow.invoke.return_value = {"output": "Enhanced prompt"}
request_data = {
"prompt": "Original prompt",
"context": "Some context",
"report_style": "academic",
}
response = client.post("/api/prompt/enhance", json=request_data)
assert response.status_code == 200
assert response.json()["result"] == "Enhanced prompt"
@patch("src.server.app.build_prompt_enhancer_graph")
def test_enhance_prompt_with_different_styles(self, mock_build_graph, client):
mock_workflow = MagicMock()
mock_build_graph.return_value = mock_workflow
mock_workflow.invoke.return_value = {"output": "Enhanced prompt"}
styles = [
"ACADEMIC",
"popular_science",
"NEWS",
"social_media",
"invalid_style",
]
for style in styles:
request_data = {"prompt": "Test prompt", "report_style": style}
response = client.post("/api/prompt/enhance", json=request_data)
assert response.status_code == 200
@patch("src.server.app.build_prompt_enhancer_graph")
def test_enhance_prompt_error(self, mock_build_graph, client):
mock_build_graph.side_effect = Exception("Enhancement failed")
request_data = {"prompt": "Test prompt"}
response = client.post("/api/prompt/enhance", json=request_data)
assert response.status_code == 500
assert response.json()["detail"] == "Internal Server Error"
class TestMCPEndpoint:
@patch("src.server.app.load_mcp_tools")
@patch.dict(
os.environ,
{"ENABLE_MCP_SERVER_CONFIGURATION": "true"},
)
def test_mcp_server_metadata_success(self, mock_load_tools, client):
mock_load_tools.return_value = [
{"name": "test_tool", "description": "Test tool"}
]
request_data = {
"transport": "stdio",
"command": "test_command",
"args": ["arg1", "arg2"],
"env": {"ENV_VAR": "value"},
}
response = client.post("/api/mcp/server/metadata", json=request_data)
assert response.status_code == 200
response_data = response.json()
assert response_data["transport"] == "stdio"
assert response_data["command"] == "test_command"
assert len(response_data["tools"]) == 1
@patch("src.server.app.load_mcp_tools")
@patch.dict(
os.environ,
{"ENABLE_MCP_SERVER_CONFIGURATION": "true"},
)
def test_mcp_server_metadata_with_custom_timeout(self, mock_load_tools, client):
mock_load_tools.return_value = []
request_data = {
"transport": "stdio",
"command": "test_command",
"timeout_seconds": 600,
}
response = client.post("/api/mcp/server/metadata", json=request_data)
assert response.status_code == 200
mock_load_tools.assert_called_once()
@patch("src.server.app.load_mcp_tools")
@patch.dict(
os.environ,
{"ENABLE_MCP_SERVER_CONFIGURATION": "true"},
)
def test_mcp_server_metadata_with_exception(self, mock_load_tools, client):
mock_load_tools.side_effect = HTTPException(
status_code=400, detail="MCP Server Error"
)
request_data = {
"transport": "stdio",
"command": "test_command",
"args": ["arg1", "arg2"],
"env": {"ENV_VAR": "value"},
}
response = client.post("/api/mcp/server/metadata", json=request_data)
assert response.status_code == 500
assert response.json()["detail"] == "Internal Server Error"
@patch("src.server.app.load_mcp_tools")
@patch.dict(
os.environ,
{"ENABLE_MCP_SERVER_CONFIGURATION": ""},
)
def test_mcp_server_metadata_without_enable_configuration(
self, mock_load_tools, client
):
request_data = {
"transport": "stdio",
"command": "test_command",
"args": ["arg1", "arg2"],
"env": {"ENV_VAR": "value"},
}
response = client.post("/api/mcp/server/metadata", json=request_data)
assert response.status_code == 403
assert (
response.json()["detail"]
== "MCP server configuration is disabled. Set ENABLE_MCP_SERVER_CONFIGURATION=true to enable MCP features."
)
class TestRAGEndpoints:
@patch("src.server.app.SELECTED_RAG_PROVIDER", "test_provider")
def test_rag_config(self, client):
response = client.get("/api/rag/config")
assert response.status_code == 200
assert response.json()["provider"] == "test_provider"
@patch("src.server.app.build_retriever")
def test_rag_resources_with_retriever(self, mock_build_retriever, client):
mock_retriever = MagicMock()
mock_retriever.list_resources.return_value = [
{
"uri": "test_uri",
"title": "Test Resource",
"description": "Test Description",
}
]
mock_build_retriever.return_value = mock_retriever
response = client.get("/api/rag/resources?query=test")
assert response.status_code == 200
assert len(response.json()["resources"]) == 1
@patch("src.server.app.build_retriever")
def test_rag_resources_without_retriever(self, mock_build_retriever, client):
mock_build_retriever.return_value = None
response = client.get("/api/rag/resources")
assert response.status_code == 200
assert response.json()["resources"] == []
class TestChatStreamEndpoint:
@patch("src.server.app.graph")
def test_chat_stream_with_default_thread_id(self, mock_graph, client):
# Mock the async stream
async def mock_astream(*args, **kwargs):
yield ("agent1", "step1", {"test": "data"})
mock_graph.astream = mock_astream
request_data = {
"thread_id": "__default__",
"messages": [{"role": "user", "content": "Hello"}],
"resources": [],
"max_plan_iterations": 3,
"max_step_num": 10,
"max_search_results": 5,
"auto_accepted_plan": True,
"interrupt_feedback": "",
"mcp_settings": {},
"enable_background_investigation": False,
"report_style": "academic",
}
response = client.post("/api/chat/stream", json=request_data)
assert response.status_code == 200
assert response.headers["content-type"] == "text/event-stream; charset=utf-8"
@patch("src.server.app.graph")
def test_chat_stream_with_mcp_settings(self, mock_graph, client):
# Mock the async stream
async def mock_astream(*args, **kwargs):
yield ("agent1", "step1", {"test": "data"})
mock_graph.astream = mock_astream
request_data = {
"thread_id": "__default__",
"messages": [{"role": "user", "content": "Hello"}],
"resources": [],
"max_plan_iterations": 3,
"max_step_num": 10,
"max_search_results": 5,
"auto_accepted_plan": True,
"interrupt_feedback": "",
"mcp_settings": {
"servers": {
"mcp-github-trending": {
"transport": "stdio",
"command": "uvx",
"args": ["mcp-github-trending"],
"env": {"MCP_SERVER_ID": "mcp-github-trending"},
"enabled_tools": ["get_github_trending_repositories"],
"add_to_agents": ["researcher"],
}
}
},
"enable_background_investigation": False,
"report_style": "academic",
}
response = client.post("/api/chat/stream", json=request_data)
assert response.status_code == 403
assert (
response.json()["detail"]
== "MCP server configuration is disabled. Set ENABLE_MCP_SERVER_CONFIGURATION=true to enable MCP features."
)
@patch("src.server.app.graph")
@patch.dict(
os.environ,
{"ENABLE_MCP_SERVER_CONFIGURATION": "true"},
)
def test_chat_stream_with_mcp_settings_enabled(self, mock_graph, client):
# Mock the async stream
async def mock_astream(*args, **kwargs):
yield ("agent1", "step1", {"test": "data"})
mock_graph.astream = mock_astream
request_data = {
"thread_id": "__default__",
"messages": [{"role": "user", "content": "Hello"}],
"resources": [],
"max_plan_iterations": 3,
"max_step_num": 10,
"max_search_results": 5,
"auto_accepted_plan": True,
"interrupt_feedback": "",
"mcp_settings": {
"servers": {
"mcp-github-trending": {
"transport": "stdio",
"command": "uvx",
"args": ["mcp-github-trending"],
"env": {"MCP_SERVER_ID": "mcp-github-trending"},
"enabled_tools": ["get_github_trending_repositories"],
"add_to_agents": ["researcher"],
}
}
},
"enable_background_investigation": False,
"report_style": "academic",
}
response = client.post("/api/chat/stream", json=request_data)
assert response.status_code == 200
assert response.headers["content-type"] == "text/event-stream; charset=utf-8"
class TestAstreamWorkflowGenerator:
@pytest.mark.asyncio
@patch("src.server.app.graph")
async def test_astream_workflow_generator_basic_flow(self, mock_graph):
# Mock AI message chunk
mock_message = AIMessageChunk(content="Hello world")
mock_message.id = "msg_123"
mock_message.response_metadata = {}
mock_message.tool_calls = []
mock_message.tool_call_chunks = []
# Mock the async stream - yield messages in the correct format
async def mock_astream(*args, **kwargs):
# Yield a tuple (message, metadata) instead of just [message]
yield ("agent1:subagent", "messages", (mock_message, {}))
mock_graph.astream = mock_astream
messages = [{"role": "user", "content": "Hello"}]
thread_id = "test_thread"
resources = []
generator = _astream_workflow_generator(
messages=messages,
thread_id=thread_id,
resources=resources,
max_plan_iterations=3,
max_step_num=10,
max_search_results=5,
auto_accepted_plan=True,
interrupt_feedback="",
mcp_settings={},
enable_background_investigation=False,
report_style=ReportStyle.ACADEMIC,
enable_deep_thinking=False,
)
events = []
async for event in generator:
events.append(event)
assert len(events) == 1
assert "event: message_chunk" in events[0]
assert "Hello world" in events[0]
# Check for the actual agent name that appears in the output
assert '"agent": "a"' in events[0]
@pytest.mark.asyncio
@patch("src.server.app.graph")
async def test_astream_workflow_generator_with_interrupt_feedback(self, mock_graph):
# Mock the async stream
async def mock_astream(*args, **kwargs):
# Verify that Command is passed as input when interrupt_feedback is provided
assert isinstance(args[0], Command)
assert "[edit_plan] Hello" in args[0].resume
yield ("agent1", "step1", {"test": "data"})
mock_graph.astream = mock_astream
messages = [{"role": "user", "content": "Hello"}]
generator = _astream_workflow_generator(
messages=messages,
thread_id="test_thread",
resources=[],
max_plan_iterations=3,
max_step_num=10,
max_search_results=5,
auto_accepted_plan=False,
interrupt_feedback="edit_plan",
mcp_settings={},
enable_background_investigation=False,
report_style=ReportStyle.ACADEMIC,
enable_deep_thinking=False,
)
events = []
async for event in generator:
events.append(event)
@pytest.mark.asyncio
@patch("src.server.app.graph")
async def test_astream_workflow_generator_interrupt_event(self, mock_graph):
# Mock interrupt data
mock_interrupt = MagicMock()
mock_interrupt.ns = ["interrupt_id"]
mock_interrupt.value = "Plan requires approval"
interrupt_data = {"__interrupt__": [mock_interrupt]}
async def mock_astream(*args, **kwargs):
yield ("agent1", "step1", interrupt_data)
mock_graph.astream = mock_astream
generator = _astream_workflow_generator(
messages=[],
thread_id="test_thread",
resources=[],
max_plan_iterations=3,
max_step_num=10,
max_search_results=5,
auto_accepted_plan=True,
interrupt_feedback="",
mcp_settings={},
enable_background_investigation=False,
report_style=ReportStyle.ACADEMIC,
enable_deep_thinking=False,
)
events = []
async for event in generator:
events.append(event)
assert len(events) == 1
assert "event: interrupt" in events[0]
assert "Plan requires approval" in events[0]
assert "interrupt_id" in events[0]
@pytest.mark.asyncio
@patch("src.server.app.graph")
async def test_astream_workflow_generator_tool_message(self, mock_graph):
# Mock tool message
mock_tool_message = ToolMessage(content="Tool result", tool_call_id="tool_123")
mock_tool_message.id = "msg_456"
async def mock_astream(*args, **kwargs):
yield ("agent1:subagent", "step1", (mock_tool_message, {}))
mock_graph.astream = mock_astream
generator = _astream_workflow_generator(
messages=[],
thread_id="test_thread",
resources=[],
max_plan_iterations=3,
max_step_num=10,
max_search_results=5,
auto_accepted_plan=True,
interrupt_feedback="",
mcp_settings={},
enable_background_investigation=False,
report_style=ReportStyle.ACADEMIC,
enable_deep_thinking=False,
)
events = []
async for event in generator:
events.append(event)
assert len(events) == 1
assert "event: tool_call_result" in events[0]
assert "Tool result" in events[0]
assert "tool_123" in events[0]
@pytest.mark.asyncio
@patch("src.server.app.graph")
async def test_astream_workflow_generator_ai_message_with_tool_calls(
self, mock_graph
):
# Mock AI message with tool calls
mock_ai_message = AIMessageChunk(content="Making tool call")
mock_ai_message.id = "msg_789"
mock_ai_message.response_metadata = {"finish_reason": "tool_calls"}
mock_ai_message.tool_calls = [{"name": "search", "args": {"query": "test"}}]
mock_ai_message.tool_call_chunks = [{"name": "search"}]
async def mock_astream(*args, **kwargs):
yield ("agent1:subagent", "step1", (mock_ai_message, {}))
mock_graph.astream = mock_astream
generator = _astream_workflow_generator(
messages=[],
thread_id="test_thread",
resources=[],
max_plan_iterations=3,
max_step_num=10,
max_search_results=5,
auto_accepted_plan=True,
interrupt_feedback="",
mcp_settings={},
enable_background_investigation=False,
report_style=ReportStyle.ACADEMIC,
enable_deep_thinking=False,
)
events = []
async for event in generator:
events.append(event)
assert len(events) == 1
assert "event: tool_calls" in events[0]
assert "Making tool call" in events[0]
assert "tool_calls" in events[0]
@pytest.mark.asyncio
@patch("src.server.app.graph")
async def test_astream_workflow_generator_ai_message_with_tool_call_chunks(
self, mock_graph
):
# Mock AI message with only tool call chunks
mock_ai_message = AIMessageChunk(content="Streaming tool call")
mock_ai_message.id = "msg_101"
mock_ai_message.response_metadata = {}
mock_ai_message.tool_calls = []
mock_ai_message.tool_call_chunks = [{"name": "search", "index": 0}]
async def mock_astream(*args, **kwargs):
yield ("agent1:subagent", "step1", (mock_ai_message, {}))
mock_graph.astream = mock_astream
generator = _astream_workflow_generator(
messages=[],
thread_id="test_thread",
resources=[],
max_plan_iterations=3,
max_step_num=10,
max_search_results=5,
auto_accepted_plan=True,
interrupt_feedback="",
mcp_settings={},
enable_background_investigation=False,
report_style=ReportStyle.ACADEMIC,
enable_deep_thinking=False,
)
events = []
async for event in generator:
events.append(event)
assert len(events) == 1
assert "event: tool_call_chunks" in events[0]
assert "Streaming tool call" in events[0]
@pytest.mark.asyncio
@patch("src.server.app.graph")
async def test_astream_workflow_generator_with_finish_reason(self, mock_graph):
# Mock AI message with finish reason
mock_ai_message = AIMessageChunk(content="Complete response")
mock_ai_message.id = "msg_finish"
mock_ai_message.response_metadata = {"finish_reason": "stop"}
mock_ai_message.tool_calls = []
mock_ai_message.tool_call_chunks = []
async def mock_astream(*args, **kwargs):
yield ("agent1:subagent", "step1", (mock_ai_message, {}))
mock_graph.astream = mock_astream
generator = _astream_workflow_generator(
messages=[],
thread_id="test_thread",
resources=[],
max_plan_iterations=3,
max_step_num=10,
max_search_results=5,
auto_accepted_plan=True,
interrupt_feedback="",
mcp_settings={},
enable_background_investigation=False,
report_style=ReportStyle.ACADEMIC,
enable_deep_thinking=False,
)
events = []
async for event in generator:
events.append(event)
assert len(events) == 1
assert "event: message_chunk" in events[0]
assert "finish_reason" in events[0]
assert "stop" in events[0]
@pytest.mark.asyncio
@patch("src.server.app.graph")
async def test_astream_workflow_generator_config_passed_correctly(self, mock_graph):
mock_ai_message = AIMessageChunk(content="Test")
mock_ai_message.id = "test_id"
mock_ai_message.response_metadata = {}
mock_ai_message.tool_calls = []
mock_ai_message.tool_call_chunks = []
        async def verify_config(*args, **kwargs):
            config = kwargs.get("config", {})
            assert config["thread_id"] == "test_thread"
            assert config["max_plan_iterations"] == 5
            assert config["max_step_num"] == 20
            assert config["max_search_results"] == 10
            assert config["report_style"] == ReportStyle.NEWS.value
            yield ("agent1", "messages", [mock_ai_message])

        mock_graph.astream = verify_config
        generator = _astream_workflow_generator(
            messages=[{"role": "user", "content": "Hello"}],
            thread_id="test_thread",
            resources=[],
            max_plan_iterations=5,
            max_step_num=20,
            max_search_results=10,
            auto_accepted_plan=True,
            interrupt_feedback="",
            mcp_settings={},
            enable_background_investigation=False,
            report_style=ReportStyle.NEWS,
            enable_deep_thinking=False,
        )
        # Drain the stream so verify_config's assertions actually execute.
        async for _ in generator:
            pass
class TestGenerateProseEndpoint:
@patch("src.server.app.build_prose_graph")
def test_generate_prose_success(self, mock_build_graph, client):
# Mock the workflow and its astream method
mock_workflow = MagicMock()
mock_build_graph.return_value = mock_workflow
class MockEvent:
def __init__(self, content):
self.content = content
async def mock_astream(*args, **kwargs):
yield (None, [MockEvent("Generated prose 1")])
yield (None, [MockEvent("Generated prose 2")])
mock_workflow.astream.return_value = mock_astream()
request_data = {
"prompt": "Write a story.",
"option": "default",
"command": "generate",
}
response = client.post("/api/prose/generate", json=request_data)
assert response.status_code == 200
assert response.headers["content-type"].startswith("text/event-stream")
# Read the streaming response content
content = b"".join(response.iter_bytes())
assert b"Generated prose 1" in content or b"Generated prose 2" in content
@patch("src.server.app.build_prose_graph")
def test_generate_prose_error(self, mock_build_graph, client):
mock_build_graph.side_effect = Exception("Prose generation failed")
request_data = {
"prompt": "Write a story.",
"option": "default",
"command": "generate",
}
response = client.post("/api/prose/generate", json=request_data)
assert response.status_code == 500
assert response.json()["detail"] == "Internal Server Error"
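Several of the streaming tests above assert on substrings such as `event: message_chunk` and `event: interrupt`. The server-sent-events wire format they are matching can be sketched as follows; `make_sse_event` is a hypothetical helper for illustration, not a function from the codebase:

```python
import json


def make_sse_event(event_type: str, payload: dict) -> str:
    # One server-sent-event frame: an "event:" line, a "data:" line carrying
    # a JSON payload, and a blank line that terminates the frame.
    return f"event: {event_type}\ndata: {json.dumps(payload, ensure_ascii=False)}\n\n"


frame = make_sse_event("message_chunk", {"agent": "coordinator", "content": "Hello"})
print(frame)
```

Matching on substrings of such frames, as the tests do, avoids coupling the assertions to the full serialized payload.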
@@ -0,0 +1,167 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import pytest
from pydantic import ValidationError
from src.config.report_style import ReportStyle
from src.rag.retriever import Resource
from unittest.mock import AsyncMock, patch, MagicMock
from fastapi import HTTPException
from src.server.chat_request import (
ContentItem,
ChatMessage,
ChatRequest,
TTSRequest,
GeneratePodcastRequest,
GeneratePPTRequest,
GenerateProseRequest,
EnhancePromptRequest,
)
import src.server.mcp_utils as mcp_utils
def test_content_item_text_and_image():
item_text = ContentItem(type="text", text="hello")
assert item_text.type == "text"
assert item_text.text == "hello"
assert item_text.image_url is None
item_image = ContentItem(type="image", image_url="http://img.com/1.png")
assert item_image.type == "image"
assert item_image.text is None
assert item_image.image_url == "http://img.com/1.png"
def test_chat_message_with_string_content():
msg = ChatMessage(role="user", content="Hello!")
assert msg.role == "user"
assert msg.content == "Hello!"
def test_chat_message_with_content_items():
items = [ContentItem(type="text", text="hi")]
msg = ChatMessage(role="assistant", content=items)
assert msg.role == "assistant"
assert isinstance(msg.content, list)
assert msg.content[0].type == "text"
def test_chat_request_defaults():
req = ChatRequest()
assert req.messages == []
assert req.resources == []
assert req.debug is False
assert req.thread_id == "__default__"
assert req.max_plan_iterations == 1
assert req.max_step_num == 3
assert req.max_search_results == 3
assert req.auto_accepted_plan is False
assert req.interrupt_feedback is None
assert req.mcp_settings is None
assert req.enable_background_investigation is True
assert req.report_style == ReportStyle.ACADEMIC
def test_chat_request_with_values():
resource = Resource(
name="test", type="doc", uri="some-uri-value", title="some-title-value"
)
msg = ChatMessage(role="user", content="hi")
req = ChatRequest(
messages=[msg],
resources=[resource],
debug=True,
thread_id="tid",
max_plan_iterations=2,
max_step_num=5,
max_search_results=10,
auto_accepted_plan=True,
interrupt_feedback="stop",
mcp_settings={"foo": "bar"},
enable_background_investigation=False,
report_style="academic",
)
assert req.messages[0].role == "user"
assert req.debug is True
assert req.thread_id == "tid"
assert req.max_plan_iterations == 2
assert req.max_step_num == 5
assert req.max_search_results == 10
assert req.auto_accepted_plan is True
assert req.interrupt_feedback == "stop"
assert req.mcp_settings == {"foo": "bar"}
assert req.enable_background_investigation is False
assert req.report_style == ReportStyle.ACADEMIC
def test_tts_request_defaults():
req = TTSRequest(text="hello")
assert req.text == "hello"
assert req.voice_type == "BV700_V2_streaming"
assert req.encoding == "mp3"
assert req.speed_ratio == 1.0
assert req.volume_ratio == 1.0
assert req.pitch_ratio == 1.0
assert req.text_type == "plain"
assert req.with_frontend == 1
assert req.frontend_type == "unitTson"
def test_generate_podcast_request():
req = GeneratePodcastRequest(content="Podcast content")
assert req.content == "Podcast content"
def test_generate_ppt_request():
req = GeneratePPTRequest(content="PPT content")
assert req.content == "PPT content"
def test_generate_prose_request():
req = GenerateProseRequest(prompt="Write a poem", option="poet", command="rhyme")
assert req.prompt == "Write a poem"
assert req.option == "poet"
assert req.command == "rhyme"
req2 = GenerateProseRequest(prompt="Write", option="short")
assert req2.command == ""
def test_enhance_prompt_request_defaults():
req = EnhancePromptRequest(prompt="Improve this")
assert req.prompt == "Improve this"
assert req.context == ""
assert req.report_style == "academic"
def test_content_item_validation_error():
with pytest.raises(ValidationError):
ContentItem() # missing required 'type'
def test_chat_message_validation_error():
with pytest.raises(ValidationError):
ChatMessage(role="user") # missing content
def test_tts_request_validation_error():
with pytest.raises(ValidationError):
TTSRequest() # missing required 'text'
@pytest.mark.asyncio
@patch("src.server.mcp_utils._get_tools_from_client_session", new_callable=AsyncMock)
@patch("src.server.mcp_utils.StdioServerParameters")
@patch("src.server.mcp_utils.stdio_client")
async def test_load_mcp_tools_exception_handling(
    mock_stdio_client, mock_StdioServerParameters, mock_get_tools
):
mock_get_tools.side_effect = Exception("unexpected error")
mock_StdioServerParameters.return_value = MagicMock()
mock_stdio_client.return_value = MagicMock()
with pytest.raises(HTTPException) as exc:
        await mcp_utils.load_mcp_tools(server_type="stdio", command="foo")
assert exc.value.status_code == 500
assert "unexpected error" in exc.value.detail
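The exception-handling test above relies on `AsyncMock` raising from inside an awaited call. The pattern, reduced to the standard library (`fetch_tools` and `load_tools` are illustrative names, not the project's API):

```python
import asyncio
from unittest.mock import AsyncMock

# side_effect makes the awaited call raise, exactly as a failing MCP server
# would, so the error-wrapping branch can be exercised in isolation.
fetch_tools = AsyncMock(side_effect=RuntimeError("unexpected error"))


async def load_tools():
    try:
        return await fetch_tools()
    except Exception as exc:
        # Mirror the server behaviour of wrapping unexpected errors into a
        # 500-style response instead of letting them propagate.
        return {"status_code": 500, "detail": str(exc)}


result = asyncio.run(load_tools())
print(result)
```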
@@ -0,0 +1,73 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import pytest
from pydantic import ValidationError
from src.server.mcp_request import MCPServerMetadataRequest, MCPServerMetadataResponse
def test_mcp_server_metadata_request_required_fields():
# 'transport' is required
req = MCPServerMetadataRequest(transport="stdio")
assert req.transport == "stdio"
assert req.command is None
assert req.args is None
assert req.url is None
assert req.env is None
assert req.timeout_seconds is None
def test_mcp_server_metadata_request_optional_fields():
req = MCPServerMetadataRequest(
transport="sse",
command="run",
args=["--foo", "bar"],
url="http://localhost:8080",
env={"FOO": "BAR"},
timeout_seconds=30,
)
assert req.transport == "sse"
assert req.command == "run"
assert req.args == ["--foo", "bar"]
assert req.url == "http://localhost:8080"
assert req.env == {"FOO": "BAR"}
assert req.timeout_seconds == 30
def test_mcp_server_metadata_request_missing_transport():
with pytest.raises(ValidationError):
MCPServerMetadataRequest()
def test_mcp_server_metadata_response_required_fields():
resp = MCPServerMetadataResponse(transport="stdio")
assert resp.transport == "stdio"
assert resp.command is None
assert resp.args is None
assert resp.url is None
assert resp.env is None
assert resp.tools == []
def test_mcp_server_metadata_response_optional_fields():
resp = MCPServerMetadataResponse(
transport="sse",
command="run",
args=["--foo", "bar"],
url="http://localhost:8080",
env={"FOO": "BAR"},
tools=["tool1", "tool2"],
)
assert resp.transport == "sse"
assert resp.command == "run"
assert resp.args == ["--foo", "bar"]
assert resp.url == "http://localhost:8080"
assert resp.env == {"FOO": "BAR"}
assert resp.tools == ["tool1", "tool2"]
def test_mcp_server_metadata_response_tools_default_factory():
resp1 = MCPServerMetadataResponse(transport="stdio")
resp2 = MCPServerMetadataResponse(transport="stdio")
resp1.tools.append("toolA")
assert resp2.tools == [] # Should not share list between instances
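The final test above pins down why `tools` uses a `default_factory` rather than a literal `[]` default. The same pitfall exists for stdlib dataclasses; a minimal sketch with an illustrative stand-in class:

```python
from dataclasses import dataclass, field


@dataclass
class MetadataResponse:
    transport: str
    # default_factory builds a fresh list per instance; a shared mutable
    # default would let one instance's mutation leak into every other.
    tools: list = field(default_factory=list)


a = MetadataResponse(transport="stdio")
b = MetadataResponse(transport="stdio")
a.tools.append("toolA")
print(b.tools)  # → []
```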
@@ -0,0 +1,121 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import pytest
from unittest.mock import AsyncMock, patch, MagicMock
from fastapi import HTTPException
import src.server.mcp_utils as mcp_utils
@pytest.mark.asyncio
@patch("src.server.mcp_utils.ClientSession")
async def test__get_tools_from_client_session_success(mock_ClientSession):
mock_read = AsyncMock()
mock_write = AsyncMock()
mock_context_manager = AsyncMock()
mock_context_manager.__aenter__.return_value = (mock_read, mock_write)
mock_context_manager.__aexit__.return_value = None
mock_session = AsyncMock()
mock_session.__aenter__.return_value = mock_session
mock_session.__aexit__.return_value = None
mock_session.initialize = AsyncMock()
mock_tools_obj = MagicMock()
mock_tools_obj.tools = ["tool1", "tool2"]
mock_session.list_tools = AsyncMock(return_value=mock_tools_obj)
mock_ClientSession.return_value = mock_session
result = await mcp_utils._get_tools_from_client_session(
mock_context_manager, timeout_seconds=5
)
assert result == ["tool1", "tool2"]
mock_session.initialize.assert_awaited_once()
mock_session.list_tools.assert_awaited_once()
@pytest.mark.asyncio
@patch("src.server.mcp_utils._get_tools_from_client_session", new_callable=AsyncMock)
@patch("src.server.mcp_utils.StdioServerParameters")
@patch("src.server.mcp_utils.stdio_client")
async def test_load_mcp_tools_stdio_success(
mock_stdio_client, mock_StdioServerParameters, mock_get_tools
):
mock_get_tools.return_value = ["toolA"]
params = MagicMock()
mock_StdioServerParameters.return_value = params
mock_client = MagicMock()
mock_stdio_client.return_value = mock_client
result = await mcp_utils.load_mcp_tools(
server_type="stdio",
command="echo",
args=["foo"],
env={"FOO": "BAR"},
timeout_seconds=3,
)
assert result == ["toolA"]
mock_StdioServerParameters.assert_called_once_with(
command="echo", args=["foo"], env={"FOO": "BAR"}
)
mock_stdio_client.assert_called_once_with(params)
mock_get_tools.assert_awaited_once_with(mock_client, 3)
@pytest.mark.asyncio
async def test_load_mcp_tools_stdio_missing_command():
with pytest.raises(HTTPException) as exc:
await mcp_utils.load_mcp_tools(server_type="stdio")
assert exc.value.status_code == 400
assert "Command is required" in exc.value.detail
@pytest.mark.asyncio
@patch("src.server.mcp_utils._get_tools_from_client_session", new_callable=AsyncMock)
@patch("src.server.mcp_utils.sse_client")
async def test_load_mcp_tools_sse_success(mock_sse_client, mock_get_tools):
mock_get_tools.return_value = ["toolB"]
mock_client = MagicMock()
mock_sse_client.return_value = mock_client
result = await mcp_utils.load_mcp_tools(
server_type="sse",
url="http://localhost:1234",
timeout_seconds=7,
)
assert result == ["toolB"]
mock_sse_client.assert_called_once_with(url="http://localhost:1234")
mock_get_tools.assert_awaited_once_with(mock_client, 7)
@pytest.mark.asyncio
async def test_load_mcp_tools_sse_missing_url():
with pytest.raises(HTTPException) as exc:
await mcp_utils.load_mcp_tools(server_type="sse")
assert exc.value.status_code == 400
assert "URL is required" in exc.value.detail
@pytest.mark.asyncio
async def test_load_mcp_tools_unsupported_type():
with pytest.raises(HTTPException) as exc:
await mcp_utils.load_mcp_tools(server_type="unknown")
assert exc.value.status_code == 400
assert "Unsupported server type" in exc.value.detail
@pytest.mark.asyncio
@patch("src.server.mcp_utils._get_tools_from_client_session", new_callable=AsyncMock)
@patch("src.server.mcp_utils.StdioServerParameters")
@patch("src.server.mcp_utils.stdio_client")
async def test_load_mcp_tools_exception_handling(
mock_stdio_client, mock_StdioServerParameters, mock_get_tools
):
mock_get_tools.side_effect = Exception("unexpected error")
mock_StdioServerParameters.return_value = MagicMock()
mock_stdio_client.return_value = MagicMock()
with pytest.raises(HTTPException) as exc:
await mcp_utils.load_mcp_tools(server_type="stdio", command="foo")
assert exc.value.status_code == 500
assert "unexpected error" in exc.value.detail
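`test__get_tools_from_client_session_success` above builds a mock async context manager by hand. The core of that pattern, isolated to the standard library (names are illustrative):

```python
import asyncio
from unittest.mock import AsyncMock

# AsyncMock supports async magic methods, so configuring __aenter__ is
# enough to satisfy an `async with client as (read, write):` block.
client_cm = AsyncMock()
client_cm.__aenter__.return_value = ("read_stream", "write_stream")


async def open_streams(cm):
    async with cm as (read, write):
        return read, write


streams = asyncio.run(open_streams(client_cm))
print(streams)
```

The same mock also records `__aexit__` awaits, which lets a test verify the context manager was closed.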
@@ -0,0 +1,109 @@
from unittest.mock import Mock, patch
from src.tools.crawl import crawl_tool
class TestCrawlTool:
@patch("src.tools.crawl.Crawler")
def test_crawl_tool_success(self, mock_crawler_class):
# Arrange
mock_crawler = Mock()
mock_article = Mock()
mock_article.to_markdown.return_value = (
"# Test Article\nThis is test content." * 100
)
mock_crawler.crawl.return_value = mock_article
mock_crawler_class.return_value = mock_crawler
url = "https://example.com"
# Act
result = crawl_tool(url)
# Assert
assert isinstance(result, dict)
assert result["url"] == url
assert "crawled_content" in result
assert len(result["crawled_content"]) <= 1000
mock_crawler_class.assert_called_once()
mock_crawler.crawl.assert_called_once_with(url)
mock_article.to_markdown.assert_called_once()
@patch("src.tools.crawl.Crawler")
def test_crawl_tool_short_content(self, mock_crawler_class):
# Arrange
mock_crawler = Mock()
mock_article = Mock()
short_content = "Short content"
mock_article.to_markdown.return_value = short_content
mock_crawler.crawl.return_value = mock_article
mock_crawler_class.return_value = mock_crawler
url = "https://example.com"
# Act
result = crawl_tool(url)
# Assert
assert result["crawled_content"] == short_content
@patch("src.tools.crawl.Crawler")
@patch("src.tools.crawl.logger")
def test_crawl_tool_crawler_exception(self, mock_logger, mock_crawler_class):
# Arrange
mock_crawler = Mock()
mock_crawler.crawl.side_effect = Exception("Network error")
mock_crawler_class.return_value = mock_crawler
url = "https://example.com"
# Act
result = crawl_tool(url)
# Assert
assert isinstance(result, str)
assert "Failed to crawl" in result
assert "Network error" in result
mock_logger.error.assert_called_once()
@patch("src.tools.crawl.Crawler")
@patch("src.tools.crawl.logger")
def test_crawl_tool_crawler_instantiation_exception(
self, mock_logger, mock_crawler_class
):
# Arrange
mock_crawler_class.side_effect = Exception("Crawler init error")
url = "https://example.com"
# Act
result = crawl_tool(url)
# Assert
assert isinstance(result, str)
assert "Failed to crawl" in result
assert "Crawler init error" in result
mock_logger.error.assert_called_once()
@patch("src.tools.crawl.Crawler")
@patch("src.tools.crawl.logger")
def test_crawl_tool_markdown_conversion_exception(
self, mock_logger, mock_crawler_class
):
# Arrange
mock_crawler = Mock()
mock_article = Mock()
mock_article.to_markdown.side_effect = Exception("Markdown conversion error")
mock_crawler.crawl.return_value = mock_article
mock_crawler_class.return_value = mock_crawler
url = "https://example.com"
# Act
result = crawl_tool(url)
# Assert
assert isinstance(result, str)
assert "Failed to crawl" in result
assert "Markdown conversion error" in result
mock_logger.error.assert_called_once()
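The first crawl test asserts `len(result["crawled_content"]) <= 1000`, i.e. the tool caps long articles. That contract can be sketched as follows; `truncate_content` is a hypothetical helper, assuming a plain character cap:

```python
def truncate_content(markdown: str, limit: int = 1000) -> str:
    # Cap crawled markdown so oversized pages cannot flood the agent's
    # context window; short content passes through unchanged.
    return markdown[:limit]


long_text = "# Test Article\nThis is test content." * 100
capped = truncate_content(long_text)
print(len(capped))  # → 1000
```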
@@ -0,0 +1,119 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
from unittest.mock import Mock, call, patch
from src.tools.decorators import create_logged_tool
class MockBaseTool:
"""Mock base tool class for testing."""
def _run(self, *args, **kwargs):
return "base_result"
class TestLoggedToolMixin:
def test_run_calls_log_operation(self):
"""Test that _run calls _log_operation with correct parameters."""
# Create a logged tool instance
LoggedTool = create_logged_tool(MockBaseTool)
tool = LoggedTool()
# Mock the _log_operation method
tool._log_operation = Mock()
# Call _run with test parameters
args = ("arg1", "arg2")
kwargs = {"key1": "value1", "key2": "value2"}
tool._run(*args, **kwargs)
# Verify _log_operation was called with correct parameters
tool._log_operation.assert_called_once_with("_run", *args, **kwargs)
def test_run_calls_super_run(self):
"""Test that _run calls the parent class _run method."""
# Create a logged tool instance
LoggedTool = create_logged_tool(MockBaseTool)
tool = LoggedTool()
# Mock the parent _run method
with patch.object(
MockBaseTool, "_run", return_value="mocked_result"
) as mock_super_run:
args = ("arg1", "arg2")
kwargs = {"key1": "value1"}
result = tool._run(*args, **kwargs)
# Verify super()._run was called with correct parameters
mock_super_run.assert_called_once_with(*args, **kwargs)
# Verify the result is returned
assert result == "mocked_result"
def test_run_logs_result(self):
"""Test that _run logs the result with debug level."""
LoggedTool = create_logged_tool(MockBaseTool)
tool = LoggedTool()
with patch("src.tools.decorators.logger.debug") as mock_debug:
tool._run("test_arg")
# Verify debug log was called with correct message
mock_debug.assert_has_calls(
[
call("Tool MockBaseTool._run called with parameters: test_arg"),
call("Tool MockBaseTool returned: base_result"),
]
)
def test_run_returns_super_result(self):
"""Test that _run returns the result from parent class."""
LoggedTool = create_logged_tool(MockBaseTool)
tool = LoggedTool()
result = tool._run()
assert result == "base_result"
def test_run_with_no_args(self):
"""Test _run method with no arguments."""
LoggedTool = create_logged_tool(MockBaseTool)
tool = LoggedTool()
with patch("src.tools.decorators.logger.debug") as mock_debug:
tool._log_operation = Mock()
result = tool._run()
# Verify _log_operation called with no args
tool._log_operation.assert_called_once_with("_run")
# Verify result logging
mock_debug.assert_called_once()
assert result == "base_result"
def test_run_with_mixed_args_kwargs(self):
"""Test _run method with both positional and keyword arguments."""
LoggedTool = create_logged_tool(MockBaseTool)
tool = LoggedTool()
tool._log_operation = Mock()
args = ("pos1", "pos2")
kwargs = {"kw1": "val1", "kw2": "val2"}
result = tool._run(*args, **kwargs)
# Verify all arguments passed correctly
tool._log_operation.assert_called_once_with("_run", *args, **kwargs)
assert result == "base_result"
def test_run_class_name_replacement(self):
"""Test that class name 'Logged' prefix is correctly removed in logging."""
LoggedTool = create_logged_tool(MockBaseTool)
tool = LoggedTool()
with patch("src.tools.decorators.logger.debug") as mock_debug:
tool._run()
# Verify the logged class name has 'Logged' prefix removed
call_args = mock_debug.call_args[0][0]
assert "Tool MockBaseTool returned:" in call_args
assert "LoggedMockBaseTool" not in call_args
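The decorator tests above exercise a class factory that wraps `_run` with debug logging. A self-contained sketch of that pattern, assuming the real `create_logged_tool` differs in detail:

```python
import logging

logger = logging.getLogger("tools")


class EchoTool:
    def _run(self, *args, **kwargs):
        return "base_result"


def create_logged_tool(base_cls):
    # Dynamically subclass the tool and wrap _run; the "Logged" prefix is
    # stripped when reporting the tool name, as the last test checks.
    class LoggedTool(base_cls):
        def _run(self, *args, **kwargs):
            name = type(self).__name__.replace("Logged", "", 1)
            logger.debug("Tool %s._run called with parameters: %s", name, args)
            result = super()._run(*args, **kwargs)
            logger.debug("Tool %s returned: %s", name, result)
            return result

    LoggedTool.__name__ = f"Logged{base_cls.__name__}"
    return LoggedTool


tool = create_logged_tool(EchoTool)()
print(tool._run("x"))  # → base_result
```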
@@ -0,0 +1,147 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import pytest
from unittest.mock import patch
from src.tools.python_repl import python_repl_tool
class TestPythonReplTool:
@patch("src.tools.python_repl.repl")
@patch("src.tools.python_repl.logger")
def test_successful_code_execution(self, mock_logger, mock_repl):
# Arrange
code = "print('Hello, World!')"
expected_output = "Hello, World!\n"
mock_repl.run.return_value = expected_output
# Act
result = python_repl_tool(code)
# Assert
mock_repl.run.assert_called_once_with(code)
mock_logger.info.assert_called_with("Code execution successful")
assert "Successfully executed:" in result
assert code in result
assert expected_output in result
@patch("src.tools.python_repl.repl")
@patch("src.tools.python_repl.logger")
def test_invalid_input_type(self, mock_logger, mock_repl):
# Arrange
invalid_code = 123
# Act & Assert - expect ValidationError from LangChain
with pytest.raises(Exception) as exc_info:
python_repl_tool(invalid_code)
# Verify that it's a validation error
assert "ValidationError" in str(
type(exc_info.value)
) or "validation error" in str(exc_info.value)
# The REPL should not be called since validation fails first
mock_repl.run.assert_not_called()
@patch("src.tools.python_repl.repl")
@patch("src.tools.python_repl.logger")
def test_code_execution_with_error_in_result(self, mock_logger, mock_repl):
# Arrange
code = "invalid_function()"
error_result = "NameError: name 'invalid_function' is not defined"
mock_repl.run.return_value = error_result
# Act
result = python_repl_tool(code)
# Assert
mock_repl.run.assert_called_once_with(code)
mock_logger.error.assert_called_with(error_result)
assert "Error executing code:" in result
assert code in result
assert error_result in result
@patch("src.tools.python_repl.repl")
@patch("src.tools.python_repl.logger")
def test_code_execution_with_exception_in_result(self, mock_logger, mock_repl):
# Arrange
code = "1/0"
exception_result = "ZeroDivisionError: division by zero"
mock_repl.run.return_value = exception_result
# Act
result = python_repl_tool(code)
# Assert
mock_repl.run.assert_called_once_with(code)
mock_logger.error.assert_called_with(exception_result)
assert "Error executing code:" in result
assert code in result
assert exception_result in result
@patch("src.tools.python_repl.repl")
@patch("src.tools.python_repl.logger")
def test_code_execution_raises_exception(self, mock_logger, mock_repl):
# Arrange
code = "print('test')"
exception = RuntimeError("REPL failed")
mock_repl.run.side_effect = exception
# Act
result = python_repl_tool(code)
# Assert
mock_repl.run.assert_called_once_with(code)
mock_logger.error.assert_called_with(repr(exception))
assert "Error executing code:" in result
assert code in result
assert repr(exception) in result
@patch("src.tools.python_repl.repl")
@patch("src.tools.python_repl.logger")
def test_successful_execution_with_calculation(self, mock_logger, mock_repl):
# Arrange
code = "result = 2 + 3\nprint(result)"
expected_output = "5\n"
mock_repl.run.return_value = expected_output
# Act
result = python_repl_tool(code)
# Assert
mock_repl.run.assert_called_once_with(code)
mock_logger.info.assert_any_call("Executing Python code")
mock_logger.info.assert_any_call("Code execution successful")
assert "Successfully executed:" in result
assert code in result
assert expected_output in result
@patch("src.tools.python_repl.repl")
@patch("src.tools.python_repl.logger")
def test_empty_string_code(self, mock_logger, mock_repl):
# Arrange
code = ""
mock_repl.run.return_value = ""
# Act
result = python_repl_tool(code)
# Assert
mock_repl.run.assert_called_once_with(code)
mock_logger.info.assert_called_with("Code execution successful")
assert "Successfully executed:" in result
@patch("src.tools.python_repl.repl")
@patch("src.tools.python_repl.logger")
def test_logging_calls(self, mock_logger, mock_repl):
# Arrange
code = "x = 1"
mock_repl.run.return_value = ""
# Act
python_repl_tool(code)
# Assert
mock_logger.info.assert_any_call("Executing Python code")
mock_logger.info.assert_any_call("Code execution successful")
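The contract these tests pin down — success and error strings wrapping the original code — can be sketched as follows. This is a minimal stand-in, not the real implementation: `repl_run` is a hypothetical substitute for the LangChain `PythonREPL.run` method that `src.tools.python_repl.repl` presumably exposes, and the substring check on `"Error"` is an assumed heuristic.

```python
# Minimal sketch of the result contract asserted above (assumptions noted in the lead-in).
def run_python_code(code, repl_run):
    if not isinstance(code, str):
        # the real tool raises a validation error before the REPL is ever called
        raise ValueError("Invalid input: code must be a string")
    try:
        output = repl_run(code)
    except BaseException as exc:  # the REPL call itself failed
        output = repr(exc)
    if "Error" in output:  # REPL returns tracebacks as plain strings
        return f"Error executing code:\n```python\n{code}\n```\nError: {output}"
    return f"Successfully executed:\n```python\n{code}\n```\nStdout: {output}"
```

Note how both branches embed the original `code`, which is why the tests can assert `code in result` regardless of outcome.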
@@ -0,0 +1,54 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import os
import pytest
from unittest.mock import patch
from src.tools.search import get_web_search_tool
from src.config import SearchEngine
class TestGetWebSearchTool:
@patch("src.tools.search.SELECTED_SEARCH_ENGINE", SearchEngine.TAVILY.value)
def test_get_web_search_tool_tavily(self):
tool = get_web_search_tool(max_search_results=5)
assert tool.name == "web_search"
assert tool.max_results == 5
assert tool.include_raw_content is True
assert tool.include_images is True
assert tool.include_image_descriptions is True
@patch("src.tools.search.SELECTED_SEARCH_ENGINE", SearchEngine.DUCKDUCKGO.value)
def test_get_web_search_tool_duckduckgo(self):
tool = get_web_search_tool(max_search_results=3)
assert tool.name == "web_search"
assert tool.max_results == 3
@patch("src.tools.search.SELECTED_SEARCH_ENGINE", SearchEngine.BRAVE_SEARCH.value)
@patch.dict(os.environ, {"BRAVE_SEARCH_API_KEY": "test_api_key"})
def test_get_web_search_tool_brave(self):
tool = get_web_search_tool(max_search_results=4)
assert tool.name == "web_search"
assert tool.search_wrapper.api_key == "test_api_key"
@patch("src.tools.search.SELECTED_SEARCH_ENGINE", SearchEngine.ARXIV.value)
def test_get_web_search_tool_arxiv(self):
tool = get_web_search_tool(max_search_results=2)
assert tool.name == "web_search"
assert tool.api_wrapper.top_k_results == 2
assert tool.api_wrapper.load_max_docs == 2
assert tool.api_wrapper.load_all_available_meta is True
@patch("src.tools.search.SELECTED_SEARCH_ENGINE", "unsupported_engine")
def test_get_web_search_tool_unsupported_engine(self):
with pytest.raises(
ValueError, match="Unsupported search engine: unsupported_engine"
):
get_web_search_tool(max_search_results=1)
@patch("src.tools.search.SELECTED_SEARCH_ENGINE", SearchEngine.BRAVE_SEARCH.value)
@patch.dict(os.environ, {}, clear=True)
def test_get_web_search_tool_brave_no_api_key(self):
tool = get_web_search_tool(max_search_results=1)
assert tool.search_wrapper.api_key == ""
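The factory pattern these tests exercise — dispatch on a module-level engine name, raising on anything unknown — can be sketched with a registry. Everything below is a hypothetical stand-in: `make_search_tool` is not the real `get_web_search_tool`, and the dict-shaped "tools" only mirror the attributes asserted above.

```python
from enum import Enum


class SearchEngine(Enum):  # assumed shape of src.config.SearchEngine
    TAVILY = "tavily"
    DUCKDUCKGO = "duckduckgo"
    BRAVE_SEARCH = "brave_search"
    ARXIV = "arxiv"


def make_search_tool(engine: str, max_search_results: int) -> dict:
    """Hypothetical sketch of get_web_search_tool: one builder per engine name."""
    builders = {
        SearchEngine.TAVILY.value: lambda: {"name": "web_search", "max_results": max_search_results},
        SearchEngine.DUCKDUCKGO.value: lambda: {"name": "web_search", "max_results": max_search_results},
        SearchEngine.BRAVE_SEARCH.value: lambda: {"name": "web_search", "max_results": max_search_results},
        SearchEngine.ARXIV.value: lambda: {"name": "web_search", "top_k_results": max_search_results},
    }
    if engine not in builders:
        # matches the error message the unsupported-engine test expects
        raise ValueError(f"Unsupported search engine: {engine}")
    return builders[engine]()
```

Patching the module-level `SELECTED_SEARCH_ENGINE` constant, as the tests do, is what makes this dispatch testable without touching environment configuration.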
@@ -0,0 +1,206 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import json
import pytest
from unittest.mock import Mock, patch, AsyncMock, MagicMock
import requests
from src.tools.tavily_search.tavily_search_api_wrapper import (
EnhancedTavilySearchAPIWrapper,
)
class TestEnhancedTavilySearchAPIWrapper:
@pytest.fixture
def wrapper(self):
with patch(
"src.tools.tavily_search.tavily_search_api_wrapper.OriginalTavilySearchAPIWrapper"
):
wrapper = EnhancedTavilySearchAPIWrapper(tavily_api_key="dummy-key")
# The parent class is mocked, so initialization won't fail
return wrapper
@pytest.fixture
def mock_response_data(self):
return {
"results": [
{
"title": "Test Title",
"url": "https://example.com",
"content": "Test content",
"score": 0.9,
"raw_content": "Raw test content",
}
],
"images": [
{
"url": "https://example.com/image.jpg",
"description": "Test image description",
}
],
}
@patch("src.tools.tavily_search.tavily_search_api_wrapper.requests.post")
def test_raw_results_success(self, mock_post, wrapper, mock_response_data):
mock_response = Mock()
mock_response.json.return_value = mock_response_data
mock_response.raise_for_status.return_value = None
mock_post.return_value = mock_response
result = wrapper.raw_results("test query", max_results=10)
assert result == mock_response_data
mock_post.assert_called_once()
call_args = mock_post.call_args
assert "json" in call_args.kwargs
assert call_args.kwargs["json"]["query"] == "test query"
assert call_args.kwargs["json"]["max_results"] == 10
@patch("src.tools.tavily_search.tavily_search_api_wrapper.requests.post")
def test_raw_results_with_all_parameters(
self, mock_post, wrapper, mock_response_data
):
mock_response = Mock()
mock_response.json.return_value = mock_response_data
mock_response.raise_for_status.return_value = None
mock_post.return_value = mock_response
result = wrapper.raw_results(
"test query",
max_results=3,
search_depth="basic",
include_domains=["example.com"],
exclude_domains=["spam.com"],
include_answer=True,
include_raw_content=True,
include_images=True,
include_image_descriptions=True,
)
assert result == mock_response_data
call_args = mock_post.call_args
params = call_args.kwargs["json"]
assert params["include_domains"] == ["example.com"]
assert params["exclude_domains"] == ["spam.com"]
assert params["include_answer"] is True
assert params["include_raw_content"] is True
@patch("src.tools.tavily_search.tavily_search_api_wrapper.requests.post")
def test_raw_results_http_error(self, mock_post, wrapper):
mock_response = Mock()
mock_response.raise_for_status.side_effect = requests.HTTPError("API Error")
mock_post.return_value = mock_response
with pytest.raises(requests.HTTPError):
wrapper.raw_results("test query")
@pytest.mark.asyncio
async def test_raw_results_async_success(self, wrapper, mock_response_data):
# Create a mock that acts as both the response and its context manager
mock_response_cm = AsyncMock()
mock_response_cm.__aenter__ = AsyncMock(return_value=mock_response_cm)
mock_response_cm.__aexit__ = AsyncMock(return_value=None)
mock_response_cm.status = 200
mock_response_cm.text = AsyncMock(return_value=json.dumps(mock_response_data))
# Create mock session that returns the context manager
mock_session = AsyncMock()
mock_session.post = MagicMock(
return_value=mock_response_cm
) # Use MagicMock, not AsyncMock
# Create mock session class
mock_session_cm = AsyncMock()
mock_session_cm.__aenter__ = AsyncMock(return_value=mock_session)
mock_session_cm.__aexit__ = AsyncMock(return_value=None)
with patch(
"src.tools.tavily_search.tavily_search_api_wrapper.aiohttp.ClientSession",
return_value=mock_session_cm,
):
result = await wrapper.raw_results_async("test query")
assert result == mock_response_data
@pytest.mark.asyncio
async def test_raw_results_async_error(self, wrapper):
# Create a mock that acts as both the response and its context manager
mock_response_cm = AsyncMock()
mock_response_cm.__aenter__ = AsyncMock(return_value=mock_response_cm)
mock_response_cm.__aexit__ = AsyncMock(return_value=None)
mock_response_cm.status = 400
mock_response_cm.reason = "Bad Request"
# Create mock session that returns the context manager
mock_session = AsyncMock()
mock_session.post = MagicMock(
return_value=mock_response_cm
) # Use MagicMock, not AsyncMock
# Create mock session class
mock_session_cm = AsyncMock()
mock_session_cm.__aenter__ = AsyncMock(return_value=mock_session)
mock_session_cm.__aexit__ = AsyncMock(return_value=None)
with patch(
"src.tools.tavily_search.tavily_search_api_wrapper.aiohttp.ClientSession",
return_value=mock_session_cm,
):
with pytest.raises(Exception, match="Error 400: Bad Request"):
await wrapper.raw_results_async("test query")
def test_clean_results_with_images(self, wrapper, mock_response_data):
result = wrapper.clean_results_with_images(mock_response_data)
assert len(result) == 2
# Test page result
page_result = result[0]
assert page_result["type"] == "page"
assert page_result["title"] == "Test Title"
assert page_result["url"] == "https://example.com"
assert page_result["content"] == "Test content"
assert page_result["score"] == 0.9
assert page_result["raw_content"] == "Raw test content"
# Test image result
image_result = result[1]
assert image_result["type"] == "image"
assert image_result["image_url"] == "https://example.com/image.jpg"
assert image_result["image_description"] == "Test image description"
def test_clean_results_without_raw_content(self, wrapper):
data = {
"results": [
{
"title": "Test Title",
"url": "https://example.com",
"content": "Test content",
"score": 0.9,
}
],
"images": [],
}
result = wrapper.clean_results_with_images(data)
assert len(result) == 1
assert "raw_content" not in result[0]
def test_clean_results_empty_images(self, wrapper):
data = {
"results": [
{
"title": "Test Title",
"url": "https://example.com",
"content": "Test content",
"score": 0.9,
}
],
"images": [],
}
result = wrapper.clean_results_with_images(data)
assert len(result) == 1
assert result[0]["type"] == "page"
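The flattening behavior the three `clean_results_*` tests assert — pages first, `raw_content` only when present, then images — can be sketched directly. This is an illustrative reconstruction from the assertions above, not the wrapper's actual source.

```python
def clean_results_with_images(raw_results: dict) -> list[dict]:
    """Sketch: flatten Tavily 'results' and 'images' into one typed list."""
    cleaned = []
    for result in raw_results.get("results", []):
        page = {
            "type": "page",
            "title": result["title"],
            "url": result["url"],
            "content": result["content"],
            "score": result["score"],
        }
        if "raw_content" in result:  # omitted entirely when the API did not return it
            page["raw_content"] = result["raw_content"]
        cleaned.append(page)
    for image in raw_results.get("images", []):
        cleaned.append({
            "type": "image",
            "image_url": image["url"],
            "image_description": image.get("description", ""),
        })
    return cleaned
```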
@@ -0,0 +1,264 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import json
import pytest
from unittest.mock import Mock, patch, AsyncMock
from src.tools.tavily_search.tavily_search_results_with_images import (
TavilySearchResultsWithImages,
)
from src.tools.tavily_search.tavily_search_api_wrapper import (
EnhancedTavilySearchAPIWrapper,
)
class TestTavilySearchResultsWithImages:
@pytest.fixture
def mock_api_wrapper(self):
"""Create a mock API wrapper."""
wrapper = Mock(spec=EnhancedTavilySearchAPIWrapper)
return wrapper
@pytest.fixture
def search_tool(self, mock_api_wrapper):
"""Create a TavilySearchResultsWithImages instance with mocked dependencies."""
tool = TavilySearchResultsWithImages(
max_results=5,
include_answer=True,
include_raw_content=True,
include_images=True,
include_image_descriptions=True,
)
tool.api_wrapper = mock_api_wrapper
return tool
@pytest.fixture
def sample_raw_results(self):
"""Sample raw results from Tavily API."""
return {
"query": "test query",
"answer": "Test answer",
"images": ["https://example.com/image1.jpg"],
"results": [
{
"title": "Test Title",
"url": "https://example.com",
"content": "Test content",
"score": 0.95,
"raw_content": "Raw test content",
}
],
"response_time": 1.5,
}
@pytest.fixture
def sample_cleaned_results(self):
"""Sample cleaned results."""
return [
{
"title": "Test Title",
"url": "https://example.com",
"content": "Test content",
}
]
def test_init_default_values(self):
"""Test initialization with default values."""
tool = TavilySearchResultsWithImages()
assert tool.include_image_descriptions is False
assert isinstance(tool.api_wrapper, EnhancedTavilySearchAPIWrapper)
def test_init_custom_values(self):
"""Test initialization with custom values."""
tool = TavilySearchResultsWithImages(
max_results=10, include_image_descriptions=True
)
assert tool.max_results == 10
assert tool.include_image_descriptions is True
@patch("builtins.print")
def test_run_success(
self,
mock_print,
search_tool,
mock_api_wrapper,
sample_raw_results,
sample_cleaned_results,
):
"""Test successful synchronous run."""
mock_api_wrapper.raw_results.return_value = sample_raw_results
mock_api_wrapper.clean_results_with_images.return_value = sample_cleaned_results
result, raw = search_tool._run("test query")
assert result == sample_cleaned_results
assert raw == sample_raw_results
mock_api_wrapper.raw_results.assert_called_once_with(
"test query",
search_tool.max_results,
search_tool.search_depth,
search_tool.include_domains,
search_tool.exclude_domains,
search_tool.include_answer,
search_tool.include_raw_content,
search_tool.include_images,
search_tool.include_image_descriptions,
)
mock_api_wrapper.clean_results_with_images.assert_called_once_with(
sample_raw_results
)
mock_print.assert_called_once()
@patch("builtins.print")
def test_run_exception(self, mock_print, search_tool, mock_api_wrapper):
"""Test synchronous run with exception."""
mock_api_wrapper.raw_results.side_effect = Exception("API Error")
result, raw = search_tool._run("test query")
assert "API Error" in result
assert raw == {}
mock_api_wrapper.clean_results_with_images.assert_not_called()
@pytest.mark.asyncio
@patch("builtins.print")
async def test_arun_success(
self,
mock_print,
search_tool,
mock_api_wrapper,
sample_raw_results,
sample_cleaned_results,
):
"""Test successful asynchronous run."""
mock_api_wrapper.raw_results_async = AsyncMock(return_value=sample_raw_results)
mock_api_wrapper.clean_results_with_images.return_value = sample_cleaned_results
result, raw = await search_tool._arun("test query")
assert result == sample_cleaned_results
assert raw == sample_raw_results
mock_api_wrapper.raw_results_async.assert_called_once_with(
"test query",
search_tool.max_results,
search_tool.search_depth,
search_tool.include_domains,
search_tool.exclude_domains,
search_tool.include_answer,
search_tool.include_raw_content,
search_tool.include_images,
search_tool.include_image_descriptions,
)
mock_api_wrapper.clean_results_with_images.assert_called_once_with(
sample_raw_results
)
mock_print.assert_called_once()
@pytest.mark.asyncio
@patch("builtins.print")
async def test_arun_exception(self, mock_print, search_tool, mock_api_wrapper):
"""Test asynchronous run with exception."""
mock_api_wrapper.raw_results_async = AsyncMock(
side_effect=Exception("Async API Error")
)
result, raw = await search_tool._arun("test query")
assert "Async API Error" in result
assert raw == {}
mock_api_wrapper.clean_results_with_images.assert_not_called()
@patch("builtins.print")
def test_run_with_run_manager(
self,
mock_print,
search_tool,
mock_api_wrapper,
sample_raw_results,
sample_cleaned_results,
):
"""Test run with callback manager."""
mock_run_manager = Mock()
mock_api_wrapper.raw_results.return_value = sample_raw_results
mock_api_wrapper.clean_results_with_images.return_value = sample_cleaned_results
result, raw = search_tool._run("test query", run_manager=mock_run_manager)
assert result == sample_cleaned_results
assert raw == sample_raw_results
@pytest.mark.asyncio
@patch("builtins.print")
async def test_arun_with_run_manager(
self,
mock_print,
search_tool,
mock_api_wrapper,
sample_raw_results,
sample_cleaned_results,
):
"""Test async run with callback manager."""
mock_run_manager = Mock()
mock_api_wrapper.raw_results_async = AsyncMock(return_value=sample_raw_results)
mock_api_wrapper.clean_results_with_images.return_value = sample_cleaned_results
result, raw = await search_tool._arun(
"test query", run_manager=mock_run_manager
)
assert result == sample_cleaned_results
assert raw == sample_raw_results
@patch("builtins.print")
def test_print_output_format(
self,
mock_print,
search_tool,
mock_api_wrapper,
sample_raw_results,
sample_cleaned_results,
):
"""Test that print outputs correctly formatted JSON."""
mock_api_wrapper.raw_results.return_value = sample_raw_results
mock_api_wrapper.clean_results_with_images.return_value = sample_cleaned_results
search_tool._run("test query")
# Verify print was called with expected format
call_args = mock_print.call_args[0]
assert call_args[0] == "sync"
assert isinstance(call_args[1], str) # Should be JSON string
# Verify it's valid JSON
json_data = json.loads(call_args[1])
assert json_data == sample_cleaned_results
@pytest.mark.asyncio
@patch("builtins.print")
async def test_async_print_output_format(
self,
mock_print,
search_tool,
mock_api_wrapper,
sample_raw_results,
sample_cleaned_results,
):
"""Test that async print outputs correctly formatted JSON."""
mock_api_wrapper.raw_results_async = AsyncMock(return_value=sample_raw_results)
mock_api_wrapper.clean_results_with_images.return_value = sample_cleaned_results
await search_tool._arun("test query")
# Verify print was called with expected format
call_args = mock_print.call_args[0]
assert call_args[0] == "async"
assert isinstance(call_args[1], str) # Should be JSON string
# Verify it's valid JSON
json_data = json.loads(call_args[1])
assert json_data == sample_cleaned_results
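The `(cleaned, raw)` return contract that both `_run` and `_arun` tests assert — a tuple on success, a stringified error plus an empty dict on failure, and a debug `print` of the cleaned JSON — can be sketched as below. `fetch_raw` and `clean` are hypothetical stand-ins for the API wrapper's methods.

```python
import json


def run_search(query, fetch_raw, clean):
    """Sketch of the (cleaned_results, raw_results) contract tested above."""
    try:
        raw = fetch_raw(query)
    except Exception as exc:
        # failure path: the error string and an empty raw payload
        return repr(exc), {}
    cleaned = clean(raw)
    print("sync", json.dumps(cleaned, ensure_ascii=False))  # what the print-format tests inspect
    return cleaned, raw
```

Because the tool prints, the tests patch `builtins.print` and decode its second argument to confirm it is valid JSON.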
@@ -0,0 +1,122 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
from unittest.mock import Mock, patch
from langchain_core.callbacks import (
CallbackManagerForToolRun,
AsyncCallbackManagerForToolRun,
)
import pytest
from src.tools.retriever import RetrieverInput, RetrieverTool, get_retriever_tool
from src.rag import Document, Retriever, Resource, Chunk
def test_retriever_input_model():
input_data = RetrieverInput(keywords="test keywords")
assert input_data.keywords == "test keywords"
def test_retriever_tool_init():
mock_retriever = Mock(spec=Retriever)
resources = [Resource(uri="test://uri", title="Test")]
tool = RetrieverTool(retriever=mock_retriever, resources=resources)
assert tool.name == "local_search_tool"
assert "retrieving information" in tool.description
assert tool.args_schema == RetrieverInput
assert tool.retriever == mock_retriever
assert tool.resources == resources
def test_retriever_tool_run_with_results():
mock_retriever = Mock(spec=Retriever)
chunk = Chunk(content="test content", similarity=0.9)
doc = Document(id="doc1", chunks=[chunk])
mock_retriever.query_relevant_documents.return_value = [doc]
resources = [Resource(uri="test://uri", title="Test")]
tool = RetrieverTool(retriever=mock_retriever, resources=resources)
result = tool._run("test keywords")
mock_retriever.query_relevant_documents.assert_called_once_with(
"test keywords", resources
)
assert isinstance(result, list)
assert len(result) == 1
assert result[0] == doc.to_dict()
def test_retriever_tool_run_no_results():
mock_retriever = Mock(spec=Retriever)
mock_retriever.query_relevant_documents.return_value = []
resources = [Resource(uri="test://uri", title="Test")]
tool = RetrieverTool(retriever=mock_retriever, resources=resources)
result = tool._run("test keywords")
assert result == "No results found from the local knowledge base."
@pytest.mark.asyncio
async def test_retriever_tool_arun():
mock_retriever = Mock(spec=Retriever)
chunk = Chunk(content="async content", similarity=0.8)
doc = Document(id="doc2", chunks=[chunk])
mock_retriever.query_relevant_documents.return_value = [doc]
resources = [Resource(uri="test://uri", title="Test")]
tool = RetrieverTool(retriever=mock_retriever, resources=resources)
mock_run_manager = Mock(spec=AsyncCallbackManagerForToolRun)
mock_sync_manager = Mock(spec=CallbackManagerForToolRun)
mock_run_manager.get_sync.return_value = mock_sync_manager
result = await tool._arun("async keywords", mock_run_manager)
mock_run_manager.get_sync.assert_called_once()
assert isinstance(result, list)
assert len(result) == 1
assert result[0] == doc.to_dict()
@patch("src.tools.retriever.build_retriever")
def test_get_retriever_tool_success(mock_build_retriever):
mock_retriever = Mock(spec=Retriever)
mock_build_retriever.return_value = mock_retriever
resources = [Resource(uri="test://uri", title="Test")]
tool = get_retriever_tool(resources)
assert isinstance(tool, RetrieverTool)
assert tool.retriever == mock_retriever
assert tool.resources == resources
def test_get_retriever_tool_empty_resources():
result = get_retriever_tool([])
assert result is None
@patch("src.tools.retriever.build_retriever")
def test_get_retriever_tool_no_retriever(mock_build_retriever):
mock_build_retriever.return_value = None
resources = [Resource(uri="test://uri", title="Test")]
result = get_retriever_tool(resources)
assert result is None
def test_retriever_tool_run_with_callback_manager():
mock_retriever = Mock(spec=Retriever)
mock_retriever.query_relevant_documents.return_value = []
resources = [Resource(uri="test://uri", title="Test")]
tool = RetrieverTool(retriever=mock_retriever, resources=resources)
mock_callback_manager = Mock(spec=CallbackManagerForToolRun)
result = tool._run("test keywords", mock_callback_manager)
assert result == "No results found from the local knowledge base."
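The guard logic the `get_retriever_tool` tests exercise — `None` for empty resources, `None` when no retriever can be built — reduces to a few lines. This is a sketch under stated assumptions: `build_retriever` is injected here for illustration, whereas the real function imports it from `src.tools.retriever`.

```python
def get_retriever_tool_sketch(resources, build_retriever):
    """Sketch of the early-return guards asserted above."""
    if not resources:
        return None  # nothing to search over
    retriever = build_retriever()
    if retriever is None:
        return None  # no retriever backend configured
    return {"retriever": retriever, "resources": resources}
```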
@@ -0,0 +1,108 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import json
from src.utils.json_utils import repair_json_output
class TestRepairJsonOutput:
def test_valid_json_object(self):
"""Test with valid JSON object"""
content = '{"key": "value", "number": 123}'
result = repair_json_output(content)
expected = json.dumps({"key": "value", "number": 123}, ensure_ascii=False)
assert result == expected
def test_valid_json_array(self):
"""Test with valid JSON array"""
content = '[1, 2, 3, "test"]'
result = repair_json_output(content)
expected = json.dumps([1, 2, 3, "test"], ensure_ascii=False)
assert result == expected
def test_json_with_code_block_json(self):
"""Test JSON wrapped in ```json code block"""
content = '```json\n{"key": "value"}\n```'
result = repair_json_output(content)
expected = json.dumps({"key": "value"}, ensure_ascii=False)
assert result == expected
def test_json_with_code_block_ts(self):
"""Test JSON wrapped in ```ts code block"""
content = '```ts\n{"key": "value"}\n```'
result = repair_json_output(content)
expected = json.dumps({"key": "value"}, ensure_ascii=False)
assert result == expected
def test_malformed_json_repair(self):
"""Test with malformed JSON that can be repaired"""
content = '{"key": "value", "incomplete":'
result = repair_json_output(content)
# Should return repaired JSON
assert result.startswith('{"key": "value"')
def test_non_json_content(self):
"""Test with non-JSON content"""
content = "This is just plain text"
result = repair_json_output(content)
assert result == content
def test_empty_string(self):
"""Test with empty string"""
content = ""
result = repair_json_output(content)
assert result == ""
def test_whitespace_only(self):
"""Test with whitespace only"""
content = " \n\t "
result = repair_json_output(content)
assert result == ""
def test_json_with_unicode(self):
"""Test JSON with unicode characters"""
content = '{"name": "测试", "emoji": "🎯"}'
result = repair_json_output(content)
expected = json.dumps({"name": "测试", "emoji": "🎯"}, ensure_ascii=False)
assert result == expected
def test_json_code_block_without_closing(self):
"""Test JSON code block without closing```"""
content = '```json\n{"key": "value"}'
result = repair_json_output(content)
expected = json.dumps({"key": "value"}, ensure_ascii=False)
assert result == expected
def test_json_repair_broken_json(self):
"""Test exception handling when JSON repair fails"""
content = '{"this": "is", "completely": broken and unparseable'
expect = '{"this": "is", "completely": "broken and unparseable"}'
result = repair_json_output(content)
assert result == expect
def test_nested_json_object(self):
"""Test with nested JSON object"""
content = '{"outer": {"inner": {"deep": "value"}}}'
result = repair_json_output(content)
expected = json.dumps(
{"outer": {"inner": {"deep": "value"}}}, ensure_ascii=False
)
assert result == expected
def test_json_array_with_objects(self):
"""Test JSON array containing objects"""
content = '[{"id": 1, "name": "test1"}, {"id": 2, "name": "test2"}]'
result = repair_json_output(content)
expected = json.dumps(
[{"id": 1, "name": "test1"}, {"id": 2, "name": "test2"}], ensure_ascii=False
)
assert result == expected
def test_content_with_json_in_middle(self):
"""Test content that contains ```json in the middle"""
content = 'Some text before ```json {"key": "value"} and after'
result = repair_json_output(content)
# Should attempt to process as JSON since it contains ```json
assert isinstance(result, str)
assert result == '{"key": "value"}'
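The fence-stripping and re-serialization path these tests cover can be sketched as follows. This is only the happy path: the real `repair_json_output` also repairs malformed JSON (the broken-JSON test implies something like `json_repair`), which is merely hinted at here.

```python
import json


def repair_json_output_sketch(content: str) -> str:
    """Sketch: strip ```json / ```ts fences, then normalize via round-trip dump."""
    content = content.strip()
    for fence in ("```json", "```ts", "```"):
        if content.startswith(fence):
            content = content[len(fence):]
            break
    if content.endswith("```"):  # closing fence is optional in the tests
        content = content[:-3]
    content = content.strip()
    if content.startswith(("{", "[")):
        try:
            return json.dumps(json.loads(content), ensure_ascii=False)
        except json.JSONDecodeError:
            return content  # the real implementation attempts repair here
    return content  # plain text passes through unchanged
```

`ensure_ascii=False` is what keeps the unicode test (`"测试"`, `"🎯"`) byte-for-byte intact instead of `\uXXXX`-escaped.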
File diff suppressed because it is too large
@@ -0,0 +1,130 @@
# Deep Thinking Block: Implementation Summary
## 🎯 Implemented Features
### Core Features
1. **Smart display logic**: the deep-thinking process starts expanded and auto-collapses once plan content begins
2. **Staged display**: only the thought block is shown while thinking; the plan card appears only after thinking ends
3. **Dynamic theming**: the blue (primary) theme is used while thinking, switching back to the default theme on completion
4. **Streaming support**: reasoning content is rendered in real time as it streams
5. **Polished interaction**: smooth animations and state transitions
### Interaction Flow
```
User sends a question (deep thinking enabled)
Start receiving reasoning_content
Thought block auto-expands + primary theme + loading animation
Reasoning content streams in
Start receiving content (plan content)
Thought block auto-collapses + theme switches
Plan card appears gracefully (animated)
Plan content keeps streaming (title → thought → steps)
Done (user can manually expand the thought block)
```
## 🔧 Technical Implementation
### Data Structure Extensions
- Added `reasoningContent` and `reasoningContentChunks` fields to the `Message` interface
- Added a `reasoning_content` field to the `MessageChunkEvent` interface
- The message-merging logic supports streaming reasoning content
### Component Architecture
- `ThoughtBlock`: a collapsible thought-block component
- `PlanCard`: the updated plan card, with the thought block integrated
- Smart state management and conditional rendering
### State Management
```typescript
// Key state logic
const hasMainContent = message.content && message.content.trim() !== "";
const isThinking = reasoningContent && !hasMainContent;
const shouldShowPlan = hasMainContent; // show as soon as content arrives, preserving the streaming effect
```
### Auto-Collapse Logic
```typescript
React.useEffect(() => {
  if (hasMainContent && !hasAutoCollapsed) {
    setIsOpen(false);
    setHasAutoCollapsed(true);
  }
}, [hasMainContent, hasAutoCollapsed]);
```
## 🎨 Visual Design
### Unified Design Language
- **Typography**: uses `font-semibold`, consistent with CardTitle
- **Corner radius**: uses `rounded-xl`, matching the other card components
- **Spacing**: `px-6 py-4` padding and `mb-6` bottom margin
- **Icon size**: an 18px brain icon, proportioned to the text
### Thinking-Stage Styles
- Primary-themed border and background
- Primary-colored icon and text
- Standard border style
- Loading animation
### Completed-Stage Styles
- Default border and card background
- muted-foreground icon
- Text at 80% opacity
- Static icon
### Animations
- Expand/collapse animation
- Theme-switch transition
- Color-change animation
## 📁 File Changes
### Core Files
1. `web/src/core/messages/types.ts` - message type extensions
2. `web/src/core/api/types.ts` - API event type extensions
3. `web/src/core/messages/merge-message.ts` - message-merging logic
4. `web/src/core/store/store.ts` - state management updates
5. `web/src/app/chat/components/message-list-view.tsx` - main component implementation
### Tests and Docs
1. `web/public/mock/reasoning-example.txt` - test data
2. `web/docs/thought-block-feature.md` - feature documentation
3. `web/docs/testing-thought-block.md` - testing guide
4. `web/docs/interaction-flow-test.md` - interaction-flow tests
## 🧪 Testing
### Quick Test
```
Visit: http://localhost:3000?mock=reasoning-example
Send any message and watch the interaction flow
```
### Full Test
1. Enable deep-thinking mode
2. Configure a reasoning model
3. Send a complex question
4. Verify the full interaction flow
## 🔄 Compatibility
- ✅ Backward compatible: renders normally when there is no reasoning content
- ✅ Progressive enhancement: the feature only activates when reasoning content is present
- ✅ Graceful degradation: no thought block is shown when reasoning content is empty
## 🚀 Usage Tips
1. **Enable deep thinking**: click the "Deep Thinking" button
2. **Watch the flow**: note the thought block auto-expanding and auto-collapsing
3. **Manual control**: click the thought block's header at any time to expand or collapse it
4. **Inspect reasoning**: expand the thought block to review the full reasoning process
This implementation fully meets the requirements, presenting the deep-thinking process in an intuitive, fluid way.
@@ -0,0 +1,112 @@
# Thought Block Interaction Flow Tests
## Test Scenarios
### Scenario 1: Full Deep-Thinking Flow
**Steps**:
1. Enable deep-thinking mode
2. Send the question: "What is vibe coding?"
3. Observe the interaction flow
**Expected behavior**:
#### Stage 1: Deep thinking begins
- ✅ The thought block appears immediately, expanded
- ✅ The blue (primary) theme is applied (border, background, icon, text)
- ✅ A loading animation is shown
- ✅ The plan card is not shown
- ✅ Reasoning content streams in real time
#### Stage 2: While thinking
- ✅ The thought block stays expanded
- ✅ The blue theme persists
- ✅ Reasoning content keeps growing
- ✅ The loading animation keeps running
- ✅ The plan card is still not shown
#### Stage 3: Plan content starts arriving
- ✅ The thought block auto-collapses
- ✅ The theme switches from primary to default
- ✅ The loading animation disappears
- ✅ The plan card appears with a graceful animation (opacity: 0→1, y: 20→0)
- ✅ Plan content keeps its streaming-update effect
#### Stage 4: Plan streaming
- ✅ The title appears progressively
- ✅ The thought content updates as it streams
- ✅ Steps appear one by one
- ✅ Each step's title and description stream-render separately
#### Stage 5: Plan complete
- ✅ The thought block stays collapsed
- ✅ The plan card is fully rendered
- ✅ The user can manually expand the thought block to review the reasoning
### Scenario 2: Manual Interaction
**Steps**:
1. After thinking completes, click the thought block
2. Verify expand/collapse behavior
**Expected behavior**:
- ✅ Clicking expands/collapses normally
- ✅ Animations are smooth
- ✅ Content renders completely
- ✅ The plan card is unaffected
### Scenario 3: Edge Cases
#### 3.1 Reasoning content only, no plan content
**Expected**: the thought block stays expanded; no plan card is shown
#### 3.2 Plan content only, no reasoning content
**Expected**: no thought block; the plan card is shown directly
#### 3.3 Empty reasoning content
**Expected**: no thought block; the plan card is shown directly
## Verification Checklist
### Visual
- [ ] The primary theme renders correctly during the thinking stage
- [ ] The theme-switch animation is smooth
- [ ] Font weight matches CardTitle (`font-semibold`)
- [ ] Corner radius matches the other cards (`rounded-xl`)
- [ ] Icon size and color change correctly (18px, primary/muted-foreground)
- [ ] Padding matches the design system (`px-6 py-4`)
- [ ] Overall visual hierarchy fits the page
### Interaction Logic
- [ ] Auto expand/collapse triggers at the right moments
- [ ] Manual expand/collapse works
- [ ] The plan card appears at the right moment
- [ ] The loading animation shows at the right moment
### Content Rendering
- [ ] Reasoning content streams correctly
- [ ] Markdown renders correctly
- [ ] Chinese content displays correctly
- [ ] No content is lost or duplicated
### Performance
- [ ] Animations are smooth, with no jank
- [ ] Memory usage is normal
- [ ] Component re-render counts are reasonable
## Troubleshooting
### The thought block does not auto-collapse
1. Check the `hasMainContent` logic
2. Verify the `useEffect` dependencies
3. Confirm the `hasAutoCollapsed` state management
### The plan card appears at the wrong time
1. Check the `shouldShowPlan` computation
2. Verify the `isThinking` state check
3. Confirm the message content is parsed correctly
### Theme switching misbehaves
1. Check the `isStreaming` state
2. Verify the applied CSS class names
3. Confirm the conditional-rendering logic
@@ -0,0 +1,125 @@
# Streaming Output Optimizations
## 🎯 Goal
Ensure the plan block keeps its streaming-output effect after deep thinking ends, for a smoother, more fluid user experience.
## 🔧 Technical Changes
### State Logic
**Before**:
```typescript
const isThinking = reasoningContent && (!hasMainContent || message.isStreaming);
const shouldShowPlan = hasMainContent && !isThinking;
```
**After**:
```typescript
const isThinking = reasoningContent && !hasMainContent;
const shouldShowPlan = hasMainContent; // simplified: show whenever content exists
```
### Key Improvements
1. **Simplified display logic**: the plan shows as soon as there is main content, no longer gated on the thinking state
2. **Preserved streaming state**: the plan component's `animated` prop uses `message.isStreaming` directly
3. **Graceful entrance**: a motion.div wrapper provides a smooth appearance animation
## 🎨 UX Improvements
### Streaming Output
#### Thinking stage
- ✅ Reasoning content streams in real time
- ✅ The thought block stays expanded
- ✅ Primary-theme highlighting
#### Plan stage
- ✅ The plan card appears gracefully (300ms animation)
- ✅ The title stream-renders
- ✅ The thought content streams
- ✅ Steps appear one by one
- ✅ Each step's title and description stream-render separately
### Animations
#### Plan card entrance
```typescript
<motion.div
  initial={{ opacity: 0, y: 20 }}
  animate={{ opacity: 1, y: 0 }}
  transition={{ duration: 0.3, ease: "easeOut" }}
>
```
#### Streaming text
- All Markdown components use `animated={message.isStreaming}`
- Guarantees character-by-character or word-by-word rendering
## 📊 Performance
### Rendering
- **Fewer re-renders**: simplified state logic avoids unnecessary component remounts
- **Stable component instances**: once the plan component appears it persists, avoiding re-creation
- **Direct streaming state**: reuses the message's streaming flag instead of deriving extra state
### Memory
- **Component reuse**: avoids frequent destroy/rebuild cycles
- **State management**: fewer state dependencies, lower memory footprint
## 🧪 Verification
### Streaming
1. **Thinking stage**: reasoning content should appear progressively
2. **Transition**: the plan card should appear smoothly
3. **Plan stage**: all plan content should keep streaming
### Animation
1. **Entrance**: the plan card should slide up and fade in
2. **Text**: all text content should have a typewriter effect
3. **State switch**: the thought block should collapse smoothly and naturally
### Performance
1. **Render counts**: check component re-render frequency
2. **Memory**: monitor memory usage
3. **Smoothness**: animations should hold 60fps
## 📝 Example
### Full Interaction Flow
```
1. User sends a question (deep thinking enabled)
2. The thought block expands; reasoning content streams
3. Plan content starts arriving
4. The thought block auto-collapses
5. The plan card appears gracefully (animated)
6. Plan content stream-renders:
   - the title appears progressively
   - the thought content streams
   - steps appear one by one
7. Done; the user can review the full content
```
## 🔄 Compatibility
- ✅ **Backward compatible**: does not affect the existing non-deep-thinking mode
- ✅ **Progressive enhancement**: only activates when reasoning content is present
- ✅ **Graceful degradation**: renders normally in unsupported environments
## 🚀 Summary
This round of optimization noticeably improves the experience:
1. **Smoother transition**: the switch from thinking to plan feels more natural
2. **Preserved streaming**: plan content keeps its original streaming behavior
3. **Visual continuity**: the whole flow looks more coherent and unified
4. **Better performance**: fewer unnecessary re-renders
Users now get an end-to-end streaming experience, consistent from deep thinking through plan display.
@@ -0,0 +1,78 @@
# Testing the Thought Block Feature
## Quick Tests
### Method 1: Mock data
1. Open the app in a browser with the `?mock=reasoning-example` query parameter
2. Send any message
3. Check whether a thought block appears above the plan card
### Method 2: Enable deep-thinking mode
1. Make sure a reasoning model is configured (e.g. DeepSeek R1)
2. Click the "Deep Thinking" button in the chat UI
3. Send a question that requires planning
4. Check whether the thought block appears
## Expected Behavior
### Appearance
- Auto-expands when deep thinking starts
- Uses the primary theme during thinking (border, background, text, icon)
- Shows an 18px brain icon and a "Deep Thinking Process" title
- `font-semibold` weight, consistent with CardTitle
- `rounded-xl` corners, matching the other card components
- Standard `px-6 py-4` padding
### Interaction
- Thinking stage: auto-expanded, blue highlight, loading animation
- Plan stage: auto-collapsed, switched to the default theme
- The user can manually expand/collapse at any time
- Smooth expand/collapse animation and theme switch
### Staged Display
- Thinking stage: only the thought block is shown, no plan card
- Plan stage: the thought block collapses and the full plan card is shown
### Content Rendering
- Markdown is supported
- Chinese content displays correctly
- Original line breaks and formatting are preserved
## Troubleshooting
### The thought block does not appear
1. Check whether the message contains a `reasoningContent` field
2. Confirm the `reasoning_content` event is handled correctly
3. Verify the message-merging logic works
### Content renders incorrectly
1. Check Markdown rendering
2. Confirm the CSS styles are loaded
3. Verify animations are enabled
### Streaming issues
1. Check the WebSocket connection state
2. Confirm the event-stream format is correct
3. Verify the message-update logic
## Debugging
### Console
```javascript
// Inspect the message object
const messages = useStore.getState().messages;
const lastMessage = Array.from(messages.values()).pop();
console.log('Reasoning content:', lastMessage?.reasoningContent);
```
### Network Panel
- Watch the SSE event stream
- Confirm the `reasoning_content` field is present
- Check that the event format is correct
### React DevTools
- Inspect the ThoughtBlock component state
- Verify props are passed correctly
- Watch for component re-renders
@@ -0,0 +1,155 @@
# Thought Block Design System Spec
## 🎯 Design Goals
Keep the thought block component fully consistent with the app's design language so that the user experience stays unified.
## 📐 Design Specs
### Typography
```css
/* Title font, consistent with CardTitle */
font-weight: 600; /* font-semibold */
line-height: 1; /* leading-none */
```
### Sizing
```css
/* Icon size, proportioned to the text */
width: 18px;
height: 18px;
/* Padding */
padding: 1rem 1.5rem; /* px-6 py-4 */
/* Bottom margin */
margin-bottom: 1.5rem; /* mb-6 */
/* Corner radius */
border-radius: 0.75rem; /* rounded-xl */
```
### Colors
#### Thinking phase (active state)
```css
/* Border and background */
border-color: hsl(var(--primary) / 0.2);
background-color: hsl(var(--primary) / 0.05);
/* Icon and text */
color: hsl(var(--primary));
/* Shadow */
box-shadow: 0 1px 2px 0 rgb(0 0 0 / 0.05);
```
#### Completed phase (static state)
```css
/* Border and background */
border-color: hsl(var(--border));
background-color: hsl(var(--card));
/* Icon */
color: hsl(var(--muted-foreground));
/* Text */
color: hsl(var(--foreground));
```
#### Content area
```css
/* Thinking phase */
.prose-primary {
  color: hsl(var(--primary));
}
/* Completed phase */
.opacity-80 {
  opacity: 0.8;
}
```
### Interaction states
```css
/* Hover state */
.hover\:bg-accent:hover {
  background-color: hsl(var(--accent));
}
.hover\:text-accent-foreground:hover {
  color: hsl(var(--accent-foreground));
}
```
## 🔄 State Changes
### State mapping
| State | Border | Background | Icon color | Text color | Shadow |
|------|------|------|----------|----------|------|
| Thinking | primary/20 | primary/5 | primary | primary | yes |
| Completed | border | card | muted-foreground | foreground | no |
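The state table above can be expressed as a lookup of Tailwind class fragments. This is a sketch only; the real component may compose its classes differently, and the object name is hypothetical:

```typescript
// Hypothetical class map mirroring the state-mapping table above.
const thoughtBlockClasses = {
  thinking: {
    border: "border-primary/20",
    background: "bg-primary/5",
    icon: "text-primary",
    text: "text-primary",
    shadow: "shadow-sm", // subtle shadow during the active phase
  },
  completed: {
    border: "border-border",
    background: "bg-card",
    icon: "text-muted-foreground",
    text: "text-foreground",
    shadow: "", // no shadow once completed
  },
} as const;
```

Keeping the styles in a single `as const` map makes the two visual states easy to diff against the design table and lets the component switch themes with one key lookup.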
### Animated transition
```css
transition: all 200ms ease-in-out;
```
## 📱 Responsive Design
### Spacing
- Mobile: keep the same padding ratios
- Desktop: standard `px-6 py-4` padding
### Typography
- All devices: keep the `font-semibold` weight
- Icon size: fixed at 18px for crispness
## 🎨 Comparison with Existing Components
### vs. CardTitle
| Property | CardTitle | ThoughtBlock |
|------|-----------|--------------|
| Font weight | font-semibold | font-semibold ✅ |
| Line height | leading-none | leading-none ✅ |
| Color | foreground | primary/foreground |
### vs. Card
| Property | Card | ThoughtBlock |
|------|------|--------------|
| Corner radius | rounded-lg | rounded-xl |
| Border | border | border ✅ |
| Background | card | card/primary ✅ |
### vs. Button
| Property | Button | ThoughtBlock Trigger |
|------|--------|---------------------|
| Padding | standard | px-6 py-4 ✅ |
| Hover | hover:bg-accent | hover:bg-accent ✅ |
| Corner radius | rounded-md | rounded-xl |
## ✅ Design Checklist
### Visual consistency
- [ ] Font weight matches CardTitle
- [ ] Corner radius matches the card components
- [ ] Colors come from the CSS variable system
- [ ] Spacing follows the design spec
### Interaction consistency
- [ ] Hover state matches the Button component
- [ ] Transition duration is unified (200ms)
- [ ] State changes are smooth and natural
### Accessibility
- [ ] Color contrast meets WCAG standards
- [ ] Icon size is large enough for click/touch targets
- [ ] State changes give clear visual feedback
## 🔧 Implementation Notes
1. **Use design-system variables**: all colors come from CSS variables so theme switching keeps working
2. **Stay consistent with existing components**: match the styles of the existing Card and Button components
3. **Responsive-friendly**: displays well on all devices
4. **Performance**: use CSS transitions rather than JavaScript animation
This design system keeps the thought block component fully aligned with the app's visual language, delivering a consistent user experience.
@@ -0,0 +1,108 @@
# Thought Block Feature
## Overview
The thought block feature displays the AI's deep thinking process before the plan card, presenting the reasoning content in a collapsible panel. It is designed for scenarios where deep thinking mode is enabled.
## Features
- **Smart display logic**: the deep thinking process starts expanded and collapses automatically once plan content starts arriving
- **Phased display**: only the thought block is shown during the thinking phase; the plan card appears after thinking finishes
- **Streaming support**: reasoning content streams in real time
- **Visual state feedback**: the thinking phase is highlighted with the primary (blue) theme
- **Graceful animation**: smooth expand/collapse animations
- **Responsive design**: adapts to different screen sizes
## Technical Implementation
### Data structure updates
1. **Message type extension**:
```typescript
export interface Message {
  // ... other fields
  reasoningContent?: string;
  reasoningContentChunks?: string[];
}
```
2. **API event type extension**:
```typescript
export interface MessageChunkEvent {
  // ... other fields
  reasoning_content?: string;
}
```
### Component structure
- **ThoughtBlock**: the main thought block component
  - Built on Radix UI's Collapsible component
  - Supports streaming content display
  - Includes a loading animation and status indicator
- **PlanCard**: the updated plan card
  - Renders the thought block before the plan content
  - Automatically detects whether reasoning content is present
### Message handling
The message-merging logic has been updated to handle streaming of the `reasoning_content` field:
```typescript
function mergeTextMessage(message: Message, event: MessageChunkEvent) {
  // Handle regular content
  if (event.data.content) {
    message.content += event.data.content;
    message.contentChunks.push(event.data.content);
  }
  // Handle reasoning content
  if (event.data.reasoning_content) {
    message.reasoningContent = (message.reasoningContent || "") + event.data.reasoning_content;
    message.reasoningContentChunks = message.reasoningContentChunks || [];
    message.reasoningContentChunks.push(event.data.reasoning_content);
  }
}
```
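A minimal usage sketch of this merge logic, with the `Message` and `MessageChunkEvent` shapes simplified to just the fields involved (the real types carry more fields):

```typescript
// Simplified shapes for illustration only.
interface Msg {
  content: string;
  contentChunks: string[];
  reasoningContent?: string;
  reasoningContentChunks?: string[];
}
interface ChunkEvent {
  data: { content?: string; reasoning_content?: string };
}

// Same accumulation logic as the mergeTextMessage shown above.
function mergeTextMessage(message: Msg, event: ChunkEvent) {
  if (event.data.content) {
    message.content += event.data.content;
    message.contentChunks.push(event.data.content);
  }
  if (event.data.reasoning_content) {
    message.reasoningContent =
      (message.reasoningContent ?? "") + event.data.reasoning_content;
    message.reasoningContentChunks = message.reasoningContentChunks ?? [];
    message.reasoningContentChunks.push(event.data.reasoning_content);
  }
}

// Reasoning chunks arrive first, then the plan JSON starts streaming.
const msg: Msg = { content: "", contentChunks: [] };
mergeTextMessage(msg, { data: { reasoning_content: "Let me think... " } });
mergeTextMessage(msg, { data: { reasoning_content: "first, definitions." } });
mergeTextMessage(msg, { data: { content: '{ "title":' } });
```

Note that the two streams accumulate independently: `reasoningContent` grows from `reasoning_content` chunks while `content` grows from `content` chunks, which is what lets the UI switch from the thought block to the plan card the moment the first `content` chunk arrives.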
## Usage
### Enabling deep thinking mode
1. Click the "Deep Thinking" button in the chat interface
2. Make sure a reasoning-capable model is configured
3. After sending a message, if reasoning content is present, a thought block appears above the plan card
### Viewing the reasoning process
1. The thought block expands automatically when deep thinking starts
2. The thinking phase uses the primary theme color to highlight the reasoning in progress
3. Reasoning content renders as Markdown and updates as a live stream
4. A loading animation is shown while streaming
5. The thought block collapses automatically once plan content starts arriving
6. The plan card appears with a graceful animation
7. The plan content keeps streaming, progressively revealing the title, thought, and steps
8. The user can click the thought block's title bar to expand/collapse it at any time
## Styling
- **Unified design language**: consistent with the page's overall style
- **Typographic hierarchy**: uses the same `font-semibold` weight as CardTitle
- **Corner radius**: `rounded-xl`, consistent with the other card components
- **Spacing**: standard `px-6 py-4` padding
- **Dynamic theme**: the thinking phase uses the primary color system
- **Icon size**: 18px, proportioned to the text
- **State feedback**: a loading animation and theme-color highlight while streaming
- **Interaction feedback**: standard hover and focus states
- **Smooth transitions**: every state change is animated
## Test Data
The `/mock/reasoning-example.txt` file can be used to test the thought block feature; it contains mock reasoning content and plan data.
## Compatibility
- Backward compatible: messages without reasoning content do not show a thought block
- Progressive enhancement: the feature activates only when reasoning content is present
- Graceful degradation: the component does not render when the reasoning content is empty
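The degradation rule reduces to a single render guard. The helper name below is hypothetical; the actual component may inline this check:

```typescript
// Simplified message shape for illustration.
interface MessageLike {
  reasoningContent?: string;
}

// Render the thought block only when there is non-empty reasoning content.
// Whitespace-only content counts as empty, so the component degrades cleanly.
function shouldRenderThoughtBlock(message: MessageLike): boolean {
  return Boolean(message.reasoningContent?.trim());
}
```

A guard like this is what makes the feature both backward compatible and progressively enhanced: messages without a `reasoningContent` field simply never trigger the component.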
@@ -0,0 +1,229 @@
{
"common": {
"cancel": "Cancel",
"save": "Save",
"settings": "Settings",
"getStarted": "Get Started",
"learnMore": "Learn More",
"starOnGitHub": "Star on GitHub",
"send": "Send",
"stop": "Stop",
"linkNotReliable": "This link might be a hallucination from AI model and may not be reliable.",
"noResult": "No result"
},
"messageInput": {
"placeholder": "What can I do for you?",
"placeholderWithRag": "What can I do for you? \nYou may refer to RAG resources by using @."
},
"header": {
"title": "DeerFlow"
},
"hero": {
"title": "Deep Research",
"subtitle": "at Your Fingertips",
"description": "Meet DeerFlow, your personal Deep Research assistant. With powerful tools like search engines, web crawlers, Python and MCP services, it delivers instant insights, comprehensive reports, and even captivating podcasts.",
"footnote": "* DEER stands for Deep Exploration and Efficient Research."
},
"settings": {
"title": "DeerFlow Settings",
"description": "Manage your DeerFlow settings here.",
"addServers": "Add Servers",
"cancel": "Cancel",
"addNewMCPServers": "Add New MCP Servers",
"mcpConfigDescription": "DeerFlow uses the standard JSON MCP config to create a new server.",
"pasteConfigBelow": "Paste your config below and click \"Add\" to add new servers.",
"add": "Add",
"general": {
"title": "General",
"autoAcceptPlan": "Allow automatic acceptance of plans",
"maxPlanIterations": "Max plan iterations",
"maxPlanIterationsDescription": "Set to 1 for single-step planning. Set to 2 or more to enable re-planning.",
"maxStepsOfPlan": "Max steps of a research plan",
"maxStepsDescription": "By default, each research plan has 3 steps.",
"maxSearchResults": "Max search results",
"maxSearchResultsDescription": "By default, each search step has 3 results."
},
"mcp": {
"title": "MCP Servers",
"description": "The Model Context Protocol boosts DeerFlow by integrating external tools for tasks like private domain searches, web browsing, food ordering, and more. Click here to",
"learnMore": "learn more about MCP.",
"enableDisable": "Enable/disable server",
"deleteServer": "Delete server",
"disabled": "Disabled",
"new": "New"
},
"about": {
"title": "About"
},
"reportStyle": {
"writingStyle": "Writing Style",
"chooseTitle": "Choose Writing Style",
"chooseDesc": "Select the writing style for your research reports. Different styles are optimized for different audiences and purposes.",
"academic": "Academic",
"academicDesc": "Formal, objective, and analytical with precise terminology",
"popularScience": "Popular Science",
"popularScienceDesc": "Engaging and accessible for general audience",
"news": "News",
"newsDesc": "Factual, concise, and impartial journalistic style",
"socialMedia": "Social Media",
"socialMediaDesc": "Concise, attention-grabbing, and shareable"
}
},
"footer": {
"quote": "Originated from Open Source, give back to Open Source.",
"license": "Licensed under MIT License",
"copyright": "DeerFlow"
},
"chat": {
"page": {
"loading": "Loading DeerFlow...",
"welcomeUser": "Welcome, {username}",
"starOnGitHub": "Star DeerFlow on GitHub"
},
"welcome": {
"greeting": "👋 Hello, there!",
"description": "Welcome to 🦌 DeerFlow, a deep research assistant built on cutting-edge language models that helps you search the web, browse information, and handle complex tasks."
},
"conversationStarters": [
"How many times taller is the Eiffel Tower than the tallest building in the world?",
"How many years does an average Tesla battery last compared to a gasoline engine?",
"How many liters of water are required to produce 1 kg of beef?",
"How many times faster is the speed of light compared to the speed of sound?"
],
"inputBox": {
"deepThinking": "Deep Thinking",
"deepThinkingTooltip": {
"title": "Deep Thinking Mode: {status}",
"description": "When enabled, DeerFlow will use a reasoning model ({model}) to generate more thoughtful plans."
},
"investigation": "Investigation",
"investigationTooltip": {
"title": "Investigation Mode: {status}",
"description": "When enabled, DeerFlow will perform a quick search before planning. This is useful for research related to ongoing events and news."
},
"enhancePrompt": "Enhance prompt with AI",
"on": "On",
"off": "Off"
},
"research": {
"deepResearch": "Deep Research",
"researching": "Researching...",
"generatingReport": "Generating report...",
"reportGenerated": "Report generated",
"open": "Open",
"close": "Close",
"deepThinking": "Deep Thinking",
"report": "Report",
"activities": "Activities",
"generatePodcast": "Generate podcast",
"edit": "Edit",
"copy": "Copy",
"downloadReport": "Download report as markdown",
"searchingFor": "Searching for",
"reading": "Reading",
"runningPythonCode": "Running Python code",
"errorExecutingCode": "Error when executing the above code",
"executionOutput": "Execution output",
"retrievingDocuments": "Retrieving documents from RAG",
"running": "Running",
"generatingPodcast": "Generating podcast...",
"nowPlayingPodcast": "Now playing podcast...",
"podcast": "Podcast",
"errorGeneratingPodcast": "Error when generating podcast. Please try again.",
"downloadPodcast": "Download podcast"
},
"messages": {
"replaying": "Replaying",
"replayDescription": "DeerFlow is now replaying the conversation...",
"replayHasStopped": "The replay has been stopped.",
"replayModeDescription": "You're now in DeerFlow's replay mode. Click the \"Play\" button on the right to start.",
"play": "Play",
"fastForward": "Fast Forward",
"demoNotice": "* This site is for demo purposes only. If you want to try your own question, please",
"clickHere": "click here",
"cloneLocally": "to clone it locally and run it."
},
"multiAgent": {
"moveToPrevious": "Move to the previous step",
"playPause": "Play / Pause",
"moveToNext": "Move to the next step",
"toggleFullscreen": "Toggle fullscreen"
}
},
"landing": {
"caseStudies": {
"title": "Case Studies",
"description": "See DeerFlow in action through replays.",
"clickToWatch": "Click to watch replay",
"cases": [
{
"title": "How tall is Eiffel Tower compared to tallest building?",
"description": "The research compares the heights and global significance of the Eiffel Tower and Burj Khalifa, and uses Python code to calculate the multiples."
},
{
"title": "What are the top trending repositories on GitHub?",
"description": "The research utilized MCP services to identify the most popular GitHub repositories and documented them in detail using search engines."
},
{
"title": "Write an article about Nanjing's traditional dishes",
"description": "The study vividly showcases Nanjing's famous dishes through rich content and imagery, uncovering their hidden histories and cultural significance."
},
{
"title": "How to decorate a small rental apartment?",
"description": "The study provides readers with practical and straightforward methods for decorating apartments, accompanied by inspiring images."
},
{
"title": "Introduce the movie 'Léon: The Professional'",
"description": "The research provides a comprehensive introduction to the movie 'Léon: The Professional', including its plot, characters, and themes."
},
{
"title": "How do you view the takeaway war in China? (in Chinese)",
"description": "The research analyzes the intensifying competition between JD and Meituan, highlighting their strategies, technological innovations, and challenges."
},
{
"title": "Are ultra-processed foods linked to health?",
"description": "The research examines the health risks of rising ultra-processed food consumption, urging more research on long-term effects and individual differences."
},
{
"title": "Write an article on \"Would you insure your AI twin?\"",
"description": "The research explores the concept of insuring AI twins, highlighting their benefits, risks, ethical considerations, and the evolving regulatory landscape."
}
]
},
"coreFeatures": {
"title": "Core Features",
"description": "Find out what makes DeerFlow effective.",
"features": [
{
"name": "Dive Deeper and Reach Wider",
"description": "Unlock deeper insights with advanced tools. Our powerful search + crawling and Python tools gather comprehensive data, delivering in-depth reports to enhance your study."
},
{
"name": "Human-in-the-loop",
"description": "Refine your research plan, or adjust focus areas all through simple natural language."
},
{
"name": "Lang Stack",
"description": "Build with confidence using the LangChain and LangGraph frameworks."
},
{
"name": "MCP Integrations",
"description": "Supercharge your research workflow and expand your toolkit with seamless MCP integrations."
},
{
"name": "Podcast Generation",
"description": "Instantly generate podcasts from reports. Perfect for on-the-go learning or sharing findings effortlessly."
}
]
},
"multiAgent": {
"title": "Multi-Agent Architecture",
"description": "Experience the agent teamwork with our Supervisor + Handoffs design pattern."
},
"joinCommunity": {
"title": "Join the DeerFlow Community",
"description": "Contribute brilliant ideas to shape the future of DeerFlow. Collaborate, innovate, and make impacts.",
"contributeNow": "Contribute Now"
}
}
}
@@ -0,0 +1,229 @@
{
"common": {
"cancel": "取消",
"save": "保存",
"settings": "设置",
"getStarted": "开始使用",
"learnMore": "了解更多",
"starOnGitHub": "在 GitHub 上点赞",
"send": "发送",
"stop": "停止",
"linkNotReliable": "此链接可能是 AI 生成的幻觉,可能并不可靠。",
"noResult": "无结果"
},
"messageInput": {
"placeholder": "我能帮你做什么?",
"placeholderWithRag": "我能帮你做什么?\n你可以通过 @ 引用 RAG 资源。"
},
"header": {
"title": "DeerFlow"
},
"hero": {
"title": "深度研究",
"subtitle": "触手可及",
"description": "认识 DeerFlow,您的个人深度研究助手。凭借搜索引擎、网络爬虫、Python 和 MCP 服务等强大工具,它能提供即时洞察、全面报告,甚至制作引人入胜的播客。",
"footnote": "* DEER 代表深度探索和高效研究。"
},
"settings": {
"title": "DeerFlow 设置",
"description": "在这里管理您的 DeerFlow 设置。",
"cancel": "取消",
"addServers": "添加服务器",
"addNewMCPServers": "添加新的 MCP 服务器",
"mcpConfigDescription": "DeerFlow 使用标准 JSON MCP 配置来创建新服务器。",
"pasteConfigBelow": "将您的配置粘贴到下面,然后点击\"添加\"来添加新服务器。",
"add": "添加",
"general": {
"title": "通用",
"autoAcceptPlan": "允许自动接受计划",
"maxPlanIterations": "最大计划迭代次数",
"maxPlanIterationsDescription": "设置为 1 进行单步规划。设置为 2 或更多以启用重新规划。",
"maxStepsOfPlan": "研究计划的最大步骤数",
"maxStepsDescription": "默认情况下,每个研究计划有 3 个步骤。",
"maxSearchResults": "最大搜索结果数",
"maxSearchResultsDescription": "默认情况下,每个搜索步骤有 3 个结果。"
},
"mcp": {
"title": "MCP 服务器",
"description": "模型上下文协议通过集成外部工具来增强 DeerFlow,用于私域搜索、网页浏览、订餐等任务。点击这里",
"learnMore": "了解更多关于 MCP 的信息。",
"enableDisable": "启用/禁用服务器",
"deleteServer": "删除服务器",
"disabled": "已禁用",
"new": "新增"
},
"about": {
"title": "关于"
},
"reportStyle": {
"writingStyle": "写作风格",
"chooseTitle": "选择写作风格",
"chooseDesc": "请选择您的研究报告的写作风格。不同风格适用于不同受众和用途。",
"academic": "学术",
"academicDesc": "正式、客观、分析性强,术语精确",
"popularScience": "科普",
"popularScienceDesc": "生动有趣,适合大众阅读",
"news": "新闻",
"newsDesc": "事实、简明、公正的新闻风格",
"socialMedia": "社交媒体",
"socialMediaDesc": "简洁有趣,易于传播"
}
},
"footer": {
"quote": "源于开源,回馈开源。",
"license": "基于 MIT 许可证授权",
"copyright": "DeerFlow"
},
"chat": {
"page": {
"loading": "正在加载 DeerFlow...",
"welcomeUser": "欢迎,{username}",
"starOnGitHub": "在 GitHub 上点赞"
},
"welcome": {
"greeting": "👋 你好!",
"description": "欢迎来到 🦌 DeerFlow,一个基于前沿语言模型构建的深度研究助手,帮助您搜索网络、浏览信息并处理复杂任务。"
},
"conversationStarters": [
"埃菲尔铁塔比世界最高建筑高多少倍?",
"特斯拉电池的平均寿命比汽油发动机长多少年?",
"生产1公斤牛肉需要多少升水?",
"光速比声速快多少倍?"
],
"inputBox": {
"deepThinking": "深度思考",
"deepThinkingTooltip": {
"title": "深度思考模式:{status}",
"description": "启用后,DeerFlow 将使用推理模型({model})生成更深思熟虑的计划。"
},
"investigation": "调研",
"investigationTooltip": {
"title": "调研模式:{status}",
"description": "启用后,DeerFlow 将在规划前进行快速搜索。这对于与时事和新闻相关的研究很有用。"
},
"enhancePrompt": "用 AI 增强提示",
"on": "开启",
"off": "关闭"
},
"research": {
"deepResearch": "深度研究",
"researching": "研究中...",
"generatingReport": "生成报告中...",
"reportGenerated": "报告已生成",
"open": "打开",
"close": "关闭",
"deepThinking": "深度思考",
"report": "报告",
"activities": "活动",
"generatePodcast": "生成播客",
"edit": "编辑",
"copy": "复制",
"downloadReport": "下载报告为 Markdown",
"searchingFor": "搜索",
"reading": "阅读中",
"runningPythonCode": "运行 Python 代码",
"errorExecutingCode": "执行上述代码时出错",
"executionOutput": "执行输出",
"retrievingDocuments": "从 RAG 检索文档",
"running": "运行",
"generatingPodcast": "生成播客中...",
"nowPlayingPodcast": "正在播放播客...",
"podcast": "播客",
"errorGeneratingPodcast": "生成播客时出错。请重试。",
"downloadPodcast": "下载播客"
},
"messages": {
"replaying": "回放中",
"replayDescription": "DeerFlow 正在回放对话...",
"replayHasStopped": "回放已停止。",
"replayModeDescription": "您现在处于 DeerFlow 的回放模式。点击右侧的\"播放\"按钮开始。",
"play": "播放",
"fastForward": "快进",
"demoNotice": "* 此网站仅用于演示目的。如果您想尝试自己的问题,请",
"clickHere": "点击这里",
"cloneLocally": "在本地克隆并运行它。"
},
"multiAgent": {
"moveToPrevious": "移动到上一步",
"playPause": "播放 / 暂停",
"moveToNext": "移动到下一步",
"toggleFullscreen": "切换全屏"
}
},
"landing": {
"caseStudies": {
"title": "案例研究",
"description": "通过回放查看 DeerFlow 的实际应用。",
"clickToWatch": "点击观看回放",
"cases": [
{
"title": "埃菲尔铁塔与最高建筑相比有多高?",
"description": "该研究比较了埃菲尔铁塔和哈利法塔的高度和全球意义,并使用 Python 代码计算倍数。"
},
{
"title": "GitHub 上最热门的仓库有哪些?",
"description": "该研究利用 MCP 服务识别最受欢迎的 GitHub 仓库,并使用搜索引擎详细记录它们。"
},
{
"title": "写一篇关于南京传统菜肴的文章",
"description": "该研究通过丰富的内容和图像生动地展示了南京的著名菜肴,揭示了它们隐藏的历史和文化意义。"
},
{
"title": "如何装饰小型出租公寓?",
"description": "该研究为读者提供了实用而直接的公寓装饰方法,并配有鼓舞人心的图像。"
},
{
"title": "介绍电影《这个杀手不太冷》",
"description": "该研究全面介绍了电影《这个杀手不太冷》,包括其情节、角色和主题。"
},
{
"title": "你如何看待中国的外卖大战?(中文)",
"description": "该研究分析了京东和美团之间日益激烈的竞争,突出了它们的策略、技术创新和挑战。"
},
{
"title": "超加工食品与健康有关吗?",
"description": "该研究检查了超加工食品消费增加的健康风险,敦促对长期影响和个体差异进行更多研究。"
},
{
"title": "写一篇关于\"你会为你的 AI 双胞胎投保吗?\"的文章",
"description": "该研究探讨了为 AI 双胞胎投保的概念,突出了它们的好处、风险、伦理考虑和不断发展的监管。"
}
]
},
"coreFeatures": {
"title": "核心功能",
"description": "了解是什么让 DeerFlow 如此有效。",
"features": [
{
"name": "深入挖掘,触及更广",
"description": "使用高级工具解锁更深层的洞察。我们强大的搜索+爬取和 Python 工具收集全面的数据,提供深入的报告来增强您的研究。"
},
{
"name": "人机协作",
"description": "通过简单的自然语言完善您的研究计划或调整重点领域。"
},
{
"name": "Lang 技术栈",
"description": "使用 LangChain 和 LangGraph 框架自信地构建。"
},
{
"name": "MCP 集成",
"description": "通过无缝的 MCP 集成增强您的研究工作流程并扩展您的工具包。"
},
{
"name": "播客生成",
"description": "从报告中即时生成播客。非常适合移动学习或轻松分享发现。"
}
]
},
"multiAgent": {
"title": "多智能体架构",
"description": "通过我们的监督者 + 交接设计模式体验智能体团队合作。"
},
"joinCommunity": {
"title": "加入 DeerFlow 社区",
"description": "贡献精彩想法,塑造 DeerFlow 的未来。协作、创新并产生影响。",
"contributeNow": "立即贡献"
}
}
}
@@ -6,6 +6,9 @@
// SPDX-License-Identifier: MIT
import "./src/env.js";
import createNextIntlPlugin from 'next-intl/plugin';
const withNextIntl = createNextIntlPlugin('./src/i18n.ts');
/** @type {import("next").NextConfig} */
@@ -39,4 +42,4 @@ const config = {
output: "standalone",
};
export default config;
export default withNextIntl(config);
@@ -46,6 +46,7 @@
"@tiptap/extension-table-row": "^2.11.7",
"@tiptap/extension-text": "^2.12.0",
"@tiptap/react": "^2.11.7",
"@types/js-cookie": "^3.0.6",
"@xyflow/react": "^12.6.0",
"best-effort-json-parser": "^1.1.3",
"class-variance-authority": "^0.7.1",
@@ -55,6 +56,7 @@
"hast": "^1.0.0",
"highlight.js": "^11.11.1",
"immer": "^10.1.1",
"js-cookie": "^3.0.5",
"katex": "^0.16.21",
"lowlight": "^3.3.0",
"lru-cache": "^11.1.0",
@@ -62,6 +64,7 @@
"motion": "^12.7.4",
"nanoid": "^5.1.5",
"next": "^15.2.3",
"next-intl": "^4.3.1",
"next-themes": "^0.4.6",
"novel": "^1.0.2",
"react": "^19.0.0",
@@ -95,6 +95,9 @@ importers:
'@tiptap/react':
specifier: ^2.11.7
version: 2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))(@tiptap/pm@2.11.7)(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
'@types/js-cookie':
specifier: ^3.0.6
version: 3.0.6
'@xyflow/react':
specifier: ^12.6.0
version: 12.6.0(@types/react@19.1.2)(immer@10.1.1)(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
@@ -122,6 +125,9 @@ importers:
immer:
specifier: ^10.1.1
version: 10.1.1
js-cookie:
specifier: ^3.0.5
version: 3.0.5
katex:
specifier: ^0.16.21
version: 0.16.21
@@ -143,6 +149,9 @@ importers:
next:
specifier: ^15.2.3
version: 15.3.0(@opentelemetry/api@1.9.0)(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
next-intl:
specifier: ^4.3.1
version: 4.3.1(next@15.3.0(@opentelemetry/api@1.9.0)(react-dom@19.1.0(react@19.1.0))(react@19.1.0))(react@19.1.0)(typescript@5.8.3)
next-themes:
specifier: ^0.4.6
version: 0.4.6(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
@@ -367,6 +376,24 @@ packages:
'@floating-ui/utils@0.2.9':
resolution: {integrity: sha512-MDWhGtE+eHw5JW7lq4qhc5yRLS11ERl1c7Z6Xd0a58DozHES6EnNNwUWbMiG4J9Cgj053Bhk8zvlhFYKVhULwg==}
'@formatjs/ecma402-abstract@2.3.4':
resolution: {integrity: sha512-qrycXDeaORzIqNhBOx0btnhpD1c+/qFIHAN9znofuMJX6QBwtbrmlpWfD4oiUUD2vJUOIYFA/gYtg2KAMGG7sA==}
'@formatjs/fast-memoize@2.2.7':
resolution: {integrity: sha512-Yabmi9nSvyOMrlSeGGWDiH7rf3a7sIwplbvo/dlz9WCIjzIQAfy1RMf4S0X3yG724n5Ghu2GmEl5NJIV6O9sZQ==}
'@formatjs/icu-messageformat-parser@2.11.2':
resolution: {integrity: sha512-AfiMi5NOSo2TQImsYAg8UYddsNJ/vUEv/HaNqiFjnI3ZFfWihUtD5QtuX6kHl8+H+d3qvnE/3HZrfzgdWpsLNA==}
'@formatjs/icu-skeleton-parser@1.8.14':
resolution: {integrity: sha512-i4q4V4qslThK4Ig8SxyD76cp3+QJ3sAqr7f6q9VVfeGtxG9OhiAk3y9XF6Q41OymsKzsGQ6OQQoJNY4/lI8TcQ==}
'@formatjs/intl-localematcher@0.5.10':
resolution: {integrity: sha512-af3qATX+m4Rnd9+wHcjJ4w2ijq+rAVP3CCinJQvFv1kgSu1W6jypUmvleJxcewdxmutM8dmIRZFxO/IQBZmP2Q==}
'@formatjs/intl-localematcher@0.6.1':
resolution: {integrity: sha512-ePEgLgVCqi2BBFnTMWPfIghu6FkbZnnBVhO2sSxvLfrdFw7wCHAHiDoM2h4NRgjbaY7+B7HgOLZGkK187pZTZg==}
'@hookform/resolvers@5.0.1':
resolution: {integrity: sha512-u/+Jp83luQNx9AdyW2fIPGY6Y7NG68eN2ZW8FOJYL+M0i4s49+refdJdOp/A9n9HFQtQs3HIDHQvX3ZET2o7YA==}
peerDependencies:
@@ -1285,6 +1312,9 @@ packages:
'@scena/matrix@1.1.1':
resolution: {integrity: sha512-JVKBhN0tm2Srl+Yt+Ywqu0oLgLcdemDQlD1OxmN9jaCTwaFPZ7tY8n6dhVgMEaR9qcR7r+kAlMXnSfNyYdE+Vg==}
'@schummar/icu-type-parser@1.21.5':
resolution: {integrity: sha512-bXHSaW5jRTmke9Vd0h5P7BtWZG9Znqb8gSDxZnxaGSJnGwPLDPfS+3g0BKzeWqzgZPsIVZkM7m2tbo18cm5HBw==}
'@standard-schema/utils@0.3.0':
resolution: {integrity: sha512-e7Mew686owMaPJVNNLs55PUvgz371nKgwsc4vxE49zsODpJEnxgxRo2y/OKrqueavXgZNMDVj3DdHFlaSAeU8g==}
@@ -1680,6 +1710,9 @@ packages:
'@types/hast@3.0.4':
resolution: {integrity: sha512-WPs+bbQw5aCj+x6laNGWLH3wviHtoCv/P3+otBhbOhJgG8qtpdAMlTCxLtsTWA7LH1Oh/bFCHsBn0TPS5m30EQ==}
'@types/js-cookie@3.0.6':
resolution: {integrity: sha512-wkw9yd1kEXOPnvEeEV1Go1MmxtBJL0RR79aOTAApecWFVu7w0NNXNqhcWgvw2YgZDYadliXkl14pa3WXw5jlCQ==}
'@types/json-schema@7.0.15':
resolution: {integrity: sha512-5+fP8P8MFNC+AyZCDxrB2pkZFPGzqQWUzpSeuuVLvm8VMcorNYavBqoFcxK8bQz4Qsbn4oUEEem4wDLfcysGHA==}
@@ -2264,6 +2297,9 @@ packages:
supports-color:
optional: true
decimal.js@10.5.0:
resolution: {integrity: sha512-8vDa8Qxvr/+d94hSh5P3IJwI5t8/c0KsMp+g8bNw9cY2icONa5aPfvKeieW1WlG0WQYwwhJ7mjui2xtiePQSXw==}
decode-named-character-reference@1.1.0:
resolution: {integrity: sha512-Wy+JTSbFThEOXQIR2L6mxJvEs+veIzpmqD7ynWxMXGpnk3smkHQOp6forLdHsKpAMW9iJpaBBIxz285t1n1C3w==}
@@ -2752,6 +2788,9 @@ packages:
resolution: {integrity: sha512-4gd7VpWNQNB4UKKCFFVcp1AVv+FMOgs9NKzjHKusc8jTMhd5eL1NqQqOpE0KzMds804/yHlglp3uxgluOqAPLw==}
engines: {node: '>= 0.4'}
intl-messageformat@10.7.16:
resolution: {integrity: sha512-UmdmHUmp5CIKKjSoE10la5yfU+AYJAaiYLsodbjL4lji83JNvgOQUjGaGhGrpFCb0Uh7sl7qfP1IyILa8Z40ug==}
is-alphabetical@1.0.4:
resolution: {integrity: sha512-DwzsA04LQ10FHTZuL0/grVDk4rFoVH1pjAToYwBrHSxcrBIGQuXrQMtD5U1b0U2XVgKZCTLLP8u2Qxqhy3l2Vg==}
@@ -2912,6 +2951,10 @@ packages:
react:
optional: true
js-cookie@3.0.5:
resolution: {integrity: sha512-cEiJEAEoIbWfCZYKWhVwFuvPX1gETRYPw6LlaTKoxD3s2AkXzkCjnp6h0V77ozyqj0jakteJ4YqDJT830+lVGw==}
engines: {node: '>=14'}
js-tokens@4.0.0:
resolution: {integrity: sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==}
@@ -3305,9 +3348,23 @@ packages:
natural-compare@1.4.0:
resolution: {integrity: sha512-OWND8ei3VtNC9h7V60qff3SVobHr996CTwgxubgyQYEpg290h9J0buyECNNJexkFm5sOajh5G116RYA1c8ZMSw==}
negotiator@1.0.0:
resolution: {integrity: sha512-8Ofs/AUQh8MaEcrlq5xOX0CQ9ypTF5dl78mjlMNfOK08fzpgTHQRQPBxcPlEtIw0yRpws+Zo/3r+5WRby7u3Gg==}
engines: {node: '>= 0.6'}
neo-async@2.6.2:
resolution: {integrity: sha512-Yd3UES5mWCSqR+qNT93S3UoYUkqAZ9lLg8a7g9rimsWmYGK8cVToA4/sF3RrshdyV3sAGMXVUmpMYOw+dLpOuw==}
next-intl@4.3.1:
resolution: {integrity: sha512-FylHpOoQw5MpOyJt4cw8pNEGba7r3jKDSqt112fmBqXVceGR5YncmqpxS5MvSHsWRwbjqpOV8OsZCIY/4f4HWg==}
peerDependencies:
next: ^12.0.0 || ^13.0.0 || ^14.0.0 || ^15.0.0
react: ^16.8.0 || ^17.0.0 || ^18.0.0 || >=19.0.0-rc <19.0.0 || ^19.0.0
typescript: ^5.0.0
peerDependenciesMeta:
typescript:
optional: true
next-themes@0.4.6:
resolution: {integrity: sha512-pZvgD5L0IEvX5/9GWyHMf3m8BKiVQwsCMHfoFosXtXBMnaS0ZnIJ9ST4b4NqLVKDEm8QBxoNNGNaBv2JNF6XNA==}
peerDependencies:
@@ -4128,6 +4185,11 @@ packages:
peerDependencies:
react: '*'
use-intl@4.3.1:
resolution: {integrity: sha512-8Xn5RXzeHZhWqqZimi1wi2pKFqm0NxRUOB41k1QdjbPX+ysoeLW3Ey+fi603D/e5EGb0fYw8WzjgtUagJdlIvg==}
peerDependencies:
react: ^17.0.0 || ^18.0.0 || >=19.0.0-rc <19.0.0 || ^19.0.0
use-sidecar@1.1.3:
resolution: {integrity: sha512-Fedw0aZvkhynoPYlA5WXrMCAMm+nSWdZt6lzJQ7Ok8S6Q+VsHmHpRWndVRJ8Be0ZbkfPc5LRYH+5XrzXcEeLRQ==}
engines: {node: '>=10'}
@@ -4378,6 +4440,36 @@ snapshots:
'@floating-ui/utils@0.2.9': {}
'@formatjs/ecma402-abstract@2.3.4':
dependencies:
'@formatjs/fast-memoize': 2.2.7
'@formatjs/intl-localematcher': 0.6.1
decimal.js: 10.5.0
tslib: 2.8.1
'@formatjs/fast-memoize@2.2.7':
dependencies:
tslib: 2.8.1
'@formatjs/icu-messageformat-parser@2.11.2':
dependencies:
'@formatjs/ecma402-abstract': 2.3.4
'@formatjs/icu-skeleton-parser': 1.8.14
tslib: 2.8.1
'@formatjs/icu-skeleton-parser@1.8.14':
dependencies:
'@formatjs/ecma402-abstract': 2.3.4
tslib: 2.8.1
'@formatjs/intl-localematcher@0.5.10':
dependencies:
tslib: 2.8.1
'@formatjs/intl-localematcher@0.6.1':
dependencies:
tslib: 2.8.1
'@hookform/resolvers@5.0.1(react-hook-form@7.56.1(react@19.1.0))':
dependencies:
'@standard-schema/utils': 0.3.0
@@ -5248,6 +5340,8 @@ snapshots:
dependencies:
'@daybrush/utils': 1.13.0
'@schummar/icu-type-parser@1.21.5': {}
'@standard-schema/utils@0.3.0': {}
'@swc/counter@0.1.3': {}
@@ -5633,6 +5727,8 @@ snapshots:
dependencies:
'@types/unist': 3.0.3
'@types/js-cookie@3.0.6': {}
'@types/json-schema@7.0.15': {}
'@types/json5@0.0.29': {}
@@ -6256,6 +6352,8 @@ snapshots:
dependencies:
ms: 2.1.3
decimal.js@10.5.0: {}
decode-named-character-reference@1.1.0:
dependencies:
character-entities: 2.0.2
@@ -6919,6 +7017,13 @@ snapshots:
hasown: 2.0.2
side-channel: 1.1.0
intl-messageformat@10.7.16:
dependencies:
'@formatjs/ecma402-abstract': 2.3.4
'@formatjs/fast-memoize': 2.2.7
'@formatjs/icu-messageformat-parser': 2.11.2
tslib: 2.8.1
is-alphabetical@1.0.4: {}
is-alphabetical@2.0.1: {}
@@ -7081,6 +7186,8 @@ snapshots:
'@types/react': 19.1.2
react: 19.1.0
js-cookie@3.0.5: {}
js-tokens@4.0.0: {}
js-yaml@4.1.0:
@@ -7660,8 +7767,20 @@ snapshots:
natural-compare@1.4.0: {}
negotiator@1.0.0: {}
neo-async@2.6.2: {}
next-intl@4.3.1(next@15.3.0(@opentelemetry/api@1.9.0)(react-dom@19.1.0(react@19.1.0))(react@19.1.0))(react@19.1.0)(typescript@5.8.3):
dependencies:
'@formatjs/intl-localematcher': 0.5.10
negotiator: 1.0.0
next: 15.3.0(@opentelemetry/api@1.9.0)(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
react: 19.1.0
use-intl: 4.3.1(react@19.1.0)
optionalDependencies:
typescript: 5.8.3
next-themes@0.4.6(react-dom@19.1.0(react@19.1.0))(react@19.1.0):
dependencies:
react: 19.1.0
@@ -8737,6 +8856,13 @@ snapshots:
dependencies:
react: 19.1.0
use-intl@4.3.1(react@19.1.0):
dependencies:
'@formatjs/fast-memoize': 2.2.7
'@schummar/icu-type-parser': 1.21.5
intl-messageformat: 10.7.16
react: 19.1.0
use-sidecar@1.1.3(@types/react@19.1.2)(react@19.1.0):
dependencies:
detect-node-es: 1.1.0
File diff suppressed because one or more lines are too long
@@ -0,0 +1,93 @@
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "reasoning_content": "我需要仔细分析用户的问题。用户想了解什么是vibe coding。这是一个相对较新的概念,我需要收集相关信息来提供全面的答案。"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "reasoning_content": "\n\n首先,我应该理解vibe coding的基本定义和概念。这可能涉及编程文化、开发方法论或者特定的编程风格。"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "reasoning_content": "\n\n然后,我需要研究它的起源、核心理念,以及在实际开发中的应用。这将帮助我提供一个全面而准确的答案。"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "reasoning_content": "\n\n让我思考一下需要收集哪些具体信息:\n1. Vibe coding的定义和起源\n2. 核心理念和哲学\n3. 实际应用场景和案例\n4. 与传统编程方法的区别\n5. 社区和工具支持"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "reasoning_content": "\n\n基于这些思考,我认为需要进行深入的研究来收集足够的信息。现在我将制定一个详细的研究计划。"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "{"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"locale\": \"zh-CN\","}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"has_enough_context\": false,"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"thought\": \"用户想了解vibe coding的概念。"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "由于目前没有足够的信息来全面回答这个问题,"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "我需要收集更多相关数据。\","}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"title\": \"Vibe Coding 概念研究\","}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"steps\": ["}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n {"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"need_search\": true,"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"title\": \"Vibe Coding 基本定义和概念\","}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"description\": \"收集关于vibe coding的基本定义、"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "起源、核心概念和目标的信息。"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "查找官方定义、行业专家的解释以及相关的编程文化背景。\","}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"step_type\": \"research\""}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n },"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n {"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"need_search\": true,"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"title\": \"实际应用案例和最佳实践\","}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"description\": \"研究vibe coding在实际项目中的应用案例,"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "了解最佳实践和常见的实现方法。\","}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"step_type\": \"research\""}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n }"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n ]"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n}"}
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "finish_reason": "stop"}
@@ -2,17 +2,12 @@
// SPDX-License-Identifier: MIT
import { motion } from "framer-motion";
import { useTranslations } from "next-intl";
import { cn } from "~/lib/utils";
import { Welcome } from "./welcome";
const questions = [
"How many times taller is the Eiffel Tower than the tallest building in the world?",
"How many years does an average Tesla battery last compared to a gasoline engine?",
"How many liters of water are required to produce 1 kg of beef?",
"How many times faster is the speed of light compared to the speed of sound?",
];
export function ConversationStarter({
className,
onSend,
@@ -20,6 +15,9 @@ export function ConversationStarter({
className?: string;
onSend?: (message: string) => void;
}) {
const t = useTranslations("chat");
const questions = t.raw("conversationStarters") as string[];
return (
<div className={cn("flex flex-col items-center", className)}>
<div className="pointer-events-none fixed inset-0 flex items-center justify-center">
@@ -41,7 +39,7 @@ export function ConversationStarter({
}}
>
<div
className="bg-card text-muted-foreground cursor-pointer rounded-2xl border px-4 py-4 opacity-75 transition-all duration-300 hover:opacity-100 hover:shadow-md"
className="bg-card text-muted-foreground h-full w-full cursor-pointer rounded-2xl border px-4 py-4 opacity-75 transition-all duration-300 hover:opacity-100 hover:shadow-md"
onClick={() => {
onSend?.(question);
}}
@@ -3,8 +3,9 @@
import { MagicWandIcon } from "@radix-ui/react-icons";
import { AnimatePresence, motion } from "framer-motion";
import { ArrowUp, X } from "lucide-react";
import { useCallback, useRef, useState } from "react";
import { ArrowUp, Lightbulb, X } from "lucide-react";
import { useTranslations } from "next-intl";
import { useCallback, useMemo, useRef, useState } from "react";
import { Detective } from "~/components/deer-flow/icons/detective";
import MessageInput, {
@@ -15,8 +16,10 @@ import { Tooltip } from "~/components/deer-flow/tooltip";
import { BorderBeam } from "~/components/magicui/border-beam";
import { Button } from "~/components/ui/button";
import { enhancePrompt } from "~/core/api";
import { useConfig } from "~/core/api/hooks";
import type { Option, Resource } from "~/core/messages";
import {
setEnableDeepThinking,
setEnableBackgroundInvestigation,
useSettingsStore,
} from "~/core/store";
@@ -44,9 +47,15 @@ export function InputBox({
onCancel?: () => void;
onRemoveFeedback?: () => void;
}) {
const t = useTranslations("chat.inputBox");
const tCommon = useTranslations("common");
const enableDeepThinking = useSettingsStore(
(state) => state.general.enableDeepThinking,
);
const backgroundInvestigation = useSettingsStore(
(state) => state.general.enableBackgroundInvestigation,
);
const { config, loading } = useConfig();
const reportStyle = useSettingsStore((state) => state.general.reportStyle);
const containerRef = useRef<HTMLDivElement>(null);
const inputRef = useRef<MessageInputRef>(null);
@@ -197,24 +206,57 @@ export function InputBox({
isEnhanceAnimating && "transition-all duration-500",
)}
ref={inputRef}
loading={loading}
config={config}
onEnter={handleSendMessage}
onChange={setCurrentPrompt}
/>
</div>
<div className="flex items-center px-4 py-2">
<div className="flex grow gap-2">
{config?.models.reasoning?.[0] && (
<Tooltip
className="max-w-60"
title={
<div>
<h3 className="mb-2 font-bold">
{t("deepThinkingTooltip.title", {
status: enableDeepThinking ? t("on") : t("off"),
})}
</h3>
<p>
{t("deepThinkingTooltip.description", {
model: config.models.reasoning?.[0] ?? "",
})}
</p>
</div>
}
>
<Button
className={cn(
"rounded-2xl",
enableDeepThinking && "!border-brand !text-brand",
)}
variant="outline"
onClick={() => {
setEnableDeepThinking(!enableDeepThinking);
}}
>
<Lightbulb /> {t("deepThinking")}
</Button>
</Tooltip>
)}
<Tooltip
className="max-w-60"
title={
<div>
<h3 className="mb-2 font-bold">
Investigation Mode: {backgroundInvestigation ? "On" : "Off"}
{t("investigationTooltip.title", {
status: backgroundInvestigation ? t("on") : t("off"),
})}
</h3>
<p>
When enabled, DeerFlow will perform a quick search before
planning. This is useful for research related to ongoing
events and news.
</p>
<p>{t("investigationTooltip.description")}</p>
</div>
}
>
@@ -228,13 +270,13 @@ export function InputBox({
setEnableBackgroundInvestigation(!backgroundInvestigation)
}
>
<Detective /> Investigation
<Detective /> {t("investigation")}
</Button>
</Tooltip>
<ReportStyleDialog />
</div>
<div className="flex shrink-0 items-center gap-2">
<Tooltip title="Enhance prompt with AI">
<Tooltip title={t("enhancePrompt")}>
<Button
variant="ghost"
size="icon"
@@ -254,7 +296,7 @@ export function InputBox({
)}
</Button>
</Tooltip>
<Tooltip title={responding ? "Stop" : "Send"}>
<Tooltip title={responding ? tCommon("stop") : tCommon("send")}>
<Button
variant="outline"
size="icon"
@@ -3,8 +3,15 @@
import { LoadingOutlined } from "@ant-design/icons";
import { motion } from "framer-motion";
import { Download, Headphones } from "lucide-react";
import { useCallback, useMemo, useRef, useState } from "react";
import {
Download,
Headphones,
ChevronDown,
ChevronRight,
Lightbulb,
} from "lucide-react";
import { useTranslations } from "next-intl";
import React, { useCallback, useMemo, useRef, useState } from "react";
import { LoadingAnimation } from "~/components/deer-flow/loading-animation";
import { Markdown } from "~/components/deer-flow/markdown";
@@ -23,6 +30,11 @@ import {
CardHeader,
CardTitle,
} from "~/components/ui/card";
import {
Collapsible,
CollapsibleContent,
CollapsibleTrigger,
} from "~/components/ui/collapsible";
import type { Message, Option } from "~/core/messages";
import {
closeResearch,
@@ -241,6 +253,7 @@ function ResearchCard({
researchId: string;
onToggleResearch?: () => void;
}) {
const t = useTranslations("chat.research");
const reportId = useStore((state) => state.researchReportIds.get(researchId));
const hasReport = reportId !== undefined;
const reportGenerating = useStore(
@@ -249,10 +262,10 @@ function ResearchCard({
const openResearchId = useStore((state) => state.openResearchId);
const state = useMemo(() => {
if (hasReport) {
return reportGenerating ? "Generating report..." : "Report generated";
return reportGenerating ? t("generatingReport") : t("reportGenerated");
}
return "Researching...";
}, [hasReport, reportGenerating]);
return t("researching");
}, [hasReport, reportGenerating, t]);
const msg = useResearchMessage(researchId);
const title = useMemo(() => {
if (msg) {
@@ -272,8 +285,8 @@ function ResearchCard({
<Card className={cn("w-full", className)}>
<CardHeader>
<CardTitle>
<RainbowText animated={state !== "Report generated"}>
{title !== undefined && title !== "" ? title : "Deep Research"}
<RainbowText animated={state !== t("reportGenerated")}>
{title !== undefined && title !== "" ? title : t("deepResearch")}
</RainbowText>
</CardTitle>
</CardHeader>
@@ -286,7 +299,7 @@ function ResearchCard({
variant={!openResearchId ? "default" : "outline"}
onClick={handleOpen}
>
{researchId !== openResearchId ? "Open" : "Close"}
{researchId !== openResearchId ? t("open") : t("close")}
</Button>
</div>
</CardFooter>
@@ -294,6 +307,115 @@ function ResearchCard({
);
}
function ThoughtBlock({
className,
content,
isStreaming,
hasMainContent,
}: {
className?: string;
content: string;
isStreaming?: boolean;
hasMainContent?: boolean;
}) {
const t = useTranslations("chat.research");
const [isOpen, setIsOpen] = useState(true);
const [hasAutoCollapsed, setHasAutoCollapsed] = useState(false);
React.useEffect(() => {
if (hasMainContent && !hasAutoCollapsed) {
setIsOpen(false);
setHasAutoCollapsed(true);
}
}, [hasMainContent, hasAutoCollapsed]);
if (!content || content.trim() === "") {
return null;
}
return (
<div className={cn("mb-6 w-full", className)}>
<Collapsible open={isOpen} onOpenChange={setIsOpen}>
<CollapsibleTrigger asChild>
<Button
variant="ghost"
className={cn(
"h-auto w-full justify-start rounded-xl border px-6 py-4 text-left transition-all duration-200",
"hover:bg-accent hover:text-accent-foreground",
isStreaming
? "border-primary/20 bg-primary/5 shadow-sm"
: "border-border bg-card",
)}
>
<div className="flex w-full items-center gap-3">
<Lightbulb
size={18}
className={cn(
"shrink-0 transition-colors duration-200",
isStreaming ? "text-primary" : "text-muted-foreground",
)}
/>
<span
className={cn(
"leading-none font-semibold transition-colors duration-200",
isStreaming ? "text-primary" : "text-foreground",
)}
>
{t("deepThinking")}
</span>
{isStreaming && <LoadingAnimation className="ml-2 scale-75" />}
<div className="flex-grow" />
{isOpen ? (
<ChevronDown
size={16}
className="text-muted-foreground transition-transform duration-200"
/>
) : (
<ChevronRight
size={16}
className="text-muted-foreground transition-transform duration-200"
/>
)}
</div>
</Button>
</CollapsibleTrigger>
<CollapsibleContent className="data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:slide-up-2 data-[state=open]:slide-down-2 mt-3">
<Card
className={cn(
"transition-all duration-200",
isStreaming ? "border-primary/20 bg-primary/5" : "border-border",
)}
>
<CardContent>
<div className="flex h-40 w-full overflow-y-auto">
<ScrollContainer
className={cn(
"flex h-full w-full flex-col overflow-hidden",
className,
)}
scrollShadow={false}
autoScrollToBottom
>
<Markdown
className={cn(
"prose dark:prose-invert max-w-none transition-colors duration-200",
isStreaming ? "prose-primary" : "opacity-80",
)}
animated={isStreaming}
>
{content}
</Markdown>
</ScrollContainer>
</div>
</CardContent>
</Card>
</CollapsibleContent>
</Collapsible>
</div>
);
}
const GREETINGS = ["Cool", "Sounds great", "Looks good", "Great", "Awesome"];
function PlanCard({
className,
@@ -313,6 +435,7 @@ function PlanCard({
) => void;
waitForFeedback?: boolean;
}) {
const t = useTranslations("chat.research");
const plan = useMemo<{
title?: string;
thought?: string;
@@ -320,6 +443,17 @@ function PlanCard({
}>(() => {
return parseJSON(message.content ?? "", {});
}, [message.content]);
const reasoningContent = message.reasoningContent;
const hasMainContent = Boolean(
message.content && message.content.trim() !== "",
);
// Thinking in progress: reasoning content exists but no main content yet
const isThinking = Boolean(reasoningContent && !hasMainContent);
// Show the plan once main content exists (regardless of whether streaming is still in progress)
const shouldShowPlan = hasMainContent;
const handleAccept = useCallback(async () => {
if (onSendMessage) {
onSendMessage(
@@ -331,67 +465,90 @@ function PlanCard({
}
}, [onSendMessage]);
return (
<Card className={cn("w-full", className)}>
<CardHeader>
<CardTitle>
<Markdown animated>
{`### ${
plan.title !== undefined && plan.title !== ""
? plan.title
: "Deep Research"
}`}
</Markdown>
</CardTitle>
</CardHeader>
<CardContent>
<Markdown className="opacity-80" animated>
{plan.thought}
</Markdown>
{plan.steps && (
<ul className="my-2 flex list-decimal flex-col gap-4 border-l-[2px] pl-8">
{plan.steps.map((step, i) => (
<li key={`step-${i}`}>
<h3 className="mb text-lg font-medium">
<Markdown animated>{step.title}</Markdown>
</h3>
<div className="text-muted-foreground text-sm">
<Markdown animated>{step.description}</Markdown>
</div>
</li>
))}
</ul>
)}
</CardContent>
<CardFooter className="flex justify-end">
{!message.isStreaming && interruptMessage?.options?.length && (
<motion.div
className="flex gap-2"
initial={{ opacity: 0, y: 12 }}
animate={{ opacity: 1, y: 0 }}
transition={{ duration: 0.3, delay: 0.3 }}
>
{interruptMessage?.options.map((option) => (
<Button
key={option.value}
variant={option.value === "accepted" ? "default" : "outline"}
disabled={!waitForFeedback}
onClick={() => {
if (option.value === "accepted") {
void handleAccept();
} else {
onFeedback?.({
option,
});
}
}}
>
{option.text}
</Button>
))}
</motion.div>
)}
</CardFooter>
</Card>
<div className={cn("w-full", className)}>
{reasoningContent && (
<ThoughtBlock
content={reasoningContent}
isStreaming={isThinking}
hasMainContent={hasMainContent}
/>
)}
{shouldShowPlan && (
<motion.div
initial={{ opacity: 0, y: 20 }}
animate={{ opacity: 1, y: 0 }}
transition={{ duration: 0.3, ease: "easeOut" }}
>
<Card className="w-full">
<CardHeader>
<CardTitle>
<Markdown animated={message.isStreaming}>
{`### ${
plan.title !== undefined && plan.title !== ""
? plan.title
: t("deepResearch")
}`}
</Markdown>
</CardTitle>
</CardHeader>
<CardContent>
<Markdown className="opacity-80" animated={message.isStreaming}>
{plan.thought}
</Markdown>
{plan.steps && (
<ul className="my-2 flex list-decimal flex-col gap-4 border-l-[2px] pl-8">
{plan.steps.map((step, i) => (
<li key={`step-${i}`}>
<h3 className="mb text-lg font-medium">
<Markdown animated={message.isStreaming}>
{step.title}
</Markdown>
</h3>
<div className="text-muted-foreground text-sm">
<Markdown animated={message.isStreaming}>
{step.description}
</Markdown>
</div>
</li>
))}
</ul>
)}
</CardContent>
<CardFooter className="flex justify-end">
{!message.isStreaming && interruptMessage?.options?.length && (
<motion.div
className="flex gap-2"
initial={{ opacity: 0, y: 12 }}
animate={{ opacity: 1, y: 0 }}
transition={{ duration: 0.3, delay: 0.3 }}
>
{interruptMessage?.options.map((option) => (
<Button
key={option.value}
variant={
option.value === "accepted" ? "default" : "outline"
}
disabled={!waitForFeedback}
onClick={() => {
if (option.value === "accepted") {
void handleAccept();
} else {
onFeedback?.({
option,
});
}
}}
>
{option.text}
</Button>
))}
</motion.div>
)}
</CardFooter>
</Card>
</motion.div>
)}
</div>
);
}
@@ -3,6 +3,7 @@
import { motion } from "framer-motion";
import { FastForward, Play } from "lucide-react";
import { useTranslations } from "next-intl";
import { useCallback, useRef, useState } from "react";
import { RainbowText } from "~/components/deer-flow/rainbow-text";
@@ -27,6 +28,7 @@ import { MessageListView } from "./message-list-view";
import { Welcome } from "./welcome";
export function MessagesBlock({ className }: { className?: string }) {
const t = useTranslations("chat.messages");
const messageIds = useMessageIds();
const messageCount = messageIds.length;
const responding = useStore((state) => state.responding);
@@ -152,16 +154,16 @@ export function MessagesBlock({ className }: { className?: string }) {
<CardHeader className={cn("flex-grow", responding && "pl-3")}>
<CardTitle>
<RainbowText animated={responding}>
{responding ? "Replaying" : `${replayTitle}`}
{responding ? t("replaying") : `${replayTitle}`}
</RainbowText>
</CardTitle>
<CardDescription>
<RainbowText animated={responding}>
{responding
? "DeerFlow is now replaying the conversation..."
? t("replayDescription")
: replayStarted
? "The replay has been stopped."
: `You're now in DeerFlow's replay mode. Click the "Play" button on the right to start.`}
? t("replayHasStopped")
: t("replayModeDescription")}
</RainbowText>
</CardDescription>
</CardHeader>
@@ -175,13 +177,13 @@ export function MessagesBlock({ className }: { className?: string }) {
onClick={handleFastForwardReplay}
>
<FastForward size={16} />
Fast Forward
{t("fastForward")}
</Button>
)}
{!replayStarted && (
<Button className="w-24" onClick={handleStartReplay}>
<Play size={16} />
Play
{t("play")}
</Button>
)}
</div>
@@ -190,17 +192,16 @@ export function MessagesBlock({ className }: { className?: string }) {
</Card>
{!replayStarted && env.NEXT_PUBLIC_STATIC_WEBSITE_ONLY && (
<div className="text-muted-foreground w-full text-center text-xs">
* This site is for demo purposes only. If you want to try your
own question, please{" "}
{t("demoNotice")}{" "}
<a
className="underline"
href="https://github.com/bytedance/deer-flow"
target="_blank"
rel="noopener noreferrer"
>
click here
{t("clickHere")}
</a>{" "}
to clone it locally and run it.
{t("cloneLocally")}
</div>
)}
</motion.div>
@@ -5,6 +5,7 @@ import { PythonOutlined } from "@ant-design/icons";
import { motion } from "framer-motion";
import { LRUCache } from "lru-cache";
import { BookOpenText, FileText, PencilRuler, Search } from "lucide-react";
import { useTranslations } from "next-intl";
import { useTheme } from "next-themes";
import { useMemo } from "react";
import SyntaxHighlighter from "react-syntax-highlighter";
@@ -122,6 +123,7 @@ type SearchResult =
};
function WebSearchToolCall({ toolCall }: { toolCall: ToolCallRuntime }) {
const t = useTranslations("chat.research");
const searching = useMemo(() => {
return toolCall.result === undefined;
}, [toolCall.result]);
@@ -159,7 +161,7 @@ function WebSearchToolCall({ toolCall }: { toolCall: ToolCallRuntime }) {
animated={searchResults === undefined}
>
<Search size={16} className={"mr-2"} />
<span>Searching for&nbsp;</span>
<span>{t("searchingFor")}&nbsp;</span>
<span className="max-w-[500px] overflow-hidden text-ellipsis whitespace-nowrap">
{(toolCall.args as { query: string }).query}
</span>
@@ -238,6 +240,7 @@ function WebSearchToolCall({ toolCall }: { toolCall: ToolCallRuntime }) {
}
function CrawlToolCall({ toolCall }: { toolCall: ToolCallRuntime }) {
const t = useTranslations("chat.research");
const url = useMemo(
() => (toolCall.args as { url: string }).url,
[toolCall.args],
@@ -251,7 +254,7 @@ function CrawlToolCall({ toolCall }: { toolCall: ToolCallRuntime }) {
animated={toolCall.result === undefined}
>
<BookOpenText size={16} className={"mr-2"} />
<span>Reading</span>
<span>{t("reading")}</span>
</RainbowText>
</div>
<ul className="mt-2 flex flex-wrap gap-4">
@@ -279,6 +282,7 @@ function CrawlToolCall({ toolCall }: { toolCall: ToolCallRuntime }) {
}
function RetrieverToolCall({ toolCall }: { toolCall: ToolCallRuntime }) {
const t = useTranslations("chat.research");
const searching = useMemo(() => {
return toolCall.result === undefined;
}, [toolCall.result]);
@@ -292,7 +296,7 @@ function RetrieverToolCall({ toolCall }: { toolCall: ToolCallRuntime }) {
<div className="font-medium italic">
<RainbowText className="flex items-center" animated={searching}>
<Search size={16} className={"mr-2"} />
<span>Retrieving documents from RAG&nbsp;</span>
<span>{t("retrievingDocuments")}&nbsp;</span>
<span className="max-w-[500px] overflow-hidden text-ellipsis whitespace-nowrap">
{(toolCall.args as { keywords: string }).keywords}
</span>
@@ -337,6 +341,7 @@ function RetrieverToolCall({ toolCall }: { toolCall: ToolCallRuntime }) {
}
function PythonToolCall({ toolCall }: { toolCall: ToolCallRuntime }) {
const t = useTranslations("chat.research");
const code = useMemo<string | undefined>(() => {
return (toolCall.args as { code?: string }).code;
}, [toolCall.args]);
@@ -349,7 +354,7 @@ function PythonToolCall({ toolCall }: { toolCall: ToolCallRuntime }) {
className="text-base font-medium italic"
animated={toolCall.result === undefined}
>
Running Python code
{t("runningPythonCode")}
</RainbowText>
</div>
<div>
@@ -373,6 +378,7 @@ function PythonToolCall({ toolCall }: { toolCall: ToolCallRuntime }) {
}
function PythonToolCallResult({ result }: { result: string }) {
const t = useTranslations("chat.research");
const { resolvedTheme } = useTheme();
const hasError = useMemo(
() => result.includes("Error executing code:\n"),
@@ -399,7 +405,7 @@ function PythonToolCallResult({ result }: { result: string }) {
return (
<>
<div className="mt-4 font-medium italic">
{hasError ? "Error when executing the above code" : "Execution output"}
{hasError ? t("errorExecutingCode") : t("executionOutput")}
</div>
<div className="bg-accent mt-2 max-h-[400px] max-w-[calc(100%-120px)] overflow-y-auto rounded-md p-2 text-sm">
<SyntaxHighlighter
@@ -2,6 +2,7 @@
// SPDX-License-Identifier: MIT
import { Check, Copy, Headphones, Pencil, Undo2, X, Download } from "lucide-react";
import { useTranslations } from "next-intl";
import { useCallback, useEffect, useState } from "react";
import { ScrollContainer } from "~/components/deer-flow/scroll-container";
@@ -23,6 +24,7 @@ export function ResearchBlock({
className?: string;
researchId: string | null;
}) {
const t = useTranslations("chat.research");
const reportId = useStore((state) =>
researchId ? state.researchReportIds.get(researchId) : undefined,
);
@@ -108,7 +110,7 @@ export function ResearchBlock({
<div className="absolute right-4 flex h-9 items-center justify-center">
{hasReport && !reportStreaming && (
<>
<Tooltip title="Generate podcast">
<Tooltip title={t("generatePodcast")}>
<Button
className="text-gray-400"
size="icon"
@@ -119,7 +121,7 @@ export function ResearchBlock({
<Headphones />
</Button>
</Tooltip>
<Tooltip title="Edit">
<Tooltip title={t("edit")}>
<Button
className="text-gray-400"
size="icon"
@@ -130,7 +132,7 @@ export function ResearchBlock({
{editing ? <Undo2 /> : <Pencil />}
</Button>
</Tooltip>
<Tooltip title="Copy">
<Tooltip title={t("copy")}>
<Button
className="text-gray-400"
size="icon"
@@ -140,7 +142,7 @@ export function ResearchBlock({
{copied ? <Check /> : <Copy />}
</Button>
</Tooltip>
<Tooltip title="Download report as markdown">
<Tooltip title={t("downloadReport")}>
<Button
className="text-gray-400"
size="icon"
@@ -152,7 +154,7 @@ export function ResearchBlock({
</Tooltip>
</>
)}
<Tooltip title="Close">
<Tooltip title={t("close")}>
<Button
className="text-gray-400"
size="sm"
@@ -177,10 +179,10 @@ export function ResearchBlock({
value="report"
disabled={!hasReport}
>
Report
{t("report")}
</TabsTrigger>
<TabsTrigger className="px-8" value="activities">
Activities
{t("activities")}
</TabsTrigger>
</TabsList>
</div>
@@ -3,12 +3,16 @@
import { StarFilledIcon, GitHubLogoIcon } from "@radix-ui/react-icons";
import Link from "next/link";
import { useTranslations } from 'next-intl';
import { LanguageSwitcher } from "~/components/deer-flow/language-switcher";
import { NumberTicker } from "~/components/magicui/number-ticker";
import { Button } from "~/components/ui/button";
import { env } from "~/env";
export async function SiteHeader() {
export function SiteHeader() {
const t = useTranslations('common');
return (
<header className="supports-backdrop-blur:bg-background/80 bg-background/40 sticky top-0 left-0 z-40 flex h-15 w-full flex-col items-center backdrop-blur-lg">
<div className="container flex h-15 items-center justify-between px-3">
@@ -16,7 +20,8 @@ export async function SiteHeader() {
<span className="mr-1 text-2xl">🦌</span>
<span>DeerFlow</span>
</div>
<div className="relative flex items-center">
<div className="relative flex items-center gap-2">
<LanguageSwitcher />
<div
className="pointer-events-none absolute inset-0 z-0 h-full w-full rounded-full opacity-60 blur-2xl"
style={{
@@ -32,7 +37,7 @@ export async function SiteHeader() {
>
<Link href="https://github.com/bytedance/deer-flow" target="_blank">
<GitHubLogoIcon className="size-4" />
Star on GitHub
{t('starOnGitHub')}
{env.NEXT_PUBLIC_STATIC_WEBSITE_ONLY &&
env.GITHUB_OAUTH_TOKEN && <StarCounter />}
</Link>
@@ -2,10 +2,13 @@
// SPDX-License-Identifier: MIT
import { motion } from "framer-motion";
import { useTranslations } from "next-intl";
import { cn } from "~/lib/utils";
export function Welcome({ className }: { className?: string }) {
const t = useTranslations("chat.welcome");
return (
<motion.div
className={cn("flex flex-col", className)}
@@ -13,21 +16,9 @@ export function Welcome({ className }: { className?: string }) {
initial={{ opacity: 0, scale: 0.85 }}
animate={{ opacity: 1, scale: 1 }}
>
<h3 className="mb-2 text-center text-3xl font-medium">
👋 Hello, there!
</h3>
<h3 className="mb-2 text-center text-3xl font-medium">{t("greeting")}</h3>
<div className="text-muted-foreground px-4 text-center text-lg">
Welcome to{" "}
<a
href="https://github.com/bytedance/deer-flow"
target="_blank"
rel="noopener noreferrer"
className="hover:underline"
>
🦌 DeerFlow
</a>
, a deep research assistant built on cutting-edge language models that helps
you search the web, browse information, and handle complex tasks.
{t("description")}
</div>
</motion.div>
);
