Compare commits

...

55 Commits

Author SHA1 Message Date
Henry Li 03df43feb1 docs: add VolcEngine introduction. 2025-06-12 13:28:41 +08:00
Willem Jiang ee1af78767 test: added unit tests for rag (#298)
* test: added unit tests for rag

* reformat the code
2025-06-11 19:46:08 +08:00
Willem Jiang 2554e4ba63 test: add unit tests of llms (#299) 2025-06-11 19:46:01 +08:00
LeoJiaXin 397ac57235 fix: input text not cleared when clicking submit button (#303) 2025-06-11 11:11:48 +08:00
DanielWalnut 447e427fd3 refactor: refine the background check logic (#306) 2025-06-11 11:10:02 +08:00
Muharrem Okutan eeff1ebf80 feat: added report download button (#78) 2025-06-11 09:50:48 +08:00
DanielWalnut 1cd6aa0ece feat: implement enhance prompt (#294)
* feat: implement enhance prompt

* add unit test

* fix prompt

* fix: fix eslint and compiling issues

* feat: add border-beam animation

* fix: fix importing issues

---------

Co-authored-by: Henry Li <henry1943@163.com>
2025-06-08 19:41:59 +08:00
Willem Jiang 8081a14c21 test: unit tests for configuration (#291)
* test: unit tests for configuration

* test: update the test_configuration.py file

* test: reformat the test code
2025-06-07 21:51:26 +08:00
Willem Jiang c6ed423021 test: add unit tests of crawler (#292)
* test: add unit tests of crawler

* test: polish the code of crawler unit tests
2025-06-07 21:51:05 +08:00
DanielWalnut 0e22c373af feat: support adjusting writing style (#290)
* feat: implement backend for adjusting report style

* feat: add web part

* fix test cases

* fix: fix typing

---------

Co-authored-by: Henry Li <henry1943@163.com>
2025-06-07 20:48:39 +08:00
Xintao Wang cda3870add fix: enable proxy support in aiohttp by adding trust_env=True (#289) 2025-06-07 15:30:13 +08:00
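The `trust_env=True` flag referenced in the commit above makes aiohttp honor the standard `HTTP_PROXY`/`HTTPS_PROXY`/`NO_PROXY` environment variables. A minimal sketch of a session created this way (the function name and call site are illustrative, not the project's actual code):

```python
import aiohttp

async def fetch(url: str) -> str:
    # trust_env=True makes aiohttp read HTTP_PROXY / HTTPS_PROXY / NO_PROXY
    # from the environment, so requests can go through a configured proxy.
    async with aiohttp.ClientSession(trust_env=True) as session:
        async with session.get(url) as resp:
            return await resp.text()
```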
DanielWalnut b5ec61bb9d refactor: refine the graph structure (#283) 2025-06-05 12:47:17 +08:00
JeffJiang 73ac8ae45a fix: web start with dotenv (#282) 2025-06-05 11:53:49 +08:00
Xavi 91648c4210 fix: correct placeholder for API key in configuration guide (#278) 2025-06-05 09:46:47 +08:00
Willem Jiang 95257800d2 fix: do not return the server side exception to client (#277) 2025-06-05 09:23:42 +08:00
Willem Jiang 45568ca95b fix: added sanitizing check on the log message (#272)
* fix: added sanitizing check on the log message

* fix: reformat the code
2025-06-03 11:50:54 +08:00
Willem Jiang db3e74629f fix: added permissions setting in the workflow (#273)
* fix: added permissions setting in the workflow

* fix: reformat the code of src/tools/retriever.py
2025-06-03 11:48:51 +08:00
SToneX 0da52d41a7 feat(chat): add animated deer to response indicator (#269) 2025-05-31 19:13:13 +08:00
Aeolusw eaaad27e44 fix: normalize line endings for consistent chunk splitting (#235) 2025-05-29 20:46:57 +08:00
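Normalizing line endings before chunk splitting, as in the commit above, typically means collapsing `\r\n` and `\r` to `\n` so that chunk boundaries come out the same on every platform. A minimal sketch, with an illustrative splitting rule that is not taken from the project:

```python
def normalize_line_endings(text: str) -> str:
    # Collapse Windows (\r\n) and old-Mac (\r) endings to Unix (\n) so that
    # chunk boundaries are computed identically regardless of the source file.
    return text.replace("\r\n", "\n").replace("\r", "\n")

def split_into_chunks(text: str) -> list[str]:
    # Illustrative rule: split on blank lines after normalization.
    return [c for c in normalize_line_endings(text).split("\n\n") if c.strip()]
```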
JeffJiang 4ddd659d8d feat: rag retrieving tool call result display (#263)
* feat: local search tool call result display

* chore: add file copyright

* fix: missing edit plan interrupt feedback

* feat: disable pasting html into input box
2025-05-29 19:52:34 +08:00
JeffJiang 7e9fbed918 fix: editing plan style (#261) 2025-05-29 10:46:05 +08:00
JeffJiang fcbc7f1118 revert: scroll container display change (#258) 2025-05-28 19:23:32 +08:00
JeffJiang d14fb262ea fix: message block width (#257) 2025-05-28 19:11:20 +08:00
JeffJiang 9888098f8a fix: message input box reflow (#252) 2025-05-28 16:38:28 +08:00
DanielWalnut 56e35c6b7f feat: support llm env in env file (#251) 2025-05-28 16:21:40 +08:00
JeffJiang 462752b462 feat: RAG Integration (#238)
* feat: add rag provider and retriever

* feat: retriever tool

* feat: add retriever tool to the researcher node

* feat: add rag http apis

* feat: new message input supports resource mentions

* feat: new message input component support resource mentions

* refactor: need_web_search to need_search

* chore: RAG integration docs

* chore: change example api host

* fix: user message color in dark mode

* fix: mentions style

* feat: add local_search_tool to researcher prompt

* chore: research prompt

* fix: ragflow page size and reporter with

* docs: ragflow integration and add acknowledgment projects

* chore: format
2025-05-28 14:13:46 +08:00
DanielWalnut 0565ab6d27 fix: fix unit tests & background investigation search logic (#247) 2025-05-28 14:05:34 +08:00
wushiai1109 29be360954 Update nodes.py (#242)
SELECTED_SEARCH_ENGINE can never equal SearchEngine.ARXIV; it should be compared with SearchEngine.ARXIV.value, or use the encapsulated get_web_search_tool (see the sketch below).
2025-05-27 18:58:14 +08:00
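A minimal illustration of the comparison pitfall described in the note above; the enum and variable here are stand-ins rather than the project's actual definitions:

```python
from enum import Enum

class SearchEngine(Enum):
    TAVILY = "tavily"
    ARXIV = "arxiv"

# e.g. read from an environment variable, so it is a plain string
SELECTED_SEARCH_ENGINE = "arxiv"

print(SELECTED_SEARCH_ENGINE == SearchEngine.ARXIV)        # False: str vs. Enum member
print(SELECTED_SEARCH_ENGINE == SearchEngine.ARXIV.value)  # True: str vs. "arxiv"
```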
Harsha Vardhan Mannem 3ed70e11d5 Fix/server error handling (#212)
* chore: add venv/ to gitignore

* fix: add server error handling and graceful shutdown

* Fix linting issues in server.py
2025-05-22 13:45:07 +08:00
laundry 55ce399969 test: add background node unit test (#198)
* test: add background node unit test

Change-Id: Ia99f5a1687464387dcb01bbee04deaa371c6e490

* test: add background node unit test

Change-Id: I9aabcf02ff04fda40c56f3ea22abe6b8f93bf9b6

* test: fix test error

Change-Id: I3997dc53a2cfaa35501a1fbda5902ee15528124e

* test: fix unit test error

Change-Id: If4c4cd10673e76a30945674c7cda198aeabf28d0

* test: fix unit test error

Change-Id: I3dd7a6179132e5497a30ada443d88de0c47af3d4
2025-05-20 14:25:35 +08:00
DanielWalnut 8bbcdbe4de feat: config max_search_results for search engine (#192)
* feat: implement UI

* feat: config max_search_results for search engine via api

---------

Co-authored-by: Henry Li <henry1943@163.com>
2025-05-18 13:23:52 +08:00
changqingla c6bbc595c3 Fix: This PR resolves the issue of exceeding the default tool invocation limit by setting the recursion limit through an environment variable (#138)
* set recursion limit

* set recursion limit

* fix: check if the recursion_limit is within a reasonable range

* style: format code with black
2025-05-17 20:37:03 -07:00
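A minimal sketch of how such an environment-driven recursion limit might be parsed and bounds-checked. `AGENT_RECURSION_LIMIT` appears in the `.env.example` diff further down; the helper name, default, and upper bound here are assumptions, not the project's actual values:

```python
import os

def get_recursion_limit(default: int = 25) -> int:
    """Read AGENT_RECURSION_LIMIT from the environment and clamp it to a sane range."""
    raw = os.getenv("AGENT_RECURSION_LIMIT", str(default))
    try:
        limit = int(raw)
    except ValueError:
        return default
    if limit <= 0:          # reject non-positive values
        return default
    return min(limit, 100)  # illustrative upper bound
```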
牧毅 ffe706d0df Allow server.py and web to run concurrently in production mode. (#25)
* Allow server.py and web to run concurrently in production mode.

* Allow server.py and web to run concurrently in production mode.

* Allow server.py and web to run concurrently in production mode.
2025-05-17 20:33:00 -07:00
DanielWalnut f7d79b6d83 refactor: upgrade langgraph version (#148) 2025-05-18 11:29:41 +08:00
cndoit18 d69128495b feat: add .venv to dockerignore and optimize Dockerfile with cache mounts for uv (#145)
Change-Id: I27ff2d4f9bcdedbd0135e109ecb6aa6d78bc488b
2025-05-17 21:21:55 +08:00
Willem Jiang 9dc78c3829 fix: added the Portuguese README entry to the README files (#184) 2025-05-16 21:43:01 +08:00
Ernâni de Britto Murtinho 96fb5d653b Added Portuguese pt-br Readme File Version (#127) 2025-05-16 21:10:17 +08:00
Wang Hao e27c43f005 fix: add model_dump (#137)
Co-authored-by: Willem Jiang <143703838+willem-bd@users.noreply.github.com>
2025-05-16 21:05:46 +08:00
hao-cyber c3886e635d docs: add Spanish and Russian translations for README (#183) 2025-05-16 20:56:04 +08:00
Zhengbin Sun c046d9cc34 fix: update responsive design calculations for chat layout (#168) 2025-05-16 11:40:26 +08:00
XingLiu0923 9cff113862 feat(ut): add ut coverage check (#170) 2025-05-15 08:56:13 -07:00
Leo Hui a43db94fb6 feat: refactor crawler trust link style (#166)
* feat: refactor crawler trust link style

* feat: enhance link credibility checks in Markdown and related components
2025-05-15 17:17:10 +08:00
JeffJiang 8802eea0ba fix: report editor styles (#163)
* fix: report editor styles
2025-05-15 15:18:01 +08:00
Leo Hui 1a59accb52 fix: adjust slider width for responsive design in multi-agent visualization (#134) 2025-05-15 11:59:16 +08:00
JeffJiang 86295ed195 fix: hallucination link warn (#158) 2025-05-15 10:58:24 +08:00
JeffJiang bf4820c68f Check whether the output links are hallucinations from AI (#139)
* feat: check whether output links are hallucinations from AI
2025-05-15 10:39:53 +08:00
Abeautifulsnow 25e7b86f02 optimize docker backend image size (#130) 2025-05-15 09:52:14 +08:00
Maxim Kot 0459e3c9f8 Update README.md (#122)
Fixed the link to the configuration from the docker section
2025-05-15 08:54:33 +08:00
DanielWalnut 5cc0e61297 refactor: refine the step execute human message (#144) 2025-05-14 18:54:14 +08:00
Henry Li a220f4b6ea feat: add python result and error handling (#141) 2025-05-14 03:47:28 -07:00
DanielWalnut f73a7a229c refactor: add existing research findings into step human message (#140) 2025-05-14 18:40:14 +08:00
laundry d983149984 fix: roll back to fix the version update error (#136)
* fix: fix package error

Change-Id: I0f7a962656df7e45da03b591296fbf3afc398b64

* fix: rollback uv.lock

Change-Id: I465849c0d3d7a4d0757ecd79ff156feab1217f70
2025-05-14 17:06:53 +08:00
laundry 3d5e579ebd fix: fix start error when the search engine is not tavily and env TAVILY_API_KEY does not exist (#133)
Change-Id: I58e865a11e89acaa3c0b884578cd995d0e9b5422
2025-05-14 14:45:36 +08:00
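The kind of guard this fix implies: require `TAVILY_API_KEY` only when Tavily is actually the selected engine. `SEARCH_API` and `TAVILY_API_KEY` are the variable names used in `.env.example` below; the validation helper itself is a hypothetical sketch:

```python
import os

def validate_search_config() -> None:
    """Require TAVILY_API_KEY only when Tavily is the selected search engine."""
    engine = os.getenv("SEARCH_API", "tavily").lower()
    if engine == "tavily" and not os.getenv("TAVILY_API_KEY"):
        raise RuntimeError("TAVILY_API_KEY must be set when SEARCH_API=tavily")
    # Other engines (duckduckgo, brave_search, arxiv) can start without it.
```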
Leo Hui a14ca92c36 refactor: extract link and image components for Markdown rendering (#119) 2025-05-14 10:45:34 +08:00
XingLiu0923 b85a7592dc feat(trace): add langsmith tracing (#126) 2025-05-14 10:12:50 +08:00
111 changed files with 16520 additions and 469 deletions
+1
View File
@@ -26,6 +26,7 @@ wheels/
*.egg-info/
.installed.cfg
*.egg
.venv/
# Web
node_modules
+14
View File
@@ -5,17 +5,31 @@ APP_ENV=development
# docker build args
NEXT_PUBLIC_API_URL="http://localhost:8000/api"
AGENT_RECURSION_LIMIT=30
# Search Engine, Supported values: tavily (recommended), duckduckgo, brave_search, arxiv
SEARCH_API=tavily
TAVILY_API_KEY=tvly-xxx
# BRAVE_SEARCH_API_KEY=xxx # Required only if SEARCH_API is brave_search
# JINA_API_KEY=jina_xxx # Optional, default is None
# Optional, RAG provider
# RAG_PROVIDER=ragflow
# RAGFLOW_API_URL="http://localhost:9388"
# RAGFLOW_API_KEY="ragflow-xxx"
# RAGFLOW_RETRIEVAL_SIZE=10
# Optional, volcengine TTS for generating podcast
VOLCENGINE_TTS_APPID=xxx
VOLCENGINE_TTS_ACCESS_TOKEN=xxx
# VOLCENGINE_TTS_CLUSTER=volcano_tts # Optional, default is volcano_tts
# VOLCENGINE_TTS_VOICE_TYPE=BV700_V2_streaming # Optional, default is BV700_V2_streaming
# Optional, for langsmith tracing and monitoring
# LANGSMITH_TRACING=true
# LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
# LANGSMITH_API_KEY="xxx"
# LANGSMITH_PROJECT="xxx"
# [!NOTE]
# For model settings and other configurations, please refer to `docs/configuration_guide.md`
+3
View File
@@ -6,6 +6,9 @@ on:
pull_request:
branches: [ '*' ]
permissions:
contents: read
jobs:
lint:
runs-on: ubuntu-latest
+21 -2
View File
@@ -6,6 +6,9 @@ on:
pull_request:
branches: [ '*' ]
permissions:
contents: read
jobs:
test:
runs-on: ubuntu-latest
@@ -23,7 +26,23 @@ jobs:
uv pip install -e ".[dev]"
uv pip install -e ".[test]"
- name: Run test cases
- name: Run test cases with coverage
run: |
source .venv/bin/activate
TAVILY_API_KEY=mock-key make test
TAVILY_API_KEY=mock-key make coverage
- name: Generate HTML Coverage Report
run: |
source .venv/bin/activate
python -m coverage html -d coverage_html
- name: Upload Coverage Report
uses: actions/upload-artifact@v4
with:
name: coverage-report
path: coverage_html/
- name: Display Coverage Summary
run: |
source .venv/bin/activate
python -m coverage report
+4
View File
@@ -11,6 +11,7 @@ static/browser_history/*.gif
# Virtual environments
.venv
venv/
# Environment variables
.env
@@ -21,3 +22,6 @@ conf.yaml
.idea/
.langgraph_api/
# coverage report
coverage.xml
coverage/
+9 -2
View File
@@ -1,15 +1,22 @@
FROM ghcr.io/astral-sh/uv:python3.12-bookworm
FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim
# Install uv.
COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv
WORKDIR /app
# Pre-cache the application dependencies.
RUN --mount=type=cache,target=/root/.cache/uv \
--mount=type=bind,source=uv.lock,target=uv.lock \
--mount=type=bind,source=pyproject.toml,target=pyproject.toml \
uv sync --locked --no-install-project
# Copy the application into the container.
COPY . /app
# Install the application dependencies.
RUN uv sync --frozen --no-cache
RUN --mount=type=cache,target=/root/.cache/uv \
uv sync --locked
EXPOSE 8000
+1 -1
View File
@@ -19,4 +19,4 @@ langgraph-dev:
uvx --refresh --from "langgraph-cli[inmem]" --with-editable . --python 3.12 langgraph dev --allow-blocking
coverage:
uv run pytest --cov=src tests/ --cov-report=term-missing
uv run pytest --cov=src tests/ --cov-report=term-missing --cov-report=xml
+37 -3
View File
@@ -2,11 +2,11 @@
[![Python 3.12+](https://img.shields.io/badge/python-3.12+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![DeepWiki](https://img.shields.io/badge/DeepWiki-bytedance%2Fdeer--flow-blue.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACwAAAAyCAYAAAAnWDnqAAAAAXNSR0IArs4c6QAAA05JREFUaEPtmUtyEzEQhtWTQyQLHNak2AB7ZnyXZMEjXMGeK/AIi+QuHrMnbChYY7MIh8g01fJoopFb0uhhEqqcbWTp06/uv1saEDv4O3n3dV60RfP947Mm9/SQc0ICFQgzfc4CYZoTPAswgSJCCUJUnAAoRHOAUOcATwbmVLWdGoH//PB8mnKqScAhsD0kYP3j/Yt5LPQe2KvcXmGvRHcDnpxfL2zOYJ1mFwrryWTz0advv1Ut4CJgf5uhDuDj5eUcAUoahrdY/56ebRWeraTjMt/00Sh3UDtjgHtQNHwcRGOC98BJEAEymycmYcWwOprTgcB6VZ5JK5TAJ+fXGLBm3FDAmn6oPPjR4rKCAoJCal2eAiQp2x0vxTPB3ALO2CRkwmDy5WohzBDwSEFKRwPbknEggCPB/imwrycgxX2NzoMCHhPkDwqYMr9tRcP5qNrMZHkVnOjRMWwLCcr8ohBVb1OMjxLwGCvjTikrsBOiA6fNyCrm8V1rP93iVPpwaE+gO0SsWmPiXB+jikdf6SizrT5qKasx5j8ABbHpFTx+vFXp9EnYQmLx02h1QTTrl6eDqxLnGjporxl3NL3agEvXdT0WmEost648sQOYAeJS9Q7bfUVoMGnjo4AZdUMQku50McDcMWcBPvr0SzbTAFDfvJqwLzgxwATnCgnp4wDl6Aa+Ax283gghmj+vj7feE2KBBRMW3FzOpLOADl0Isb5587h/U4gGvkt5v60Z1VLG8BhYjbzRwyQZemwAd6cCR5/XFWLYZRIMpX39AR0tjaGGiGzLVyhse5C9RKC6ai42ppWPKiBagOvaYk8lO7DajerabOZP46Lby5wKjw1HCRx7p9sVMOWGzb/vA1hwiWc6jm3MvQDTogQkiqIhJV0nBQBTU+3okKCFDy9WwferkHjtxib7t3xIUQtHxnIwtx4mpg26/HfwVNVDb4oI9RHmx5WGelRVlrtiw43zboCLaxv46AZeB3IlTkwouebTr1y2NjSpHz68WNFjHvupy3q8TFn3Hos2IAk4Ju5dCo8B3wP7VPr/FGaKiG+T+v+TQqIrOqMTL1VdWV1DdmcbO8KXBz6esmYWYKPwDL5b5FA1a0hwapHiom0r/cKaoqr+27/XcrS5UwSMbQAAAABJRU5ErkJggg==)](https://deepwiki.com/bytedance/deer-flow)
[![DeepWiki](https://img.shields.io/badge/DeepWiki-bytedance%2Fdeer--flow-blue.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACwAAAAyCAYAAAAnWDnqAAAAAXNSR0IArs4c6QAAA05JREFUaEPtmUtyEzEQhtWTQyQLHNak2AB7ZnyXZMEjXMGeK/AIi+QuHrMnbChYY7MIh8g01fJoopFb0uhhEqqcbWTp06/uv1saEDv4O3n3dV60RfP947Mm9/SQc0ICFQgzfc4CYZoTPAswgSJCCUJUnAAoRHOAUOcATwbmVLWdGoH//PB8mnKqScAhsD0kYP3j/Yt5LPQe2KvcXmGvRHcDnpxfL2zOYJ1mFwrryWTz0advv1Ut4CJgf5uhDuDj5eUcAUoahrdY/56ebRWeraTjMt/00Sh3UDtjgHtQNHwcRGOC98BJEAEymycmYcWwOprTgcB6VZ5JK5TAJ+fXGLBm3FDAmn6oPPjR4rKCAoJCal2eAiQp2x0vxTPB3ALO2CRkwmDy5WohzBDwSEFKRwPbknEggCPB/imwrycgxX2NzoMCHhPkDwqYMr9tRcP5qNrMZHkVnOjRMWwLCcr8ohBVb1OMjxLwGCvjTikrsBOiA6fNyCrm8V1rP93iVPpwaE+gO0SsWmPiXB+jikdf6SizrT5qKasx5j8ABbHpFTx+vFXp9EnYQmLx02h1QTTrl6eDqxLnGjporxl3NL3agEvXdT0WmEost648sQOYAeJS9Q7bfUVoMGnjo4AZdUMQku50McCcMWcBPvr0SzbTAFDfvJqwLzgxwATnCgnp4wDl6Aa+Ax283gghmj+vj7feE2KBBRMW3FzOpLOADl0Isb5587h/U4gGvkt5v60Z1VLG8BhYjbzRwyQZemwAd6cCR5/XFWLYZRIMpX39AR0tjaGGiGzLVyhse5C9RKC6ai42ppWPKiBagOvaYk8lO7DajerabOZP46Lby5wKjw1HCRx7p9sVMOWGzb/vA1hwiWc6jm3MvQDTogQkiqIhJV0nBQBTU+3okKCFDy9WwferkHjtxib7t3xIUQtHxnIwtx4mpg26/HfwVNVDb4oI9RHmx5WGelRVlrtiw43zboCLaxv46AZeB3IlTkwouebTr1y2NjSpHz68WNFjHvupy3q8TFn3Hos2IAk4Ju5dCo8B3wP7VPr/FGaKiG+T+v+TQqIrOqMTL1VdWV1DdmcbO8KXBz6esmYWYKPwDL5b5FA1a0hwapHiom0r/cKaoqr+27/XcrS5UwSMbQAAAABJRU5ErkJggg==)](https://deepwiki.com/bytedance/deer-flow)
<!-- DeepWiki badge generated by https://deepwiki.ryoppippi.com/ -->
[English](./README.md) | [简体中文](./README_zh.md) | [日本語](./README_ja.md) | [Deutsch](./README_de.md)
[English](./README.md) | [简体中文](./README_zh.md) | [日本語](./README_ja.md) | [Deutsch](./README_de.md) | [Español](./README_es.md) | [Русский](./README_ru.md) | [Portuguese](./README_pt.md)
> Originated from Open Source, give back to Open Source.
@@ -189,6 +189,18 @@ SEARCH_API=tavily
- Crawling with Jina
- Advanced content extraction
- 📃 **RAG Integration**
- Supports mentioning files from [RAGFlow](https://github.com/infiniflow/ragflow) within the input box. [Start up RAGFlow server](https://ragflow.io/docs/dev/).
```bash
# .env
RAG_PROVIDER=ragflow
RAGFLOW_API_URL="http://localhost:9388"
RAGFLOW_API_KEY="ragflow-xxx"
RAGFLOW_RETRIEVAL_SIZE=10
```
- 🔗 **MCP Seamless Integration**
- Expand capabilities for private domain access, knowledge graph, web browsing and more
- Facilitates integration of diverse research tools and methodologies
@@ -347,11 +359,31 @@ When you submit a research topic in the Studio UI, you'll be able to see the ent
- The research and writing phases for each section
- The final report generation
### Enabling LangSmith Tracing
DeerFlow supports LangSmith tracing to help you debug and monitor your workflows. To enable LangSmith tracing:
1. Make sure your `.env` file has the following configurations (see `.env.example`):
```bash
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
LANGSMITH_API_KEY="xxx"
LANGSMITH_PROJECT="xxx"
```
2. Start tracing and visualize the graph locally with LangSmith by running:
```bash
langgraph dev
```
This will enable trace visualization in LangGraph Studio and send your traces to LangSmith for monitoring and analysis.
## Docker
You can also run this project with Docker.
First, you need read the [configuration](#configuration) below. Make sure `.env`, `.conf.yaml` files are ready.
First, you need read the [configuration](docs/configuration_guide.md) below. Make sure `.env`, `.conf.yaml` files are ready.
Second, to build a Docker image of your own web server:
@@ -519,6 +551,8 @@ We would like to extend our sincere appreciation to the following projects for t
- **[LangChain](https://github.com/langchain-ai/langchain)**: Their exceptional framework powers our LLM interactions and chains, enabling seamless integration and functionality.
- **[LangGraph](https://github.com/langchain-ai/langgraph)**: Their innovative approach to multi-agent orchestration has been instrumental in enabling DeerFlow's sophisticated workflows.
- **[Novel](https://github.com/steven-tey/novel)**: Their Notion-style WYSIWYG editor supports our report editing and AI-assisted rewriting.
- **[RAGFlow](https://github.com/infiniflow/ragflow)**: We have achieved support for research on users' private knowledge bases through integration with RAGFlow.
These projects exemplify the transformative power of open-source collaboration, and we are proud to build upon their foundations.
+21 -2
View File
@@ -2,10 +2,10 @@
[![Python 3.12+](https://img.shields.io/badge/python-3.12+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![DeepWiki](https://img.shields.io/badge/DeepWiki-bytedance%2Fdeer--flow-blue.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACwAAAAyCAYAAAAnWDnqAAAAAXNSR0IArs4c6QAAA05JREFUaEPtmUtyEzEQhtWTQyQLHNak2AB7ZnyXZMEjXMGeK/AIi+QuHrMnbChYY7MIh8g01fJoopFb0uhhEqqcbWTp06/uv1saEDv4O3n3dV60RfP947Mm9/SQc0ICFQgzfc4CYZoTPAswgSJCCUJUnAAoRHOAUOcATwbmVLWdGoH//PB8mnKqScAhsD0kYP3j/Yt5LPQe2KvcXmGvRHcDnpxfL2zOYJ1mFwrryWTz0advv1Ut4CJgf5uhDuDj5eUcAUoahrdY/56ebRWeraTjMt/00Sh3UDtjgHtQNHwcRGOC98BJEAEymycmYcWwOprTgcB6VZ5JK5TAJ+fXGLBm3FDAmn6oPPjR4rKCAoJCal2eAiQp2x0vxTPB3ALO2CRkwmDy5WohzBDwSEFKRwPbknEggCPB/imwrycgxX2NzoMCHhPkDwqYMr9tRcP5qNrMZHkVnOjRMWwLCcr8ohBVb1OMjxLwGCvjTikrsBOiA6fNyCrm8V1rP93iVPpwaE+gO0SsWmPiXB+jikdf6SizrT5qKasx5j8ABbHpFTx+vFXp9EnYQmLx02h1QTTrl6eDqxLnGjporxl3NL3agEvXdT0WmEost648sQOYAeJS9Q7bfUVoMGnjo4AZdUMQku50McDcMWcBPvr0SzbTAFDfvJqwLzgxwATnCgnp4wDl6Aa+Ax283gghmj+vj7feE2KBBRMW3FzOpLOADl0Isb5587h/U4gGvkt5v60Z1VLG8BhYjbzRwyQZemwAd6cCR5/XFWLYZRIMpX39AR0tjaGGiGzLVyhse5C9RKC6ai42ppWPKiBagOvaYk8lO7DajerabOZP46Lby5wKjw1HCRx7p9sVMOWGzb/vA1hwiWc6jm3MvQDTogQkiqIhJV0nBQBTU+3okKCFDy9WwferkHjtxib7t3xIUQtHxnIwtx4mpg26/HfwVNVDb4oI9RHmx5WGelRVlrtiw43zboCLaxv46AZeB3IlTkwouebTr1y2NjSpHz68WNFjHvupy3q8TFn3Hos2IAk4Ju5dCo8B3wP7VPr/FGaKiG+T+v+TQqIrOqMTL1VdWV1DdmcbO8KXBz6esmYWYKPwDL5b5FA1a0hwapHiom0r/cKaoqr+27/XcrS5UwSMbQAAAABJRU5ErkJggg==)](https://deepwiki.com/bytedance/deer-flow)
[![DeepWiki](https://img.shields.io/badge/DeepWiki-bytedance%2Fdeer--flow-blue.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACwAAAAyCAYAAAAnWDnqAAAAAXNSR0IArs4c6QAAA05JREFUaEPtmUtyEzEQhtWTQyQLHNak2AB7ZnyXZMEjXMGeK/AIi+QuHrMnbChYY7MIh8g01fJoopFb0uhhEqqcbWTp06/uv1saEDv4O3n3dV60RfP947Mm9/SQc0ICFQgzfc4CYZoTPAswgSJCCUJUnAAoRHOAUOcATwbmVLWdGoH//PB8mnKqScAhsD0kYP3j/Yt5LPQe2KvcXmGvRHcDnpxfL2zOYJ1mFwrryWTz0advv1Ut4CJgf5uhDuDj5eUcAUoahrdY/56ebRWeraTjMt/00Sh3UDtjgHtQNHwcRGOC98BJEAEymycmYcWwOprTgcB6VZ5JK5TAJ+fXGLBm3FDAmn6oPPjR4rKCAoJCal2eAiQp2x0vxTPB3ALO2CRkwmDy5WohzBDwSEFKRwPbknEggCPB/imwrycgxX2NzoMCHhPkDwqYMr9tRcP5qNrMZHkVnOjRMWwLCcr8ohBVb1OMjxLwGCvjTikrsBOiA6fNyCrm8V1rP93iVPpwaE+gO0SsWmPiXB+jikdf6SizrT5qKasx5j8ABbHpFTx+vFXp9EnYQmLx02h1QTTrl6eDqxLnGjporxl3NL3agEvXdT0WmEost648sQOYAeJS9Q7bfUVoMGnjo4AZdUMQku50McCcMWcBPvr0SzbTAFDfvJqwLzgxwATnCgnp4wDl6Aa+Ax283gghmj+vj7feE2KBBRMW3FzOpLOADl0Isb5587h/U4gGvkt5v60Z1VLG8BhYjbzRwyQZemwAd6cCR5/XFWLYZRIMpX39AR0tjaGGiGzLVyhse5C9RKC6ai42ppWPKiBagOvaYk8lO7DajerabOZP46Lby5wKjw1HCRx7p9sVMOWGzb/vA1hwiWc6jm3MvQDTogQkiqIhJV0nBQBTU+3okKCFDy9WwferkHjtxib7t3xIUQtHxnIwtx4mpg26/HfwVNVDb4oI9RHmx5WGelRVlrtiw43zboCLaxv46AZeB3IlTkwouebTr1y2NjSpHz68WNFjHvupy3q8TFn3Hos2IAk4Ju5dCo8B3wP7VPr/FGaKiG+T+v+TQqIrOqMTL1VdWV1DdmcbO8KXBz6esmYWYKPwDL5b5FA1a0hwapHiom0r/cKaoqr+27/XcrS5UwSMbQAAAABJRU5ErkJggg==)](https://deepwiki.com/bytedance/deer-flow)
<!-- DeepWiki badge generated by https://deepwiki.ryoppippi.com/ -->
[English](./README.md) | [简体中文](./README_zh.md) | [日本語](./README_ja.md) | [Deutsch](./README_de.md)
[English](./README.md) | [简体中文](./README_zh.md) | [日本語](./README_ja.md) | [Deutsch](./README_de.md) | [Español](./README_es.md) | [Русский](./README_ru.md) | [Portuguese](./README_pt.md)
> Aus Open Source entstanden, an Open Source zurückgeben.
@@ -333,6 +333,25 @@ Wenn Sie ein Forschungsthema in der Studio UI einreichen, können Sie die gesamt
- Die Forschungs- und Schreibphasen für jeden Abschnitt
- Die Erstellung des endgültigen Berichts
### Aktivieren von LangSmith-Tracing
DeerFlow unterstützt LangSmith-Tracing, um Ihnen beim Debuggen und Überwachen Ihrer Workflows zu helfen. Um LangSmith-Tracing zu aktivieren:
1. Stellen Sie sicher, dass Ihre `.env`-Datei die folgenden Konfigurationen enthält (siehe `.env.example`):
```bash
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
LANGSMITH_API_KEY="xxx"
LANGSMITH_PROJECT="xxx"
```
2. Starten Sie das Tracing mit LangSmith lokal, indem Sie folgenden Befehl ausführen:
```bash
langgraph dev
```
Dies aktiviert die Trace-Visualisierung in LangGraph Studio und sendet Ihre Traces zur Überwachung und Analyse an LangSmith.
## Beispiele
Die folgenden Beispiele demonstrieren die Fähigkeiten von DeerFlow:
+554
View File
@@ -0,0 +1,554 @@
# 🦌 DeerFlow
[![Python 3.12+](https://img.shields.io/badge/python-3.12+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![DeepWiki](https://img.shields.io/badge/DeepWiki-bytedance%2Fdeer--flow-blue.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACwAAAAyCAYAAAAnWDnqAAAAAXNSR0IArs4c6QAAA05JREFUaEPtmUtyEzEQhtWTQyQLHNak2AB7ZnyXZMEjXMGeK/AIi+QuHrMnbChYY7MIh8g01fJoopFb0uhhEqqcbWTp06/uv1saEDv4O3n3dV60RfP947Mm9/SQc0ICFQgzfc4CYZoTPAswgSJCCUJUnAAoRHOAUOcATwbmVLWdGoH//PB8mnKqScAhsD0kYP3j/Yt5LPQe2KvcXmGvRHcDnpxfL2zOYJ1mFwrryWTz0advv1Ut4CJgf5uhDuDj5eUcAUoahrdY/56ebRWeraTjMt/00Sh3UDtjgHtQNHwcRGOC98BJEAEymycmYcWwOprTgcB6VZ5JK5TAJ+fXGLBm3FDAmn6oPPjR4rKCAoJCal2eAiQp2x0vxTPB3ALO2CRkwmDy5WohzBDwSEFKRwPbknEggCPB/imwrycgxX2NzoMCHhPkDwqYMr9tRcP5qNrMZHkVnOjRMWwLCcr8ohBVb1OMjxLwGCvjTikrsBOiA6fNyCrm8V1rP93iVPpwaE+gO0SsWmPiXB+jikdf6SizrT5qKasx5j8ABbHpFTx+vFXp9EnYQmLx02h1QTTrl6eDqxLnGjporxl3NL3agEvXdT0WmEost648sQOYAeJS9Q7bfUVoMGnjo4AZdUMQku50McCcMWcBPvr0SzbTAFDfvJqwLzgxwATnCgnp4wDl6Aa+Ax283gghmj+vj7feE2KBBRMW3FzOpLOADl0Isb5587h/U4gGvkt5v60Z1VLG8BhYjbzRwyQZemwAd6cCR5/XFWLYZRIMpX39AR0tjaGGiGzLVyhse5C9RKC6ai42ppWPKiBagOvaYk8lO7DajerabOZP46Lby5wKjw1HCRx7p9sVMOWGzb/vA1hwiWc6jm3MvQDTogQkiqIhJV0nBQBTU+3okKCFDy9WwferkHjtxib7t3xIUQtHxnIwtx4mpg26/HfwVNVDb4oI9RHmx5WGelRVlrtiw43zboCLaxv46AZeB3IlTkwouebTr1y2NjSpHz68WNFjHvupy3q8TFn3Hos2IAk4Ju5dCo8B3wP7VPr/FGaKiG+T+v+TQqIrOqMTL1VdWV1DdmcbO8KXBz6esmYWYKPwDL5b5FA1a0hwapHiom0r/cKaoqr+27/XcrS5UwSMbQAAAABJRU5ErkJggg==)](https://deepwiki.com/bytedance/deer-flow)
<!-- DeepWiki badge generated by https://deepwiki.ryoppippi.com/ -->
[English](./README.md) | [简体中文](./README_zh.md) | [日本語](./README_ja.md) | [Deutsch](./README_de.md) | [Español](./README_es.md) | [Русский](./README_ru.md) | [Portuguese](./README_pt.md)
> Originado del código abierto, retribuido al código abierto.
**DeerFlow** (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) es un marco de Investigación Profunda impulsado por la comunidad que se basa en el increíble trabajo de la comunidad de código abierto. Nuestro objetivo es combinar modelos de lenguaje con herramientas especializadas para tareas como búsqueda web, rastreo y ejecución de código Python, mientras devolvemos a la comunidad que hizo esto posible.
Por favor, visita [nuestra página web oficial](https://deerflow.tech/) para más detalles.
## Demostración
### Video
https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e
En esta demostración, mostramos cómo usar DeerFlow para:
- Integrar perfectamente con servicios MCP
- Realizar el proceso de Investigación Profunda y producir un informe completo con imágenes
- Crear audio de podcast basado en el informe generado
### Repeticiones
- [¿Qué altura tiene la Torre Eiffel comparada con el edificio más alto?](https://deerflow.tech/chat?replay=eiffel-tower-vs-tallest-building)
- [¿Cuáles son los repositorios más populares en GitHub?](https://deerflow.tech/chat?replay=github-top-trending-repo)
- [Escribir un artículo sobre los platos tradicionales de Nanjing](https://deerflow.tech/chat?replay=nanjing-traditional-dishes)
- [¿Cómo decorar un apartamento de alquiler?](https://deerflow.tech/chat?replay=rental-apartment-decoration)
- [Visita nuestra página web oficial para explorar más repeticiones.](https://deerflow.tech/#case-studies)
---
## 📑 Tabla de Contenidos
- [🚀 Inicio Rápido](#inicio-rápido)
- [🌟 Características](#características)
- [🏗️ Arquitectura](#arquitectura)
- [🛠️ Desarrollo](#desarrollo)
- [🐳 Docker](#docker)
- [🗣️ Integración de Texto a Voz](#integración-de-texto-a-voz)
- [📚 Ejemplos](#ejemplos)
- [❓ Preguntas Frecuentes](#preguntas-frecuentes)
- [📜 Licencia](#licencia)
- [💖 Agradecimientos](#agradecimientos)
- [⭐ Historial de Estrellas](#historial-de-estrellas)
## Inicio Rápido
DeerFlow está desarrollado en Python y viene con una interfaz web escrita en Node.js. Para garantizar un proceso de configuración sin problemas, recomendamos utilizar las siguientes herramientas:
### Herramientas Recomendadas
- **[`uv`](https://docs.astral.sh/uv/getting-started/installation/):**
Simplifica la gestión del entorno Python y las dependencias. `uv` crea automáticamente un entorno virtual en el directorio raíz e instala todos los paquetes necesarios por ti—sin necesidad de instalar entornos Python manualmente.
- **[`nvm`](https://github.com/nvm-sh/nvm):**
Gestiona múltiples versiones del entorno de ejecución Node.js sin esfuerzo.
- **[`pnpm`](https://pnpm.io/installation):**
Instala y gestiona dependencias del proyecto Node.js.
### Requisitos del Entorno
Asegúrate de que tu sistema cumple con los siguientes requisitos mínimos:
- **[Python](https://www.python.org/downloads/):** Versión `3.12+`
- **[Node.js](https://nodejs.org/en/download/):** Versión `22+`
### Instalación
```bash
# Clonar el repositorio
git clone https://github.com/bytedance/deer-flow.git
cd deer-flow
# Instalar dependencias, uv se encargará del intérprete de python, la creación del entorno virtual y la instalación de los paquetes necesarios
uv sync
# Configurar .env con tus claves API
# Tavily: https://app.tavily.com/home
# Brave_SEARCH: https://brave.com/search/api/
# volcengine TTS: Añade tus credenciales TTS si las tienes
cp .env.example .env
# Ver las secciones 'Motores de Búsqueda Compatibles' e 'Integración de Texto a Voz' a continuación para todas las opciones disponibles
# Configurar conf.yaml para tu modelo LLM y claves API
# Por favor, consulta 'docs/configuration_guide.md' para más detalles
cp conf.yaml.example conf.yaml
# Instalar marp para la generación de presentaciones
# https://github.com/marp-team/marp-cli?tab=readme-ov-file#use-package-manager
brew install marp-cli
```
Opcionalmente, instala las dependencias de la interfaz web vía [pnpm](https://pnpm.io/installation):
```bash
cd deer-flow/web
pnpm install
```
### Configuraciones
Por favor, consulta la [Guía de Configuración](docs/configuration_guide.md) para más detalles.
> [!NOTA]
> Antes de iniciar el proyecto, lee la guía cuidadosamente y actualiza las configuraciones para que coincidan con tus ajustes y requisitos específicos.
### Interfaz de Consola
La forma más rápida de ejecutar el proyecto es utilizar la interfaz de consola.
```bash
# Ejecutar el proyecto en un shell tipo bash
uv run main.py
```
### Interfaz Web
Este proyecto también incluye una Interfaz Web, que ofrece una experiencia interactiva más dinámica y atractiva.
> [!NOTA]
> Necesitas instalar primero las dependencias de la interfaz web.
```bash
# Ejecutar tanto el servidor backend como el frontend en modo desarrollo
# En macOS/Linux
./bootstrap.sh -d
# En Windows
bootstrap.bat -d
```
Abre tu navegador y visita [`http://localhost:3000`](http://localhost:3000) para explorar la interfaz web.
Explora más detalles en el directorio [`web`](./web/).
## Motores de Búsqueda Compatibles
DeerFlow soporta múltiples motores de búsqueda que pueden configurarse en tu archivo `.env` usando la variable `SEARCH_API`:
- **Tavily** (predeterminado): Una API de búsqueda especializada para aplicaciones de IA
- Requiere `TAVILY_API_KEY` en tu archivo `.env`
- Regístrate en: https://app.tavily.com/home
- **DuckDuckGo**: Motor de búsqueda centrado en la privacidad
- No requiere clave API
- **Brave Search**: Motor de búsqueda centrado en la privacidad con características avanzadas
- Requiere `BRAVE_SEARCH_API_KEY` en tu archivo `.env`
- Regístrate en: https://brave.com/search/api/
- **Arxiv**: Búsqueda de artículos científicos para investigación académica
- No requiere clave API
- Especializado en artículos científicos y académicos
Para configurar tu motor de búsqueda preferido, establece la variable `SEARCH_API` en tu archivo `.env`:
```bash
# Elige uno: tavily, duckduckgo, brave_search, arxiv
SEARCH_API=tavily
```
## Características
### Capacidades Principales
- 🤖 **Integración de LLM**
- Soporta la integración de la mayoría de los modelos a través de [litellm](https://docs.litellm.ai/docs/providers).
- Soporte para modelos de código abierto como Qwen
- Interfaz API compatible con OpenAI
- Sistema LLM de múltiples niveles para diferentes complejidades de tareas
### Herramientas e Integraciones MCP
- 🔍 **Búsqueda y Recuperación**
- Búsqueda web a través de Tavily, Brave Search y más
- Rastreo con Jina
- Extracción avanzada de contenido
- 🔗 **Integración Perfecta con MCP**
- Amplía capacidades para acceso a dominio privado, gráfico de conocimiento, navegación web y más
- Facilita la integración de diversas herramientas y metodologías de investigación
### Colaboración Humana
- 🧠 **Humano en el Bucle**
- Soporta modificación interactiva de planes de investigación usando lenguaje natural
- Soporta aceptación automática de planes de investigación
- 📝 **Post-Edición de Informes**
- Soporta edición de bloques tipo Notion
- Permite refinamientos por IA, incluyendo pulido asistido por IA, acortamiento y expansión de oraciones
- Impulsado por [tiptap](https://tiptap.dev/)
### Creación de Contenido
- 🎙️ **Generación de Podcasts y Presentaciones**
- Generación de guiones de podcast y síntesis de audio impulsadas por IA
- Creación automatizada de presentaciones PowerPoint simples
- Plantillas personalizables para contenido a medida
## Arquitectura
DeerFlow implementa una arquitectura modular de sistema multi-agente diseñada para investigación automatizada y análisis de código. El sistema está construido sobre LangGraph, permitiendo un flujo de trabajo flexible basado en estados donde los componentes se comunican a través de un sistema de paso de mensajes bien definido.
![Diagrama de Arquitectura](./assets/architecture.png)
> Vélo en vivo en [deerflow.tech](https://deerflow.tech/#multi-agent-architecture)
El sistema emplea un flujo de trabajo racionalizado con los siguientes componentes:
1. **Coordinador**: El punto de entrada que gestiona el ciclo de vida del flujo de trabajo
- Inicia el proceso de investigación basado en la entrada del usuario
- Delega tareas al planificador cuando corresponde
- Actúa como la interfaz principal entre el usuario y el sistema
2. **Planificador**: Componente estratégico para descomposición y planificación de tareas
- Analiza objetivos de investigación y crea planes de ejecución estructurados
- Determina si hay suficiente contexto disponible o si se necesita más investigación
- Gestiona el flujo de investigación y decide cuándo generar el informe final
3. **Equipo de Investigación**: Una colección de agentes especializados que ejecutan el plan:
- **Investigador**: Realiza búsquedas web y recopilación de información utilizando herramientas como motores de búsqueda web, rastreo e incluso servicios MCP.
- **Programador**: Maneja análisis de código, ejecución y tareas técnicas utilizando la herramienta Python REPL.
Cada agente tiene acceso a herramientas específicas optimizadas para su rol y opera dentro del marco LangGraph
4. **Reportero**: Procesador de etapa final para los resultados de la investigación
- Agrega hallazgos del equipo de investigación
- Procesa y estructura la información recopilada
- Genera informes de investigación completos
## Integración de Texto a Voz
DeerFlow ahora incluye una función de Texto a Voz (TTS) que te permite convertir informes de investigación a voz. Esta función utiliza la API TTS de volcengine para generar audio de alta calidad a partir de texto. Características como velocidad, volumen y tono también son personalizables.
### Usando la API TTS
Puedes acceder a la funcionalidad TTS a través del punto final `/api/tts`:
```bash
# Ejemplo de llamada API usando curl
curl --location 'http://localhost:8000/api/tts' \
--header 'Content-Type: application/json' \
--data '{
"text": "Esto es una prueba de la funcionalidad de texto a voz.",
"speed_ratio": 1.0,
"volume_ratio": 1.0,
"pitch_ratio": 1.0
}' \
--output speech.mp3
```
## Desarrollo
### Pruebas
Ejecuta el conjunto de pruebas:
```bash
# Ejecutar todas las pruebas
make test
# Ejecutar archivo de prueba específico
pytest tests/integration/test_workflow.py
# Ejecutar con cobertura
make coverage
```
### Calidad del Código
```bash
# Ejecutar linting
make lint
# Formatear código
make format
```
### Depuración con LangGraph Studio
DeerFlow utiliza LangGraph para su arquitectura de flujo de trabajo. Puedes usar LangGraph Studio para depurar y visualizar el flujo de trabajo en tiempo real.
#### Ejecutando LangGraph Studio Localmente
DeerFlow incluye un archivo de configuración `langgraph.json` que define la estructura del grafo y las dependencias para LangGraph Studio. Este archivo apunta a los grafos de flujo de trabajo definidos en el proyecto y carga automáticamente variables de entorno desde el archivo `.env`.
##### Mac
```bash
# Instala el gestor de paquetes uv si no lo tienes
curl -LsSf https://astral.sh/uv/install.sh | sh
# Instala dependencias e inicia el servidor LangGraph
uvx --refresh --from "langgraph-cli[inmem]" --with-editable . --python 3.12 langgraph dev --allow-blocking
```
##### Windows / Linux
```bash
# Instalar dependencias
pip install -e .
pip install -U "langgraph-cli[inmem]"
# Iniciar el servidor LangGraph
langgraph dev
```
Después de iniciar el servidor LangGraph, verás varias URLs en la terminal:
- API: http://127.0.0.1:2024
- UI de Studio: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
- Docs de API: http://127.0.0.1:2024/docs
Abre el enlace de UI de Studio en tu navegador para acceder a la interfaz de depuración.
#### Usando LangGraph Studio
En la UI de Studio, puedes:
1. Visualizar el grafo de flujo de trabajo y ver cómo se conectan los componentes
2. Rastrear la ejecución en tiempo real para ver cómo fluyen los datos a través del sistema
3. Inspeccionar el estado en cada paso del flujo de trabajo
4. Depurar problemas examinando entradas y salidas de cada componente
5. Proporcionar retroalimentación durante la fase de planificación para refinar planes de investigación
Cuando envías un tema de investigación en la UI de Studio, podrás ver toda la ejecución del flujo de trabajo, incluyendo:
- La fase de planificación donde se crea el plan de investigación
- El bucle de retroalimentación donde puedes modificar el plan
- Las fases de investigación y escritura para cada sección
- La generación del informe final
### Habilitando el Rastreo de LangSmith
DeerFlow soporta el rastreo de LangSmith para ayudarte a depurar y monitorear tus flujos de trabajo. Para habilitar el rastreo de LangSmith:
1. Asegúrate de que tu archivo `.env` tenga las siguientes configuraciones (ver `.env.example`):
```bash
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
LANGSMITH_API_KEY="xxx"
LANGSMITH_PROJECT="xxx"
```
2. Inicia el rastreo y visualiza el grafo localmente con LangSmith ejecutando:
```bash
langgraph dev
```
Esto habilitará la visualización de rastros en LangGraph Studio y enviará tus rastros a LangSmith para monitoreo y análisis.
## Docker
También puedes ejecutar este proyecto con Docker.
Primero, necesitas leer la [configuración](docs/configuration_guide.md) a continuación. Asegúrate de que los archivos `.env` y `.conf.yaml` estén listos.
Segundo, para construir una imagen Docker de tu propio servidor web:
```bash
docker build -t deer-flow-api .
```
Finalmente, inicia un contenedor Docker que ejecute el servidor web:
```bash
# Reemplaza deer-flow-api-app con tu nombre de contenedor preferido
docker run -d -t -p 8000:8000 --env-file .env --name deer-flow-api-app deer-flow-api
# detener el servidor
docker stop deer-flow-api-app
```
### Docker Compose (incluye tanto backend como frontend)
DeerFlow proporciona una configuración docker-compose para ejecutar fácilmente tanto el backend como el frontend juntos:
```bash
# construir imagen docker
docker compose build
# iniciar el servidor
docker compose up
```
## Ejemplos
Los siguientes ejemplos demuestran las capacidades de DeerFlow:
### Informes de Investigación
1. **Informe sobre OpenAI Sora** - Análisis de la herramienta IA Sora de OpenAI
- Discute características, acceso, ingeniería de prompts, limitaciones y consideraciones éticas
- [Ver informe completo](examples/openai_sora_report.md)
2. **Informe sobre el Protocolo Agent to Agent de Google** - Visión general del protocolo Agent to Agent (A2A) de Google
- Discute su papel en la comunicación de agentes IA y su relación con el Model Context Protocol (MCP) de Anthropic
- [Ver informe completo](examples/what_is_agent_to_agent_protocol.md)
3. **¿Qué es MCP?** - Un análisis completo del término "MCP" en múltiples contextos
- Explora Model Context Protocol en IA, Fosfato Monocálcico en química y Placa de Microcanales en electrónica
- [Ver informe completo](examples/what_is_mcp.md)
4. **Fluctuaciones del Precio de Bitcoin** - Análisis de los movimientos recientes del precio de Bitcoin
- Examina tendencias del mercado, influencias regulatorias e indicadores técnicos
- Proporciona recomendaciones basadas en datos históricos
- [Ver informe completo](examples/bitcoin_price_fluctuation.md)
5. **¿Qué es LLM?** - Una exploración en profundidad de los Modelos de Lenguaje Grandes
- Discute arquitectura, entrenamiento, aplicaciones y consideraciones éticas
- [Ver informe completo](examples/what_is_llm.md)
6. **¿Cómo usar Claude para Investigación Profunda?** - Mejores prácticas y flujos de trabajo para usar Claude en investigación profunda
- Cubre ingeniería de prompts, análisis de datos e integración con otras herramientas
- [Ver informe completo](examples/how_to_use_claude_deep_research.md)
7. **Adopción de IA en Salud: Factores de Influencia** - Análisis de factores que impulsan la adopción de IA en salud
- Discute tecnologías IA, calidad de datos, consideraciones éticas, evaluaciones económicas, preparación organizativa e infraestructura digital
- [Ver informe completo](examples/AI_adoption_in_healthcare.md)
8. **Impacto de la Computación Cuántica en la Criptografía** - Análisis del impacto de la computación cuántica en la criptografía
- Discute vulnerabilidades de la criptografía clásica, criptografía post-cuántica y soluciones criptográficas resistentes a la cuántica
- [Ver informe completo](examples/Quantum_Computing_Impact_on_Cryptography.md)
9. **Aspectos Destacados del Rendimiento de Cristiano Ronaldo** - Análisis de los aspectos destacados del rendimiento de Cristiano Ronaldo
- Discute sus logros profesionales, goles internacionales y rendimiento en varios partidos
- [Ver informe completo](examples/Cristiano_Ronaldo's_Performance_Highlights.md)
Para ejecutar estos ejemplos o crear tus propios informes de investigación, puedes usar los siguientes comandos:
```bash
# Ejecutar con una consulta específica
uv run main.py "¿Qué factores están influyendo en la adopción de IA en salud?"
# Ejecutar con parámetros de planificación personalizados
uv run main.py --max_plan_iterations 3 "¿Cómo impacta la computación cuántica en la criptografía?"
# Ejecutar en modo interactivo con preguntas integradas
uv run main.py --interactive
# O ejecutar con prompt interactivo básico
uv run main.py
# Ver todas las opciones disponibles
uv run main.py --help
```
### Modo Interactivo
La aplicación ahora soporta un modo interactivo con preguntas integradas tanto en inglés como en chino:
1. Lanza el modo interactivo:
```bash
uv run main.py --interactive
```
2. Selecciona tu idioma preferido (English o 中文)
3. Elige de una lista de preguntas integradas o selecciona la opción para hacer tu propia pregunta
4. El sistema procesará tu pregunta y generará un informe de investigación completo
### Humano en el Bucle
DeerFlow incluye un mecanismo de humano en el bucle que te permite revisar, editar y aprobar planes de investigación antes de que sean ejecutados:
1. **Revisión del Plan**: Cuando el humano en el bucle está habilitado, el sistema presentará el plan de investigación generado para tu revisión antes de la ejecución
2. **Proporcionando Retroalimentación**: Puedes:
- Aceptar el plan respondiendo con `[ACCEPTED]`
- Editar el plan proporcionando retroalimentación (p.ej., `[EDIT PLAN] Añadir más pasos sobre implementación técnica`)
- El sistema incorporará tu retroalimentación y generará un plan revisado
3. **Auto-aceptación**: Puedes habilitar la auto-aceptación para omitir el proceso de revisión:
- Vía API: Establece `auto_accepted_plan: true` en tu solicitud
4. **Integración API**: Cuando uses la API, puedes proporcionar retroalimentación a través del parámetro `feedback`:
```json
{
"messages": [{ "role": "user", "content": "¿Qué es la computación cuántica?" }],
"thread_id": "my_thread_id",
"auto_accepted_plan": false,
"feedback": "[EDIT PLAN] Incluir más sobre algoritmos cuánticos"
}
```
### Argumentos de Línea de Comandos
La aplicación soporta varios argumentos de línea de comandos para personalizar su comportamiento:
- **query**: La consulta de investigación a procesar (puede ser múltiples palabras)
- **--interactive**: Ejecutar en modo interactivo con preguntas integradas
- **--max_plan_iterations**: Número máximo de ciclos de planificación (predeterminado: 1)
- **--max_step_num**: Número máximo de pasos en un plan de investigación (predeterminado: 3)
- **--debug**: Habilitar registro detallado de depuración
## Preguntas Frecuentes
Por favor, consulta [FAQ.md](docs/FAQ.md) para más detalles.
## Licencia
Este proyecto es de código abierto y está disponible bajo la [Licencia MIT](./LICENSE).
## Agradecimientos
DeerFlow está construido sobre el increíble trabajo de la comunidad de código abierto. Estamos profundamente agradecidos a todos los proyectos y contribuyentes cuyos esfuerzos han hecho posible DeerFlow. Verdaderamente, nos apoyamos en hombros de gigantes.
Nos gustaría extender nuestro sincero agradecimiento a los siguientes proyectos por sus invaluables contribuciones:
- **[LangChain](https://github.com/langchain-ai/langchain)**: Su excepcional marco impulsa nuestras interacciones y cadenas LLM, permitiendo integración y funcionalidad sin problemas.
- **[LangGraph](https://github.com/langchain-ai/langgraph)**: Su enfoque innovador para la orquestación multi-agente ha sido instrumental en permitir los sofisticados flujos de trabajo de DeerFlow.
Estos proyectos ejemplifican el poder transformador de la colaboración de código abierto, y estamos orgullosos de construir sobre sus cimientos.
### Contribuyentes Clave
Un sentido agradecimiento va para los autores principales de `DeerFlow`, cuya visión, pasión y dedicación han dado vida a este proyecto:
- **[Daniel Walnut](https://github.com/hetaoBackend/)**
- **[Henry Li](https://github.com/magiccube/)**
Su compromiso inquebrantable y experiencia han sido la fuerza impulsora detrás del éxito de DeerFlow. Nos sentimos honrados de tenerlos al timón de este viaje.
## Historial de Estrellas
[![Gráfico de Historial de Estrellas](https://api.star-history.com/svg?repos=bytedance/deer-flow&type=Date)](https://star-history.com/#bytedance/deer-flow&Date)
+20 -1
View File
@@ -3,7 +3,7 @@
[![Python 3.12+](https://img.shields.io/badge/python-3.12+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[English](./README.md) | [简体中文](./README_zh.md) | [日本語](./README_ja.md) | [Deutsch](./README_de.md)
[English](./README.md) | [简体中文](./README_zh.md) | [日本語](./README_ja.md) | [Deutsch](./README_de.md) | [Español](./README_es.md) | [Русский](./README_ru.md) | [Portuguese](./README_pt.md)
> オープンソースから生まれ、オープンソースに還元する。
@@ -322,6 +322,25 @@ Studio UI で研究トピックを送信すると、次を含む全ワークフ
- 各セクションの研究と執筆段階
- 最終レポート生成
### LangSmith トレースの有効化
DeerFlow は LangSmith トレース機能をサポートしており、ワークフローのデバッグとモニタリングに役立ちます。LangSmith トレースを有効にするには:
1. `.env` ファイルに次の設定があることを確認してください(`.env.example` を参照):
```bash
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
LANGSMITH_API_KEY="xxx"
LANGSMITH_PROJECT="xxx"
```
2. 次のコマンドを実行して LangSmith トレースを開始します:
```bash
langgraph dev
```
これにより、LangGraph Studio でトレース可視化が有効になり、トレースがモニタリングと分析のために LangSmith に送信されます。
## Docker
このプロジェクトは Docker でも実行できます。
+545
View File
@@ -0,0 +1,545 @@
# 🦌 DeerFlow
[![Python 3.12+](https://img.shields.io/badge/python-3.12+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![DeepWiki](https://img.shields.io/badge/DeepWiki-bytedance%2Fdeer--flow-blue.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACwAAAAyCAYAAAAnWDnqAAAAAXNSR0IArs4c6QAAA05JREFUaEPtmUtyEzEQhtWTQyQLHNak2AB7ZnyXZMEjXMGeK/AIi+QuHrMnbChYY7MIh8g01fJoopFb0uhhEqqcbWTp06/uv1saEDv4O3n3dV60RfP947Mm9/SQc0ICFQgzfc4CYZoTPAswgSJCCUJUnAAoRHOAUOcATwbmVLWdGoH//PB8mnKqScAhsD0kYP3j/Yt5LPQe2KvcXmGvRHcDnpxfL2zOYJ1mFwrryWTz0advv1Ut4CJgf5uhDuDj5eUcAUoahrdY/56ebRWeraTjMt/00Sh3UDtjgHtQNHwcRGOC98BJEAEymycmYcWwOprTgcB6VZ5JK5TAJ+fXGLBm3FDAmn6oPPjR4rKCAoJCal2eAiQp2x0vxTPB3ALO2CRkwmDy5WohzBDwSEFKRwPbknEggCPB/imwrycgxX2NzoMCHhPkDwqYMr9tRcP5qNrMZHkVnOjRMWwLCcr8ohBVb1OMjxLwGCvjTikrsBOiA6fNyCrm8V1rP93iVPpwaE+gO0SsWmPiXB+jikdf6SizrT5qKasx5j8ABbHpFTx+vFXp9EnYQmLx02h1QTTrl6eDqxLnGjporxl3NL3agEvXdT0WmEost648sQOYAeJS9Q7bfUVoMGnjo4AZdUMQku50McDcMWcBPvr0SzbTAFDfvJqwLzgxwATnCgnp4wDl6Aa+Ax283gghmj+vj7feE2KBBRMW3FzOpLOADl0Isb5587h/U4gGvkt5v60Z1VLG8BhYjbzRwyQZemwAd6cCR5/XFWLYZRIMpX39AR0tjaGGiGzLVyhse5C9RKC6ai42ppWPKiBagOvaYk8lO7DajerabOZP46Lby5wKjw1HCRx7p9sVMOWGzb/vA1hwiWc6jm3MvQDTogQkiqIhJV0nBQBTU+3okKCFDy9WwferkHjtxib7t3xIUQtHxnIwtx4mpg26/HfwVNVDb4oI9RHmx5WGelRVlrtiw43zboCLaxv46AZeB3IlTkwouebTr1y2NjSpHz68WNFjHvupy3q8TFn3Hos2IAk4Ju5dCo8B3wP7VPr/FGaKiG+T+v+TQqIrOqMTL1VdWV1DdmcbO8KXBz6esmYWYKPwDL5b5FA1a0hwapHiom0r/cKaoqr+27/XcrS5UwSMbQAAAABJRU5ErkJggg==)](https://deepwiki.com/bytedance/deer-flow)
<!-- DeepWiki badge generated by https://deepwiki.ryoppippi.com/ -->
[English](./README.md) | [简体中文](./README_zh.md) | [日本語](./README_ja.md) | [Deutsch](./README_de.md) | [Español](./README_es.md) | [Русский](./README_ru.md) | [Portuguese](./README_pt.md)
> Originado do Open Source, de volta ao Open Source
**DeerFlow** (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) é um framework de Pesquisa Profunda orientado à comunidade que se baseia no incrível trabalho da comunidade open source. Nosso objetivo é combinar modelos de linguagem com ferramentas especializadas para tarefas como busca na web, crawling e execução de código Python, enquanto retribuímos à comunidade que tornou isso possível.
Por favor, visite [Nosso Site Oficial](https://deerflow.tech/) para maiores detalhes.
## Demo
### Video
https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e
Nesse demo, nós demonstramos como usar o DeerFlow para:
- Integração fácil com serviços MCP
- Conduzir o processo de Pesquisa Profunda e produzir um relatório abrangente com imagens
- Criar um áudio podcast baseado no relatório gerado
### Replays
- [Quão alta é a Torre Eiffel comparada ao prédio mais alto?](https://deerflow.tech/chat?replay=eiffel-tower-vs-tallest-building)
- [Quais são os top repositórios tendência no GitHub?](https://deerflow.tech/chat?replay=github-top-trending-repo)
- [Escreva um artigo sobre os pratos tradicionais de Nanjing's](https://deerflow.tech/chat?replay=nanjing-traditional-dishes)
- [Como decorar um apartamento alugado?](https://deerflow.tech/chat?replay=rental-apartment-decoration)
- [Visite nosso site oficial para explorar mais replays.](https://deerflow.tech/#case-studies)
---
## 📑 Tabela de Conteúdos
- [🚀 Início Rápido](#Início-Rápido)
- [🌟 Funcionalidades](#funcionalidades)
- [🏗️ Arquitetura](#arquitetura)
- [🛠️ Desenvolvimento](#desenvolvimento)
- [🐳 Docker](#docker)
- [🗣️ Texto-para-fala Integração](#texto-para-fala-integração)
- [📚 Exemplos](#exemplos)
- [❓ FAQ](#faq)
- [📜 Licença](#licença)
- [💖 Agradecimentos](#agradecimentos)
- [🏆 Contribuidores-Chave](#contribuidores-chave)
- [⭐ Histórico de Estrelas](#Histórico-Estrelas)
## Início-Rápido
DeerFlow é desenvolvido em Python, e vem com uma IU web escrita em Node.js. Para garantir um processo de configuração fácil, nós recomendamos o uso das seguintes ferramentas:
### Ferramentas Recomendadas
- **[`uv`](https://docs.astral.sh/uv/getting-started/installation/):**
Simplifica o gerenciamento de dependência de ambientes Python. `uv` automaticamente cria um ambiente virtual no diretório raiz e instala todos os pacotes necessários para não haver a necessidade de instalar ambientes Python manualmente
- **[`nvm`](https://github.com/nvm-sh/nvm):**
Gerencia múltiplas versões do ambiente de execução do Node.js sem esforço.
- **[`pnpm`](https://pnpm.io/installation):**
Instala e gerencia dependências do projeto Node.js.
### Requisitos de Ambiente
Certifique-se de que seu sistema atenda os seguintes requisitos mínimos:
- **[Python](https://www.python.org/downloads/):** Versão `3.12+`
- **[Node.js](https://nodejs.org/en/download/):** Versão `22+`
### Instalação
```bash
# Clone o repositório
git clone https://github.com/bytedance/deer-flow.git
cd deer-flow
# Instale as dependências, uv irá lidar com o interpretador do python e a criação do venv, e instalar os pacotes necessários
uv sync
# Configure .env com suas chaves de API
# Tavily: https://app.tavily.com/home
# Brave_SEARCH: https://brave.com/search/api/
# volcengine TTS: Adicione sua credencial TTS caso você a possua
cp .env.example .env
# Veja as seções abaixo 'Supported Search Engines' and 'Texto-para-Fala Integração' para todas as opções disponíveis
# Configure o conf.yaml para o seu modelo LLM e chaves API
# Por favor, consulte 'docs/configuration_guide.md' para maiores detalhes
cp conf.yaml.example conf.yaml
# Instale marp para geração de ppt
# https://github.com/marp-team/marp-cli?tab=readme-ov-file#use-package-manager
brew install marp-cli
```
Opcionalmente, instale as dependências IU web via [pnpm](https://pnpm.io/installation):
```bash
cd deer-flow/web
pnpm install
```
### Configurações
Por favor, consulte o [Guia de Configuração](docs/configuration_guide.md) para maiores detalhes.
> [!NOTA]
> Antes de iniciar o projeto, leia o guia detalhadamente, e atualize as configurações para baterem com os seus requisitos e configurações específicas.
### Console IU
A maneira mais rápida de rodar o projeto é usar o console IU.
```bash
# Execute o projeto em um shell tipo-bash
uv run main.py
```
### Web IU
Esse projeto também inclui uma IU Web, trazendo uma experiência mais interativa, dinâmica e engajadora.
> [!NOTA]
> Você precisa instalar as dependências do IU web primeiro.
```bash
# Execute ambos os servidores de backend e frontend em modo desenvolvimento
# No macOS/Linux
./bootstrap.sh -d
# No Windows
bootstrap.bat -d
```
Abra seu navegador e visite [`http://localhost:3000`](http://localhost:3000) para explorar a IU web.
Explore mais detalhes no diretório [`web`](./web/) .
## Mecanismos de Busca Suportados
DeerFlow suporta múltiplos mecanismos de busca que podem ser configurados no seu arquivo `.env` usando a variável `SEARCH_API`:
- **Tavily** (padrão): Uma API de busca especializada para aplicações de IA
- Requer `TAVILY_API_KEY` no seu arquivo `.env`
- Inscreva-se em: https://app.tavily.com/home
- **DuckDuckGo**: Mecanismo de busca focado em privacidade
- Não requer chave API
- **Brave Search**: Mecanismo de busca focado em privacidade com funcionalidades avançadas
- Requer `BRAVE_SEARCH_API_KEY` no seu arquivo `.env`
- Inscreva-se em: https://brave.com/search/api/
- **Arxiv**: Busca de artigos científicos para pesquisa acadêmica
- Não requer chave API
- Especializado em artigos científicos e acadêmicos
Para configurar o seu mecanismo preferido, defina a variável `SEARCH_API` no seu arquivo:
```bash
# Escolha uma: tavily, duckduckgo, brave_search, arxiv
SEARCH_API=tavily
```
## Funcionalidades
### Principais Funcionalidades
- 🤖 **Integração LLM**
- Suporta a integração da maioria dos modelos através de [litellm](https://docs.litellm.ai/docs/providers).
- Suporte a modelos open source como Qwen
- Interface API compatível com a OpenAI
- Sistema LLM multicamadas para diferentes complexidades de tarefa
### Ferramentas e Integrações MCP
- 🔍 **Busca e Recuperação**
- Busca web com Tavily, Brave Search e mais
- Crawling com Jina
- Extração de Conteúdo avançada
- 🔗 **Integração MCP perfeita**
- Expansão de capacidades de acesso para acesso a domínios privados, grafo de conhecimento, navegação web e mais
- Integração facilitada de diversas ferramentas e metodologias de pesquisa
### Colaboração Humana
- 🧠 **Humano-no-processo**
- Suporta modificação interativa de planos de pesquisa usando linguagem natural
- Suporta auto-aceite de planos de pesquisa
- 📝 **Relatório Pós-Edição**
- Suporta edição de blocos estilo Notion
- Permite refinamentos de IA, incluindo polimento de IA assistida, encurtamento de frase, e expansão
- Distribuído por [tiptap](https://tiptap.dev/)
### Criação de Conteúdo
- 🎙️ **Geração de Podcast e apresentação**
- Script de geração de podcast e síntese de áudio movido por IA
- Criação automatizada de apresentações PowerPoint simples
- Templates customizáveis para conteúdo personalizado
## Arquitetura
DeerFlow implementa uma arquitetura de sistema multi-agente modular designada para pesquisa e análise de código automatizada. O sistema é construído em LangGraph, possibilitando um fluxo de trabalho flexível baseado-em-estado onde os componentes se comunicam através de um sistema de transmissão de mensagens bem-definido.
![Diagrama de Arquitetura](./assets/architecture.png)
> Veja ao vivo em [deerflow.tech](https://deerflow.tech/#multi-agent-architecture)
O sistema emprega um fluxo de trabalho simplificado com os seguintes componentes:
1. **Coordenador**: O ponto de entrada que gerencia o ciclo de vida do fluxo de trabalho
- Inicia o processo de pesquisa baseado na entrada do usuário
- Delega tarefas ao planejador quando apropriado
- Atua como a interface primária entre o usuário e o sistema
2. **Planejador**: Componente estratégico para a decomposição e planejamento
- Analisa objetivos de pesquisa e cria planos de execução estruturados
- Determina se há contexto suficiente disponível ou se mais pesquisa é necessária
- Gerencia o fluxo de pesquisa e decide quando gerar o relatório final
3. **Time de Pesquisa**: Uma coleção de agentes especializados que executam o plano:
- **Pesquisador**: Conduz buscas web e coleta informações utilizando ferramentas como mecanismos de busca web, crawling e mesmo serviços MCP.
- **Programador**: Lida com a análise de código, execução e tarefas técnicas como usar a ferramenta Python REPL.
Cada agente tem acesso à ferramentas específicas otimizadas para seu papel e opera dentro do fluxo de trabalho LangGraph.
4. **Repórter**: Estágio final do processador de estágio para saídas de pesquisa
- Resultados agregados do time de pesquisa
- Processa e estrutura as informações coletadas
- Gera relatórios abrangentes de pesquisas
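The sketch below illustrates how such a coordinator → planner → researcher → reporter pipeline can be wired with LangGraph's `StateGraph`. It is a simplified illustration rather than DeerFlow's actual graph (see the project's graph builder module for the real implementation); node names mirror the components above and the node bodies are placeholders.
```python
# Simplified illustration of a coordinator -> planner -> researcher -> reporter
# pipeline using LangGraph. Not DeerFlow's actual graph; node bodies are stubs.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class ResearchState(TypedDict):
    topic: str
    plan: str
    findings: list[str]
    report: str

def coordinator(state: ResearchState) -> dict:
    # Entry point: accept the user's topic and hand off to the planner.
    return {}

def planner(state: ResearchState) -> dict:
    # Decompose the research topic into an execution plan.
    return {"plan": f"1. search the web for: {state['topic']}"}

def researcher(state: ResearchState) -> dict:
    # Execute plan steps with search/crawl/MCP tools (stubbed here).
    return {"findings": state["findings"] + ["example finding"]}

def reporter(state: ResearchState) -> dict:
    # Aggregate findings into the final report.
    return {"report": "\n".join(state["findings"])}

builder = StateGraph(ResearchState)
builder.add_node("coordinator", coordinator)
builder.add_node("planner", planner)
builder.add_node("researcher", researcher)
builder.add_node("reporter", reporter)
builder.add_edge(START, "coordinator")
builder.add_edge("coordinator", "planner")
builder.add_edge("planner", "researcher")
builder.add_edge("researcher", "reporter")
builder.add_edge("reporter", END)
graph = builder.compile()

result = graph.invoke(
    {"topic": "quantum computing", "plan": "", "findings": [], "report": ""}
)
print(result["report"])
```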
## Text-to-Speech Integration
DeerFlow now includes a Text-to-Speech (TTS) feature that lets you convert research reports to speech. The feature uses the volcengine TTS API to generate high-quality audio from text; parameters such as speed, volume, and pitch are also customizable.
### Using the TTS API
You can access the TTS functionality through the `/api/tts` endpoint:
```bash
# Example API call using curl
curl --location 'http://localhost:8000/api/tts' \
--header 'Content-Type: application/json' \
--data '{
"text": "This is a test of the text-to-speech functionality.",
"speed_ratio": 1.0,
"volume_ratio": 1.0,
"pitch_ratio": 1.0
}' \
--output speech.mp3
```
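The same request can be issued from Python. This is an illustrative client using `requests`, with the endpoint and field names taken from the curl example above.
```python
# Illustrative Python client for the /api/tts endpoint shown above.
import requests

payload = {
    "text": "This is a test of the text-to-speech functionality.",
    "speed_ratio": 1.0,
    "volume_ratio": 1.0,
    "pitch_ratio": 1.0,
}
response = requests.post("http://localhost:8000/api/tts", json=payload, timeout=60)
response.raise_for_status()

# The endpoint responds with the synthesized audio bytes.
with open("speech.mp3", "wb") as f:
    f.write(response.content)
```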
## Development
### Testing
Run the test suite:
```bash
# Run all tests
make test
# Run a specific test file
pytest tests/integration/test_workflow.py
# Run with coverage
make coverage
```
### Code Quality
```bash
# Run linting
make lint
# Format code
make format
```
### Debugging with LangGraph Studio
DeerFlow uses LangGraph for its workflow architecture. You can use LangGraph Studio to debug and visualize the workflow in real time.
#### Running LangGraph Studio Locally
DeerFlow includes a `langgraph.json` configuration file that defines the graph structure and dependencies for LangGraph Studio. The file points to the workflow graphs defined in the project and automatically loads environment variables from the `.env` file.
##### Mac
```bash
# Install the uv package manager if you don't have it
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install dependencies and start the LangGraph server
uvx --refresh --from "langgraph-cli[inmem]" --with-editable . --python 3.12 langgraph dev --allow-blocking
```
##### Windows / Linux
```bash
# Install dependencies
pip install -e .
pip install -U "langgraph-cli[inmem]"
# Start the LangGraph server
langgraph dev
```
After starting the LangGraph server, you will see several URLs in your terminal:
- API: http://127.0.0.1:2024
- Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
- API Docs: http://127.0.0.1:2024/docs
Open the Studio UI link in your browser to access the debugging interface.
#### Using LangGraph Studio
In the Studio UI, you can:
1. Visualize the workflow graph and see how its components connect
2. Trace execution in real time and see how data flows through the system
3. Inspect the state at each step of the workflow
4. Debug issues by examining the inputs and outputs of each component
5. Provide feedback during the planning phase to refine research plans
When you submit a research topic in the Studio UI, you will be able to see the entire workflow execution, including:
- The planning phase where the research plan is created
- The feedback process where you can modify the plan
- The research and writing phases for each section
- The final report generation
## Docker
You can also run this project with Docker.
First, read the [configuration](#configuration) section and make sure the `.env` and `conf.yaml` files are ready.
Next, build a Docker image of your own web server:
```bash
docker build -t deer-flow-api .
```
Finally, start a Docker container running the web server:
```bash
# Replace deer-flow-api-app with your preferred container name
docker run -d -t -p 8000:8000 --env-file .env --name deer-flow-api-app deer-flow-api
# Stop the server
docker stop deer-flow-api-app
```
### Docker Compose (includes both backend and frontend)
DeerFlow provides a docker-compose setup to easily run the backend and frontend together:
```bash
# Build the Docker images
docker compose build
# Start the servers
docker compose up
```
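The repository ships its own compose file. Purely as an illustration of what that setup amounts to, a minimal equivalent could look like this; the service names, build contexts, and frontend port are assumptions, not the project's actual `docker-compose.yml`.
```yaml
# Illustrative sketch only; not the repository's actual docker-compose.yml.
services:
  api:
    build: .            # backend API server, exposed on port 8000
    env_file: .env
    ports:
      - "8000:8000"
  web:
    build: ./web        # web UI, served on port 3000
    ports:
      - "3000:3000"
    depends_on:
      - api
```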
## Examples
The following examples demonstrate DeerFlow's capabilities:
### Research Reports
1. **OpenAI Sora Report** - Analysis of OpenAI's Sora tool
   - Discusses features, access, prompt engineering, limitations, and ethical considerations
   - [View the full report](examples/openai_sora_report.md)
2. **Google Agent-to-Agent Protocol Report** - Overview of Google's Agent-to-Agent (A2A) protocol
   - Discusses its role in AI agent communication and its relationship to Anthropic's Model Context Protocol (MCP)
   - [View the full report](examples/what_is_agent_to_agent_protocol.md)
3. **What is MCP?** - A comprehensive analysis of the term "MCP" across multiple contexts
   - Explores Model Context Protocol in AI, Monocalcium Phosphate in chemistry, and Micro-channel Plate in electronics
   - [View the full report](examples/what_is_mcp.md)
4. **Bitcoin Price Fluctuations** - Analysis of recent Bitcoin price movements
   - Examines market trends, regulatory influences, and technical indicators
   - Provides recommendations based on historical data
   - [View the full report](examples/bitcoin_price_fluctuation.md)
5. **What is LLM?** - An in-depth exploration of large language models
   - Discusses architecture, training, applications, and ethical considerations
   - [View the full report](examples/what_is_llm.md)
6. **How to Use Claude for Deep Research?** - Best practices and workflows for using Claude in deep research
   - Covers prompt engineering, data analysis, and integration with other tools
   - [View the full report](examples/how_to_use_claude_deep_research.md)
7. **AI Adoption in Healthcare: Influencing Factors** - Analysis of the factors driving AI adoption in healthcare
   - Discusses AI technologies, data quality, ethical considerations, economic evaluations, organizational readiness, and digital infrastructure
   - [View the full report](examples/AI_adoption_in_healthcare.md)
8. **Quantum Computing Impact on Cryptography** - Analysis of quantum computing's impact on cryptography
   - Discusses the vulnerabilities of classical cryptography, post-quantum cryptography, and quantum-resistant cryptographic solutions
   - [View the full report](examples/Quantum_Computing_Impact_on_Cryptography.md)
9. **Cristiano Ronaldo's Performance Highlights** - Analysis of Cristiano Ronaldo's performance highlights
   - Discusses his career achievements, international goals, and performances in various matches
   - [View the full report](examples/Cristiano_Ronaldo's_Performance_Highlights.md)
To run these examples or create your own research reports, you can use the following commands:
```bash
# Run with a specific query
uv run main.py "What factors are influencing AI adoption in healthcare?"
# Run with custom planning parameters
uv run main.py --max_plan_iterations 3 "How does quantum computing impact cryptography?"
# Run in interactive mode with built-in questions
uv run main.py --interactive
# Or run with a basic interactive prompt
uv run main.py
# View all available options
uv run main.py --help
```
### Interactive Mode
The application now supports an interactive mode with built-in questions in both English and Chinese:
1. Launch the interactive mode:
```bash
uv run main.py --interactive
```
2. Select your preferred language (English or 中文)
3. Choose from a list of built-in questions or select the option to ask your own question
4. The system will process your question and generate a comprehensive research report
### Human in the Loop
DeerFlow includes a human-in-the-loop mechanism that allows you to review, edit, and approve research plans before they are executed:
1. **Plan Review**: When human-in-the-loop is enabled, the system presents the generated research plan for your review before execution
2. **Providing Feedback**: You can:
   - Accept the plan by responding with `[ACCEPTED]`
   - Edit the plan by providing feedback (e.g., `[EDIT PLAN] Add more steps about technical implementation`)
   - The system will incorporate your feedback and generate a revised plan
3. **Auto-acceptance**: You can enable auto-acceptance to skip the review process:
   - Via API: Set `auto_accepted_plan: true` in your request
4. **API Integration**: When using the API, you can provide feedback through the `feedback` parameter:
```json
{
  "messages": [{ "role": "user", "content": "What is quantum computing?" }],
  "thread_id": "my_thread_id",
  "auto_accepted_plan": false,
  "feedback": "[EDIT PLAN] Include more about quantum algorithms"
}
```
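As an illustration, the same payload can be sent with curl. The endpoint path below is an assumption made for the sake of the example; check the backend's API documentation for the exact chat route.
```bash
# Illustrative request; verify the actual chat endpoint in the API docs.
curl --location 'http://localhost:8000/api/chat/stream' \
  --header 'Content-Type: application/json' \
  --data '{
    "messages": [{ "role": "user", "content": "What is quantum computing?" }],
    "thread_id": "my_thread_id",
    "auto_accepted_plan": false,
    "feedback": "[EDIT PLAN] Include more about quantum algorithms"
  }'
```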
### Command-Line Arguments
The application supports several command-line arguments to customize its behavior (a combined invocation is shown after the list):
- **query**: The research query to process (can be multiple words)
- **--interactive**: Run in interactive mode with built-in questions
- **--max_plan_iterations**: Maximum number of planning cycles (default: 1)
- **--max_step_num**: Maximum number of steps in a research plan (default: 3)
- **--debug**: Enable detailed debug logging
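For example, the planning limits and debug logging can be combined in a single run:
```bash
# Combine planning limits and debug logging in one invocation
uv run main.py --max_plan_iterations 2 --max_step_num 5 --debug \
  "How does quantum computing impact cryptography?"
```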
## FAQ
Please refer to [FAQ.md](docs/FAQ.md) for more details.
## License
This project is open source and available under the [MIT License](./LICENSE).
## Acknowledgments
DeerFlow is built upon the incredible work of the open-source community. We are deeply grateful to all the projects and contributors whose efforts made DeerFlow possible. Truly, we stand on the shoulders of giants.
We would like to extend our sincere thanks to the following projects for their invaluable contributions:
- **[LangChain](https://github.com/langchain-ai/langchain)**: Their exceptional framework powers our LLM interactions and chains, enabling seamless integration and functionality.
- **[LangGraph](https://github.com/langchain-ai/langgraph)**: Their innovative approach to multi-agent orchestration has been instrumental in enabling DeerFlow's sophisticated workflows.
These projects exemplify the transformative power of open-source collaboration, and we are proud to build upon their foundations.
### Key Contributors
A heartfelt thank you goes to the core authors of `DeerFlow`, whose vision, passion, and dedication brought this project to life:
- **[Daniel Walnut](https://github.com/hetaoBackend/)**
- **[Henry Li](https://github.com/magiccube/)**
Your unwavering commitment and expertise have been the driving force behind DeerFlow's success. We are honored to have you at the helm of this journey.
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=bytedance/deer-flow&type=Date)](https://star-history.com/#bytedance/deer-flow&Date)
+554
View File
@@ -0,0 +1,554 @@
# 🦌 DeerFlow
[![Python 3.12+](https://img.shields.io/badge/python-3.12+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![DeepWiki](https://img.shields.io/badge/DeepWiki-bytedance%2Fdeer--flow-blue.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACwAAAAyCAYAAAAnWDnqAAAAAXNSR0IArs4c6QAAA05JREFUaEPtmUtyEzEQhtWTQyQLHNak2AB7ZnyXZMEjXMGeK/AIi+QuHrMnbChYY7MIh8g01fJoopFb0uhhEqqcbWTp06/uv1saEDv4O3n3dV60RfP947Mm9/SQc0ICFQgzfc4CYZoTPAswgSJCCUJUnAAoRHOAUOcATwbmVLWdGoH//PB8mnKqScAhsD0kYP3j/Yt5LPQe2KvcXmGvRHcDnpxfL2zOYJ1mFwrryWTz0advv1Ut4CJgf5uhDuDj5eUcAUoahrdY/56ebRWeraTjMt/00Sh3UDtjgHtQNHwcRGOC98BJEAEymycmYcWwOprTgcB6VZ5JK5TAJ+fXGLBm3FDAmn6oPPjR4rKCAoJCal2eAiQp2x0vxTPB3ALO2CRkwmDy5WohzBDwSEFKRwPbknEggCPB/imwrycgxX2NzoMCHhPkDwqYMr9tRcP5qNrMZHkVnOjRMWwLCcr8ohBVb1OMjxLwGCvjTikrsBOiA6fNyCrm8V1rP93iVPpwaE+gO0SsWmPiXB+jikdf6SizrT5qKasx5j8ABbHpFTx+vFXp9EnYQmLx02h1QTTrl6eDqxLnGjporxl3NL3agEvXdT0WmEost648sQOYAeJS9Q7bfUVoMGnjo4AZdUMQku50McCcMWcBPvr0SzbTAFDfvJqwLzgxwATnCgnp4wDl6Aa+Ax283gghmj+vj7feE2KBBRMW3FzOpLOADl0Isb5587h/U4gGvkt5v60Z1VLG8BhYjbzRwyQZemwAd6cCR5/XFWLYZRIMpX39AR0tjaGGiGzLVyhse5C9RKC6ai42ppWPKiBagOvaYk8lO7DajerabOZP46Lby5wKjw1HCRx7p9sVMOWGzb/vA1hwiWc6jm3MvQDTogQkiqIhJV0nBQBTU+3okKCFDy9WwferkHjtxib7t3xIUQtHxnIwtx4mpg26/HfwVNVDb4oI9RHmx5WGelRVlrtiw43zboCLaxv46AZeB3IlTkwouebTr1y2NjSpHz68WNFjHvupy3q8TFn3Hos2IAk4Ju5dCo8B3wP7VPr/FGaKiG+T+v+TQqIrOqMTL1VdWV1DdmcbO8KXBz6esmYWYKPwDL5b5FA1a0hwapHiom0r/cKaoqr+27/XcrS5UwSMbQAAAABJRU5ErkJggg==)](https://deepwiki.com/bytedance/deer-flow)
<!-- DeepWiki badge generated by https://deepwiki.ryoppippi.com/ -->
[English](./README.md) | [简体中文](./README_zh.md) | [日本語](./README_ja.md) | [Deutsch](./README_de.md) | [Español](./README_es.md) | [Русский](./README_ru.md) | [Portuguese](./README_pt.md)
> Originated from open source, giving back to open source.
**DeerFlow** (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) is a community-driven deep research framework built on the impressive work of the open-source community. Our goal is to combine language models with specialized tools for tasks such as web search, crawling, and Python code execution, while giving back to the community that made this possible.
Please visit [our official website](https://deerflow.tech/) for more information.
## Demo
### Video
https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e
In this demo, we show how to use DeerFlow to:
- Seamlessly integrate with MCP services
- Conduct the deep research process and produce a comprehensive report with images
- Create podcast audio based on the generated report
### Replays
- [How tall is the Eiffel Tower compared to the tallest building?](https://deerflow.tech/chat?replay=eiffel-tower-vs-tallest-building)
- [What are the most popular repositories on GitHub?](https://deerflow.tech/chat?replay=github-top-trending-repo)
- [Write an article about Nanjing's traditional dishes](https://deerflow.tech/chat?replay=nanjing-traditional-dishes)
- [How to decorate a rental apartment?](https://deerflow.tech/chat?replay=rental-apartment-decoration)
- [Visit our official website to explore more replays.](https://deerflow.tech/#case-studies)
---
## 📑 Table of Contents
- [🚀 Quick Start](#quick-start)
- [🌟 Features](#features)
- [🏗️ Architecture](#architecture)
- [🛠️ Development](#development)
- [🐳 Docker](#docker)
- [🗣️ Text-to-Speech Integration](#text-to-speech-integration)
- [📚 Examples](#examples)
- [❓ FAQ](#faq)
- [📜 License](#license)
- [💖 Acknowledgments](#acknowledgments)
- [⭐ Star History](#star-history)
## Quick Start
DeerFlow is developed in Python and comes with a web UI written in Node.js. To ensure a smooth setup process, we recommend using the following tools:
### Recommended Tools
- **[`uv`](https://docs.astral.sh/uv/getting-started/installation/):**
  Simplifies Python environment and dependency management. `uv` automatically creates a virtual environment in the root directory and installs all required packages for you, with no need to install Python environments manually.
- **[`nvm`](https://github.com/nvm-sh/nvm):**
  Easily manage multiple versions of the Node.js runtime.
- **[`pnpm`](https://pnpm.io/installation):**
  Install and manage the dependencies of the Node.js project.
### Environment Requirements
Make sure your system meets the following minimum requirements:
- **[Python](https://www.python.org/downloads/):** Version `3.12+`
- **[Node.js](https://nodejs.org/en/download/):** Version `22+`
### Installation
```bash
# Clone the repository
git clone https://github.com/bytedance/deer-flow.git
cd deer-flow
# Install dependencies; uv takes care of the Python interpreter and venv creation, and installs the required packages
uv sync
# Configure .env with your API keys
# Tavily: https://app.tavily.com/home
# Brave_SEARCH: https://brave.com/search/api/
# volcengine TTS: Add your TTS credentials if you have them
cp .env.example .env
# See the 'Supported Search Engines' and 'Text-to-Speech Integration' sections below for all available options
# Configure conf.yaml for your LLM model and API keys
# Please refer to 'docs/configuration_guide.md' for more details
cp conf.yaml.example conf.yaml
# Install marp for PPT generation
# https://github.com/marp-team/marp-cli?tab=readme-ov-file#use-package-manager
brew install marp-cli
```
Optionally, install the web UI dependencies via [pnpm](https://pnpm.io/installation):
```bash
cd deer-flow/web
pnpm install
```
### Configurations
Please refer to the [Configuration Guide](docs/configuration_guide.md) for more details.
> [!NOTE]
> Before you start the project, read the guide carefully and update the configurations to match your specific settings and requirements.
### Console UI
The quickest way to run the project is to use the console UI.
```bash
# Run the project in a bash-like shell
uv run main.py
```
### Web UI
This project also includes a web UI, offering a more dynamic and engaging interactive experience.
> [!NOTE]
> You need to install the web UI dependencies first.
```bash
# Run both the backend and frontend servers in development mode
# On macOS/Linux
./bootstrap.sh -d
# On Windows
bootstrap.bat -d
```
Open your browser and visit [`http://localhost:3000`](http://localhost:3000) to explore the web UI.
Explore more details in the [`web`](./web/) directory.
## Supported Search Engines
DeerFlow supports multiple search engines that can be configured in your `.env` file via the `SEARCH_API` variable:
- **Tavily** (default): A specialized search API for AI applications
  - Requires `TAVILY_API_KEY` in your `.env` file
  - Sign up at: https://app.tavily.com/home
- **DuckDuckGo**: Privacy-focused search engine
  - No API key required
- **Brave Search**: Privacy-focused search engine with advanced features
  - Requires `BRAVE_SEARCH_API_KEY` in your `.env` file
  - Sign up at: https://brave.com/search/api/
- **Arxiv**: Scientific paper search for academic research
  - No API key required
  - Specialized in scientific and academic papers
To configure your preferred search engine, set the `SEARCH_API` variable in your `.env` file:
```bash
# Choose one: tavily, duckduckgo, brave_search, arxiv
SEARCH_API=tavily
```
## Features
### Core Capabilities
- 🤖 **LLM Integration**
  - Supports the integration of most models through [litellm](https://docs.litellm.ai/docs/providers).
  - Support for open-source models such as Qwen
  - OpenAI-compatible API interface
  - Multi-tier LLM system for tasks of varying complexity; each tier can also be configured via environment variables (example below)
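As a sketch, individual model tiers can be supplied through environment variables following the `{TYPE}_MODEL__{key}` convention, which take precedence over the matching `conf.yaml` entries (values below are placeholders):
```bash
# .env: per-tier LLM settings via environment variables (placeholder values)
BASIC_MODEL__base_url=https://api.deepseek.com
BASIC_MODEL__model=deepseek-chat
BASIC_MODEL__api_key=YOUR_API_KEY
```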
### Tools and MCP Integrations
- 🔍 **Search and Retrieval**
  - Web search via Tavily, Brave Search, and more
  - Crawling with Jina
  - Advanced content extraction
- 🔗 **Seamless MCP Integration**
  - Expands capabilities for private domain access, knowledge graphs, web browsing, and more
  - Facilitates the integration of diverse research tools and methodologies
### Human Collaboration
- 🧠 **Human-in-the-loop**
  - Supports interactive modification of research plans using natural language
  - Supports auto-acceptance of research plans
- 📝 **Report Post-Editing**
  - Supports Notion-like block editing
  - Allows AI refinements, including polishing, sentence shortening, and expansion
  - Powered by [tiptap](https://tiptap.dev/)
### Content Creation
- 🎙️ **Podcast and Presentation Generation**
  - AI-powered podcast script generation and audio synthesis
  - Automated creation of simple PowerPoint presentations
  - Customizable templates for tailored content
## Architecture
DeerFlow implements a modular multi-agent system architecture designed for automated research and code analysis. The system is built on LangGraph, enabling a flexible state-based workflow where components communicate through a well-defined message-passing system.
![Architecture Diagram](./assets/architecture.png)
> See it live at [deerflow.tech](https://deerflow.tech/#multi-agent-architecture)
The system employs a streamlined workflow with the following components:
1. **Coordinator**: The entry point that manages the workflow lifecycle
   - Initiates the research process based on user input
   - Delegates tasks to the planner when necessary
   - Acts as the primary interface between the user and the system
2. **Planner**: Strategic component for task decomposition and planning
   - Analyzes research objectives and creates structured execution plans
   - Determines whether enough context is available or additional research is required
   - Manages the research flow and decides when to generate the final report
3. **Research Team**: A set of specialized agents that execute the plan:
   - **Researcher**: Conducts web searches and gathers information using tools such as search engines, crawling, and even MCP services.
   - **Coder**: Handles code analysis, execution, and technical tasks with the Python REPL tool.
   Each agent has access to tools optimized for its role and operates within the LangGraph framework.
4. **Reporter**: Final-stage processor for the research results
   - Aggregates the findings of the research team
   - Processes and structures the collected information
   - Generates comprehensive research reports
## Text-to-Speech Integration
DeerFlow now includes a Text-to-Speech (TTS) feature that lets you convert research reports to speech. The feature uses the volcengine TTS API to generate high-quality audio from text; parameters such as speed, volume, and pitch are also customizable.
### Using the TTS API
You can access the TTS functionality through the `/api/tts` endpoint:
```bash
# Example API call using curl
curl --location 'http://localhost:8000/api/tts' \
--header 'Content-Type: application/json' \
--data '{
"text": "This is a test of the text-to-speech functionality.",
"speed_ratio": 1.0,
"volume_ratio": 1.0,
"pitch_ratio": 1.0
}' \
--output speech.mp3
```
## Development
### Testing
Run the test suite:
```bash
# Run all tests
make test
# Run a specific test file
pytest tests/integration/test_workflow.py
# Run with coverage
make coverage
```
### Code Quality
```bash
# Run linting
make lint
# Format code
make format
```
### Debugging with LangGraph Studio
DeerFlow uses LangGraph for its workflow architecture. You can use LangGraph Studio to debug and visualize the workflow in real time.
#### Running LangGraph Studio Locally
DeerFlow includes a `langgraph.json` configuration file that defines the graph structure and dependencies for LangGraph Studio. The file points to the workflow graphs defined in the project and automatically loads environment variables from the `.env` file.
##### Mac
```bash
# Install the uv package manager if you don't have it
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install dependencies and start the LangGraph server
uvx --refresh --from "langgraph-cli[inmem]" --with-editable . --python 3.12 langgraph dev --allow-blocking
```
##### Windows / Linux
```bash
# Install dependencies
pip install -e .
pip install -U "langgraph-cli[inmem]"
# Start the LangGraph server
langgraph dev
```
After starting the LangGraph server, you will see several URLs in your terminal:
- API: http://127.0.0.1:2024
- Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
- API Docs: http://127.0.0.1:2024/docs
Open the Studio UI link in your browser to access the debugging interface.
#### Using LangGraph Studio
In the Studio UI, you can:
1. Visualize the workflow graph and see how its components connect
2. Trace execution in real time and see how data flows through the system
3. Inspect the state at each step of the workflow
4. Debug issues by examining the inputs and outputs of each component
5. Provide feedback during the planning phase to refine research plans
When you submit a research topic in the Studio UI, you will be able to see the entire workflow execution, including:
- The planning phase where the research plan is created
- The feedback loop where you can modify the plan
- The research and writing phases for each section
- The final report generation
### Enabling LangSmith Tracing
DeerFlow supports LangSmith tracing to help you debug and monitor your workflows. To enable LangSmith tracing:
1. Make sure your `.env` file has the following configuration (see `.env.example`):
```bash
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
LANGSMITH_API_KEY="xxx"
LANGSMITH_PROJECT="xxx"
```
2. Start tracing and visualize the graph locally in LangSmith by running:
```bash
langgraph dev
```
This enables trace visualization in LangGraph Studio and sends your traces to LangSmith for monitoring and analysis.
## Docker
You can also run this project with Docker.
First, read the [configuration guide](docs/configuration_guide.md) and make sure the `.env` and `conf.yaml` files are ready.
Next, build a Docker image of your own web server:
```bash
docker build -t deer-flow-api .
```
Finally, start a Docker container running the web server:
```bash
# Replace deer-flow-api-app with your preferred container name
docker run -d -t -p 8000:8000 --env-file .env --name deer-flow-api-app deer-flow-api
# Stop the server
docker stop deer-flow-api-app
```
### Docker Compose (includes both backend and frontend)
DeerFlow provides a docker-compose setup to easily run the backend and frontend together:
```bash
# Build the Docker images
docker compose build
# Start the servers
docker compose up
```
## Examples
The following examples demonstrate DeerFlow's capabilities:
### Research Reports
1. **OpenAI Sora Report** - Analysis of OpenAI's Sora AI tool
   - Discusses features, access, prompt engineering, limitations, and ethical considerations
   - [View the full report](examples/openai_sora_report.md)
2. **Google Agent-to-Agent Protocol Report** - Overview of Google's Agent-to-Agent (A2A) protocol
   - Discusses its role in AI agent communication and its relationship to Anthropic's Model Context Protocol (MCP)
   - [View the full report](examples/what_is_agent_to_agent_protocol.md)
3. **What is MCP?** - A comprehensive analysis of the term "MCP" across different contexts
   - Explores Model Context Protocol in AI, Monocalcium Phosphate in chemistry, and Micro-channel Plate in electronics
   - [View the full report](examples/what_is_mcp.md)
4. **Bitcoin Price Fluctuations** - Analysis of recent Bitcoin price movements
   - Examines market trends, regulatory influences, and technical indicators
   - Provides recommendations based on historical data
   - [View the full report](examples/bitcoin_price_fluctuation.md)
5. **What is LLM?** - An in-depth exploration of large language models
   - Discusses architecture, training, applications, and ethical considerations
   - [View the full report](examples/what_is_llm.md)
6. **How to Use Claude for Deep Research?** - Best practices and workflows for using Claude in deep research
   - Covers prompt engineering, data analysis, and integration with other tools
   - [View the full report](examples/how_to_use_claude_deep_research.md)
7. **AI Adoption in Healthcare: Influencing Factors** - Analysis of the factors driving AI adoption in healthcare
   - Discusses AI technologies, data quality, ethical considerations, economic evaluations, organizational readiness, and digital infrastructure
   - [View the full report](examples/AI_adoption_in_healthcare.md)
8. **Quantum Computing Impact on Cryptography** - Analysis of the impact of quantum computing on cryptography
   - Discusses the vulnerabilities of classical cryptography, post-quantum cryptography, and quantum-resistant cryptographic solutions
   - [View the full report](examples/Quantum_Computing_Impact_on_Cryptography.md)
9. **Cristiano Ronaldo's Performance Highlights** - Analysis of Cristiano Ronaldo's standout performances
   - Discusses his career achievements, international goals, and performances in various matches
   - [View the full report](examples/Cristiano_Ronaldo's_Performance_Highlights.md)
To run these examples or create your own research reports, you can use the following commands:
```bash
# Run with a specific query
uv run main.py "What factors are influencing AI adoption in healthcare?"
# Run with custom planning parameters
uv run main.py --max_plan_iterations 3 "How does quantum computing impact cryptography?"
# Run in interactive mode with built-in questions
uv run main.py --interactive
# Or run with a basic interactive prompt
uv run main.py
# View all available options
uv run main.py --help
```
### Interactive Mode
The application now supports an interactive mode with built-in questions in both English and Chinese:
1. Launch the interactive mode:
```bash
uv run main.py --interactive
```
2. Select your preferred language (English or 中文)
3. Choose from a list of built-in questions or select the option to ask your own question
4. The system will process your question and generate a comprehensive research report
### Human in the Loop
DeerFlow includes a human-in-the-loop mechanism that allows you to review, edit, and approve research plans before they are executed:
1. **Plan Review**: When human-in-the-loop is enabled, the system presents the generated research plan for your review before execution
2. **Providing Feedback**: You can:
   - Accept the plan by responding with `[ACCEPTED]`
   - Edit the plan by providing feedback (e.g., `[EDIT PLAN] Add more steps about technical implementation`)
   - The system will incorporate your feedback and generate a revised plan
3. **Auto-acceptance**: You can enable auto-acceptance to skip the review process:
   - Via API: Set `auto_accepted_plan: true` in your request
4. **API Integration**: When using the API, you can provide feedback through the `feedback` parameter:
```json
{
"messages": [{ "role": "user", "content": "What is quantum computing?" }],
"thread_id": "my_thread_id",
"auto_accepted_plan": false,
"feedback": "[EDIT PLAN] Include more about quantum algorithms"
}
```
### Command-Line Arguments
The application supports several command-line arguments to customize its behavior:
- **query**: The research query to process (can be multiple words)
- **--interactive**: Run in interactive mode with built-in questions
- **--max_plan_iterations**: Maximum number of planning cycles (default: 1)
- **--max_step_num**: Maximum number of steps in a research plan (default: 3)
- **--debug**: Enable detailed debug logging
## FAQ
Please refer to [FAQ.md](docs/FAQ.md) for more details.
## License
This project is open source and available under the [MIT License](./LICENSE).
## Acknowledgments
DeerFlow is built upon the incredible work of the open-source community. We are deeply grateful to all the projects and contributors whose efforts made DeerFlow possible. Truly, we stand on the shoulders of giants.
We would like to express our sincere appreciation to the following projects for their invaluable contributions:
- **[LangChain](https://github.com/langchain-ai/langchain)**: Their exceptional framework powers our LLM interactions and chains, enabling seamless integration and functionality.
- **[LangGraph](https://github.com/langchain-ai/langgraph)**: Their innovative approach to multi-agent orchestration has been instrumental in enabling DeerFlow's sophisticated workflows.
These projects exemplify the transformative power of open-source collaboration, and we are proud to build upon their foundations.
### Key Contributors
A heartfelt thank you goes to the core authors of `DeerFlow`, whose vision, passion, and dedication brought this project to life:
- **[Daniel Walnut](https://github.com/hetaoBackend/)**
- **[Henry Li](https://github.com/magiccube/)**
Your unwavering commitment and expertise have been the driving force behind DeerFlow's success. We are honored to have you at the helm of this journey.
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=bytedance/deer-flow&type=Date)](https://star-history.com/#bytedance/deer-flow&Date)
+27 -1
View File
@@ -3,7 +3,7 @@
[![Python 3.12+](https://img.shields.io/badge/python-3.12+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[English](./README.md) | [简体中文](./README_zh.md) | [日本語](./README_ja.md) | [Deutsch](./README_de.md)
[English](./README.md) | [简体中文](./README_zh.md) | [日本語](./README_ja.md) | [Deutsch](./README_de.md) | [Español](./README_es.md) | [Русский](./README_ru.md) |[Portuguese](./README_pt.md)
> Originated from open source, giving back to open source.
@@ -31,6 +31,13 @@ https://github.com/user-attachments/assets/f3786598-1f2a-4d07-919e-8b99dfa1de3e
- [How to decorate a rental apartment?](https://deerflow.tech/chat?replay=rental-apartment-decoration)
- [Visit our official website to explore more replay examples.](https://deerflow.tech/#case-studies)
### VolcEngine
DeerFlow is now officially available in the [VolcEngine FaaS application center](https://console.volcengine.com/vefaas/region:vefaas+cn-beijing/market). You can try it online through the experience link to get a direct feel for its capabilities and ease of use. To meet different deployment needs, DeerFlow also supports one-click deployment on VolcEngine: click the deployment link to complete the deployment quickly and start an efficient research journey. [Come and check it out](https://console.volcengine.com/vefaas/region:vefaas+cn-beijing/market)~
<img width="1800" alt="截屏2025-06-12 13 25 12" src="https://github.com/user-attachments/assets/73c15966-6b79-4dc0-8803-efdaf7c4015e" />
---
## 📑 Table of Contents
@@ -322,6 +329,25 @@ langgraph dev
- The research and writing phases for each section
- Final report generation
### Enabling LangSmith Tracing
DeerFlow supports LangSmith tracing to help you debug and monitor your workflows. To enable LangSmith tracing:
1. Make sure your `.env` file has the following configuration (see `.env.example`):
```bash
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
LANGSMITH_API_KEY="xxx"
LANGSMITH_PROJECT="xxx"
```
2. Start LangSmith tracing locally by running:
```bash
langgraph dev
```
This enables trace visualization in LangGraph Studio and sends your traces to LangSmith for monitoring and analysis.
## Docker
You can also run this project with Docker.
+3 -2
View File
@@ -10,9 +10,10 @@ IF "%MODE%"=="development" GOTO DEV
:PROD
echo Starting DeerFlow in [PRODUCTION] mode...
uv run server.py
start uv run server.py
cd web
pnpm start
start pnpm start
REM Wait for user to close
GOTO END
:DEV
+4 -2
View File
@@ -11,6 +11,8 @@ if [ "$1" = "--dev" -o "$1" = "-d" -o "$1" = "dev" -o "$1" = "development" ]; th
wait
else
echo -e "Starting DeerFlow in [PRODUCTION] mode...\n"
uv run server.py
cd web && pnpm start
uv run server.py & SERVER_PID=$$!
cd web && pnpm start & WEB_PID=$$!
trap "kill $$SERVER_PID $$WEB_PID" SIGINT SIGTERM
wait
fi
+1 -1
View File
@@ -49,7 +49,7 @@ BASIC_MODEL:
BASIC_MODEL:
base_url: "https://api.deepseek.com"
model: "deepseek-chat"
api_key: YOU_API_KEY
api_key: YOUR_API_KEY
# An example of Google Gemini models using OpenAI-Compatible interface
BASIC_MODEL:
+4
View File
@@ -37,6 +37,7 @@ dependencies = [
[project.optional-dependencies]
dev = [
"black>=24.2.0",
"langgraph-cli[inmem]>=0.2.10",
]
test = [
"pytest>=7.4.0",
@@ -52,6 +53,9 @@ filterwarnings = [
"ignore::UserWarning",
]
[tool.coverage.report]
fail_under = 25
[tool.hatch.build.targets.wheel]
packages = ["src"]
+25 -11
View File
@@ -7,7 +7,8 @@ Server script for running the DeerFlow API.
import argparse
import logging
import signal
import sys
import uvicorn
# Configure logging
@@ -18,6 +19,17 @@ logging.basicConfig(
logger = logging.getLogger(__name__)
def handle_shutdown(signum, frame):
"""Handle graceful shutdown on SIGTERM/SIGINT"""
logger.info("Received shutdown signal. Starting graceful shutdown...")
sys.exit(0)
# Register signal handlers
signal.signal(signal.SIGTERM, handle_shutdown)
signal.signal(signal.SIGINT, handle_shutdown)
if __name__ == "__main__":
# Parse command line arguments
parser = argparse.ArgumentParser(description="Run the DeerFlow API server")
@@ -50,16 +62,18 @@ if __name__ == "__main__":
# Determine reload setting
reload = False
# Command line arguments override defaults
if args.reload:
reload = True
logger.info("Starting DeerFlow API server")
uvicorn.run(
"src.server:app",
host=args.host,
port=args.port,
reload=reload,
log_level=args.log_level,
)
try:
logger.info(f"Starting DeerFlow API server on {args.host}:{args.port}")
uvicorn.run(
"src.server:app",
host=args.host,
port=args.port,
reload=reload,
log_level=args.log_level,
)
except Exception as e:
logger.error(f"Failed to start server: {str(e)}")
sys.exit(1)
+2 -2
View File
@@ -1,6 +1,6 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
from .agents import research_agent, coder_agent
from .agents import create_agent
__all__ = ["research_agent", "coder_agent"]
__all__ = ["create_agent"]
-13
View File
@@ -4,12 +4,6 @@
from langgraph.prebuilt import create_react_agent
from src.prompts import apply_prompt_template
from src.tools import (
crawl_tool,
python_repl_tool,
web_search_tool,
)
from src.llms.llm import get_llm_by_type
from src.config.agents import AGENT_LLM_MAP
@@ -23,10 +17,3 @@ def create_agent(agent_name: str, agent_type: str, tools: list, prompt_template:
tools=tools,
prompt=lambda state: apply_prompt_template(prompt_template, state),
)
# Create agents using the factory function
research_agent = create_agent(
"researcher", "researcher", [web_search_tool, crawl_tool], "researcher"
)
coder_agent = create_agent("coder", "coder", [python_repl_tool], "coder")
+1 -2
View File
@@ -1,7 +1,7 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
from .tools import SEARCH_MAX_RESULTS, SELECTED_SEARCH_ENGINE, SearchEngine
from .tools import SELECTED_SEARCH_ENGINE, SearchEngine
from .loader import load_yaml_config
from .questions import BUILT_IN_QUESTIONS, BUILT_IN_QUESTIONS_ZH_CN
@@ -42,7 +42,6 @@ __all__ = [
# Other configurations
"TEAM_MEMBERS",
"TEAM_MEMBER_CONFIGRATIONS",
"SEARCH_MAX_RESULTS",
"SELECTED_SEARCH_ENGINE",
"SearchEngine",
"BUILT_IN_QUESTIONS",
+1
View File
@@ -16,4 +16,5 @@ AGENT_LLM_MAP: dict[str, LLMType] = {
"podcast_script_writer": "basic",
"ppt_composer": "basic",
"prose_writer": "basic",
"prompt_enhancer": "basic",
}
+9 -1
View File
@@ -2,19 +2,27 @@
# SPDX-License-Identifier: MIT
import os
from dataclasses import dataclass, fields
from dataclasses import dataclass, field, fields
from typing import Any, Optional
from langchain_core.runnables import RunnableConfig
from src.rag.retriever import Resource
from src.config.report_style import ReportStyle
@dataclass(kw_only=True)
class Configuration:
"""The configurable fields."""
resources: list[Resource] = field(
default_factory=list
) # Resources to be used for the research
max_plan_iterations: int = 1 # Maximum number of plan iterations
max_step_num: int = 3 # Maximum number of steps in a plan
max_search_results: int = 3 # Maximum number of search results
mcp_settings: dict = None # MCP settings, including dynamic loaded tools
report_style: str = ReportStyle.ACADEMIC.value # Report style
@classmethod
def from_runnable_config(
+3 -1
View File
@@ -12,12 +12,14 @@ def replace_env_vars(value: str) -> str:
return value
if value.startswith("$"):
env_var = value[1:]
return os.getenv(env_var, value)
return os.getenv(env_var, env_var)
return value
def process_dict(config: Dict[str, Any]) -> Dict[str, Any]:
"""Recursively process dictionary to replace environment variables."""
if not config:
return {}
result = {}
for key, value in config.items():
if isinstance(value, dict):
+8
View File
@@ -0,0 +1,8 @@
import enum
class ReportStyle(enum.Enum):
ACADEMIC = "academic"
POPULAR_SCIENCE = "popular_science"
NEWS = "news"
SOCIAL_MEDIA = "social_media"
+7 -1
View File
@@ -17,4 +17,10 @@ class SearchEngine(enum.Enum):
# Tool configuration
SELECTED_SEARCH_ENGINE = os.getenv("SEARCH_API", SearchEngine.TAVILY.value)
SEARCH_MAX_RESULTS = 3
class RAGProvider(enum.Enum):
RAGFLOW = "ragflow"
SELECTED_RAG_PROVIDER = os.getenv("RAG_PROVIDER")
+3 -4
View File
@@ -3,8 +3,7 @@
from .article import Article
from .crawler import Crawler
from .jina_client import JinaClient
from .readability_extractor import ReadabilityExtractor
__all__ = [
"Article",
"Crawler",
]
__all__ = ["Article", "Crawler", "JinaClient", "ReadabilityExtractor"]
-10
View File
@@ -26,13 +26,3 @@ class Crawler:
article = extractor.extract_article(html)
article.url = url
return article
if __name__ == "__main__":
if len(sys.argv) == 2:
url = sys.argv[1]
else:
url = "https://fintel.io/zh-hant/s/br/nvdc34"
crawler = Crawler()
article = crawler.crawl(url)
print(article.to_markdown())
+23
View File
@@ -3,6 +3,7 @@
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from src.prompts.planner_model import StepType
from .types import State
from .nodes import (
@@ -17,6 +18,22 @@ from .nodes import (
)
def continue_to_running_research_team(state: State):
current_plan = state.get("current_plan")
if not current_plan or not current_plan.steps:
return "planner"
if all(step.execution_res for step in current_plan.steps):
return "planner"
for step in current_plan.steps:
if not step.execution_res:
break
if step.step_type and step.step_type == StepType.RESEARCH:
return "researcher"
if step.step_type and step.step_type == StepType.PROCESSING:
return "coder"
return "planner"
def _build_base_graph():
"""Build and return the base state graph with all nodes and edges."""
builder = StateGraph(State)
@@ -29,6 +46,12 @@ def _build_base_graph():
builder.add_node("researcher", researcher_node)
builder.add_node("coder", coder_node)
builder.add_node("human_feedback", human_feedback_node)
builder.add_edge("background_investigator", "planner")
builder.add_conditional_edges(
"research_team",
continue_to_running_research_team,
["planner", "researcher", "coder"],
)
builder.add_edge("reporter", END)
return builder
+120 -63
View File
@@ -3,6 +3,7 @@
import json
import logging
import os
from typing import Annotated, Literal
from langchain_core.messages import AIMessage, HumanMessage
@@ -11,31 +12,31 @@ from langchain_core.tools import tool
from langgraph.types import Command, interrupt
from langchain_mcp_adapters.client import MultiServerMCPClient
from src.agents.agents import coder_agent, research_agent, create_agent
from src.agents import create_agent
from src.tools.search import LoggedTavilySearch
from src.tools import (
crawl_tool,
web_search_tool,
get_web_search_tool,
get_retriever_tool,
python_repl_tool,
)
from src.config.agents import AGENT_LLM_MAP
from src.config.configuration import Configuration
from src.llms.llm import get_llm_by_type
from src.prompts.planner_model import Plan, StepType
from src.prompts.planner_model import Plan
from src.prompts.template import apply_prompt_template
from src.utils.json_utils import repair_json_output
from .types import State
from ..config import SEARCH_MAX_RESULTS, SELECTED_SEARCH_ENGINE, SearchEngine
from ..config import SELECTED_SEARCH_ENGINE, SearchEngine
logger = logging.getLogger(__name__)
@tool
def handoff_to_planner(
task_title: Annotated[str, "The title of the task to be handed off."],
research_topic: Annotated[str, "The topic of the research task to be handed off."],
locale: Annotated[str, "The user's detected language locale (e.g., en-US, zh-CN)."],
):
"""Handoff to planner agent to do plan."""
@@ -44,33 +45,37 @@ def handoff_to_planner(
return
def background_investigation_node(state: State) -> Command[Literal["planner"]]:
def background_investigation_node(state: State, config: RunnableConfig):
logger.info("background investigation node is running.")
query = state["messages"][-1].content
if SELECTED_SEARCH_ENGINE == SearchEngine.TAVILY:
searched_content = LoggedTavilySearch(max_results=SEARCH_MAX_RESULTS).invoke(
{"query": query}
)
background_investigation_results = None
configurable = Configuration.from_runnable_config(config)
query = state.get("research_topic")
background_investigation_results = None
if SELECTED_SEARCH_ENGINE == SearchEngine.TAVILY.value:
searched_content = LoggedTavilySearch(
max_results=configurable.max_search_results
).invoke(query)
if isinstance(searched_content, list):
background_investigation_results = [
{"title": elem["title"], "content": elem["content"]}
for elem in searched_content
f"## {elem['title']}\n\n{elem['content']}" for elem in searched_content
]
return {
"background_investigation_results": "\n\n".join(
background_investigation_results
)
}
else:
logger.error(
f"Tavily search returned malformed response: {searched_content}"
)
else:
background_investigation_results = web_search_tool.invoke(query)
return Command(
update={
"background_investigation_results": json.dumps(
background_investigation_results, ensure_ascii=False
)
},
goto="planner",
)
background_investigation_results = get_web_search_tool(
configurable.max_search_results
).invoke(query)
return {
"background_investigation_results": json.dumps(
background_investigation_results, ensure_ascii=False
)
}
def planner_node(
@@ -82,10 +87,8 @@ def planner_node(
plan_iterations = state["plan_iterations"] if state.get("plan_iterations", 0) else 0
messages = apply_prompt_template("planner", state, configurable)
if (
plan_iterations == 0
and state.get("enable_background_investigation")
and state.get("background_investigation_results")
if state.get("enable_background_investigation") and state.get(
"background_investigation_results"
):
messages += [
{
@@ -201,10 +204,11 @@ def human_feedback_node(
def coordinator_node(
state: State,
state: State, config: RunnableConfig
) -> Command[Literal["planner", "background_investigator", "__end__"]]:
"""Coordinator node that communicate with customers."""
logger.info("Coordinator talking.")
configurable = Configuration.from_runnable_config(config)
messages = apply_prompt_template("coordinator", state)
response = (
get_llm_by_type(AGENT_LLM_MAP["coordinator"])
@@ -215,6 +219,7 @@ def coordinator_node(
goto = "__end__"
locale = state.get("locale", "en-US") # Default locale if not specified
research_topic = state.get("research_topic", "")
if len(response.tool_calls) > 0:
goto = "planner"
@@ -225,8 +230,11 @@ def coordinator_node(
for tool_call in response.tool_calls:
if tool_call.get("name", "") != "handoff_to_planner":
continue
if tool_locale := tool_call.get("args", {}).get("locale"):
locale = tool_locale
if tool_call.get("args", {}).get("locale") and tool_call.get(
"args", {}
).get("research_topic"):
locale = tool_call.get("args", {}).get("locale")
research_topic = tool_call.get("args", {}).get("research_topic")
break
except Exception as e:
logger.error(f"Error processing tool calls: {e}")
@@ -237,14 +245,19 @@ def coordinator_node(
logger.debug(f"Coordinator response: {response}")
return Command(
update={"locale": locale},
update={
"locale": locale,
"research_topic": research_topic,
"resources": configurable.resources,
},
goto=goto,
)
def reporter_node(state: State):
def reporter_node(state: State, config: RunnableConfig):
"""Reporter node that write a final report."""
logger.info("Reporter write final report")
configurable = Configuration.from_runnable_config(config)
current_plan = state.get("current_plan")
input_ = {
"messages": [
@@ -254,7 +267,7 @@ def reporter_node(state: State):
],
"locale": state.get("locale", "en-US"),
}
invoke_messages = apply_prompt_template("reporter", input_)
invoke_messages = apply_prompt_template("reporter", input_, configurable)
observations = state.get("observations", [])
# Add a reminder about the new report format, citation style, and table usage
@@ -280,24 +293,10 @@ def reporter_node(state: State):
return {"final_report": response_content}
def research_team_node(
state: State,
) -> Command[Literal["planner", "researcher", "coder"]]:
def research_team_node(state: State):
"""Research team node that collaborates on tasks."""
logger.info("Research team is collaborating on tasks.")
current_plan = state.get("current_plan")
if not current_plan or not current_plan.steps:
return Command(goto="planner")
if all(step.execution_res for step in current_plan.steps):
return Command(goto="planner")
for step in current_plan.steps:
if not step.execution_res:
break
if step.step_type and step.step_type == StepType.RESEARCH:
return Command(goto="researcher")
if step.step_type and step.step_type == StepType.PROCESSING:
return Command(goto="coder")
return Command(goto="planner")
pass
async def _execute_agent_step(
@@ -308,23 +307,53 @@ async def _execute_agent_step(
observations = state.get("observations", [])
# Find the first unexecuted step
current_step = None
completed_steps = []
for step in current_plan.steps:
if not step.execution_res:
current_step = step
break
else:
completed_steps.append(step)
logger.info(f"Executing step: {step.title}")
if not current_step:
logger.warning("No unexecuted step found")
return Command(goto="research_team")
# Prepare the input for the agent
logger.info(f"Executing step: {current_step.title}, agent: {agent_name}")
# Format completed steps information
completed_steps_info = ""
if completed_steps:
completed_steps_info = "# Existing Research Findings\n\n"
for i, step in enumerate(completed_steps):
completed_steps_info += f"## Existing Finding {i + 1}: {step.title}\n\n"
completed_steps_info += f"<finding>\n{step.execution_res}\n</finding>\n\n"
# Prepare the input for the agent with completed steps info
agent_input = {
"messages": [
HumanMessage(
content=f"#Task\n\n##title\n\n{step.title}\n\n##description\n\n{step.description}\n\n##locale\n\n{state.get('locale', 'en-US')}"
content=f"{completed_steps_info}# Current Task\n\n## Title\n\n{current_step.title}\n\n## Description\n\n{current_step.description}\n\n## Locale\n\n{state.get('locale', 'en-US')}"
)
]
}
# Add citation reminder for researcher agent
if agent_name == "researcher":
if state.get("resources"):
resources_info = "**The user mentioned the following resource files:**\n\n"
for resource in state.get("resources"):
resources_info += f"- {resource.title} ({resource.description})\n"
agent_input["messages"].append(
HumanMessage(
content=resources_info
+ "\n\n"
+ "You MUST use the **local_search_tool** to retrieve the information from the resource files.",
)
)
agent_input["messages"].append(
HumanMessage(
content="IMPORTANT: DO NOT include inline citations in the text. Instead, track all sources and include a References section at the end using link reference format. Include an empty line between each citation for better readability. Use this format for each reference:\n- [Source Title](URL)\n\n- [Another Source](URL)",
@@ -333,15 +362,40 @@ async def _execute_agent_step(
)
# Invoke the agent
result = await agent.ainvoke(input=agent_input)
default_recursion_limit = 25
try:
env_value_str = os.getenv("AGENT_RECURSION_LIMIT", str(default_recursion_limit))
parsed_limit = int(env_value_str)
if parsed_limit > 0:
recursion_limit = parsed_limit
logger.info(f"Recursion limit set to: {recursion_limit}")
else:
logger.warning(
f"AGENT_RECURSION_LIMIT value '{env_value_str}' (parsed as {parsed_limit}) is not positive. "
f"Using default value {default_recursion_limit}."
)
recursion_limit = default_recursion_limit
except ValueError:
raw_env_value = os.getenv("AGENT_RECURSION_LIMIT")
logger.warning(
f"Invalid AGENT_RECURSION_LIMIT value: '{raw_env_value}'. "
f"Using default value {default_recursion_limit}."
)
recursion_limit = default_recursion_limit
logger.info(f"Agent input: {agent_input}")
result = await agent.ainvoke(
input=agent_input, config={"recursion_limit": recursion_limit}
)
# Process the result
response_content = result["messages"][-1].content
logger.debug(f"{agent_name.capitalize()} full response: {response_content}")
# Update the step with the execution result
step.execution_res = response_content
logger.info(f"Step '{step.title}' execution completed by {agent_name}")
current_step.execution_res = response_content
logger.info(f"Step '{current_step.title}' execution completed by {agent_name}")
return Command(
update={
@@ -361,7 +415,6 @@ async def _setup_and_execute_agent_step(
state: State,
config: RunnableConfig,
agent_type: str,
default_agent,
default_tools: list,
) -> Command[Literal["research_team"]]:
"""Helper function to set up an agent with appropriate tools and execute a step.
@@ -375,7 +428,6 @@ async def _setup_and_execute_agent_step(
state: The current state
config: The runnable config
agent_type: The type of agent ("researcher" or "coder")
default_agent: The default agent to use if no MCP servers are configured
default_tools: The default tools to add to the agent
Returns:
@@ -413,8 +465,9 @@ async def _setup_and_execute_agent_step(
agent = create_agent(agent_type, agent_type, loaded_tools, agent_type)
return await _execute_agent_step(state, agent, agent_type)
else:
# Use default agent if no MCP servers are configured
return await _execute_agent_step(state, default_agent, agent_type)
# Use default tools if no MCP servers are configured
agent = create_agent(agent_type, agent_type, default_tools, agent_type)
return await _execute_agent_step(state, agent, agent_type)
async def researcher_node(
@@ -422,12 +475,17 @@ async def researcher_node(
) -> Command[Literal["research_team"]]:
"""Researcher node that do research"""
logger.info("Researcher node is researching.")
configurable = Configuration.from_runnable_config(config)
tools = [get_web_search_tool(configurable.max_search_results), crawl_tool]
retriever_tool = get_retriever_tool(state.get("resources", []))
if retriever_tool:
tools.insert(0, retriever_tool)
logger.info(f"Researcher tools: {tools}")
return await _setup_and_execute_agent_step(
state,
config,
"researcher",
research_agent,
[web_search_tool, crawl_tool],
tools,
)
@@ -440,6 +498,5 @@ async def coder_node(
state,
config,
"coder",
coder_agent,
[python_repl_tool],
)
+3 -3
View File
@@ -1,12 +1,10 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import operator
from typing import Annotated
from langgraph.graph import MessagesState
from src.prompts.planner_model import Plan
from src.rag import Resource
class State(MessagesState):
@@ -14,7 +12,9 @@ class State(MessagesState):
# Runtime Variables
locale: str = "en-US"
research_topic: str = ""
observations: list[str] = []
resources: list[Resource] = []
plan_iterations: int = 0
current_plan: Plan | str = None
final_report: str = ""
+29 -13
View File
@@ -3,6 +3,7 @@
from pathlib import Path
from typing import Any, Dict
import os
from langchain_openai import ChatOpenAI
@@ -13,18 +14,40 @@ from src.config.agents import LLMType
_llm_cache: dict[LLMType, ChatOpenAI] = {}
def _get_env_llm_conf(llm_type: str) -> Dict[str, Any]:
"""
Get LLM configuration from environment variables.
Environment variables should follow the format: {LLM_TYPE}__{KEY}
e.g., BASIC_MODEL__api_key, BASIC_MODEL__base_url
"""
prefix = f"{llm_type.upper()}_MODEL__"
conf = {}
for key, value in os.environ.items():
if key.startswith(prefix):
conf_key = key[len(prefix) :].lower()
conf[conf_key] = value
return conf
def _create_llm_use_conf(llm_type: LLMType, conf: Dict[str, Any]) -> ChatOpenAI:
llm_type_map = {
"reasoning": conf.get("REASONING_MODEL"),
"basic": conf.get("BASIC_MODEL"),
"vision": conf.get("VISION_MODEL"),
"reasoning": conf.get("REASONING_MODEL", {}),
"basic": conf.get("BASIC_MODEL", {}),
"vision": conf.get("VISION_MODEL", {}),
}
llm_conf = llm_type_map.get(llm_type)
if not llm_conf:
raise ValueError(f"Unknown LLM type: {llm_type}")
if not isinstance(llm_conf, dict):
raise ValueError(f"Invalid LLM Conf: {llm_type}")
return ChatOpenAI(**llm_conf)
# Get configuration from environment variables
env_conf = _get_env_llm_conf(llm_type)
# Merge configurations, with environment variables taking precedence
merged_conf = {**llm_conf, **env_conf}
if not merged_conf:
raise ValueError(f"Unknown LLM Conf: {llm_type}")
return ChatOpenAI(**merged_conf)
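For context, a minimal self-contained sketch of the precedence this hunk introduces: values parsed from `{LLM_TYPE}_MODEL__{key}` environment variables override the ones loaded from the config file. The `file_conf` values and environment variables below are illustrative, not taken from the project's configuration.

```python
import os

def env_overrides(llm_type: str) -> dict:
    # Mirrors _get_env_llm_conf: collect BASIC_MODEL__api_key-style variables.
    prefix = f"{llm_type.upper()}_MODEL__"
    return {
        key[len(prefix):].lower(): value
        for key, value in os.environ.items()
        if key.startswith(prefix)
    }

# Hypothetical values for illustration only.
file_conf = {"base_url": "https://example.com/v1", "model": "example-model"}
os.environ["BASIC_MODEL__API_KEY"] = "sk-from-env"
os.environ["BASIC_MODEL__MODEL"] = "example-model-mini"

merged = {**file_conf, **env_overrides("basic")}
print(merged)  # env wins on 'model', adds 'api_key', keeps 'base_url' from the file
```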
def get_llm_by_type(
@@ -44,13 +67,6 @@ def get_llm_by_type(
return llm
# Initialize LLMs for different purposes - now these will be cached
basic_llm = get_llm_by_type("basic")
# In the future, we will use reasoning_llm and vl_llm for different purposes
# reasoning_llm = get_llm_by_type("reasoning")
# vl_llm = get_llm_by_type("vision")
if __name__ == "__main__":
print(basic_llm.invoke("Hello"))
+4
@@ -0,0 +1,4 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
"""Prompt enhancer module for improving user prompts."""
+25
@@ -0,0 +1,25 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
from langgraph.graph import StateGraph
from src.prompt_enhancer.graph.enhancer_node import prompt_enhancer_node
from src.prompt_enhancer.graph.state import PromptEnhancerState
def build_graph():
"""Build and return the prompt enhancer workflow graph."""
# Build state graph
builder = StateGraph(PromptEnhancerState)
# Add the enhancer node
builder.add_node("enhancer", prompt_enhancer_node)
# Set entry point
builder.set_entry_point("enhancer")
# Set finish point
builder.set_finish_point("enhancer")
# Compile and return the graph
return builder.compile()
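A minimal usage sketch for the new enhancer graph, assuming the repository is importable and an LLM is configured; the input values are illustrative.

```python
from src.config.report_style import ReportStyle
from src.prompt_enhancer.graph.builder import build_graph

graph = build_graph()
final_state = graph.invoke(
    {
        "prompt": "Write about AI",                       # original prompt
        "context": "Target audience: graduate students",  # optional context
        "report_style": ReportStyle.ACADEMIC,
    }
)
print(final_state["output"])  # enhanced prompt, or the original prompt on failure
```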
@@ -0,0 +1,67 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import logging
from langchain.schema import HumanMessage, SystemMessage
from src.config.agents import AGENT_LLM_MAP
from src.llms.llm import get_llm_by_type
from src.prompts.template import env, apply_prompt_template
from src.prompt_enhancer.graph.state import PromptEnhancerState
logger = logging.getLogger(__name__)
def prompt_enhancer_node(state: PromptEnhancerState):
"""Node that enhances user prompts using AI analysis."""
logger.info("Enhancing user prompt...")
model = get_llm_by_type(AGENT_LLM_MAP["prompt_enhancer"])
try:
# Create messages with context if provided
context_info = ""
if state.get("context"):
context_info = f"\n\nAdditional context: {state['context']}"
original_prompt_message = HumanMessage(
content=f"Please enhance this prompt:{context_info}\n\nOriginal prompt: {state['prompt']}"
)
messages = apply_prompt_template(
"prompt_enhancer/prompt_enhancer",
{
"messages": [original_prompt_message],
"report_style": state.get("report_style"),
},
)
# Get the response from the model
response = model.invoke(messages)
# Clean up the response - remove any extra formatting or comments
enhanced_prompt = response.content.strip()
# Remove common prefixes that might be added by the model
prefixes_to_remove = [
"Enhanced Prompt:",
"Enhanced prompt:",
"Here's the enhanced prompt:",
"Here is the enhanced prompt:",
"**Enhanced Prompt**:",
"**Enhanced prompt**:",
]
for prefix in prefixes_to_remove:
if enhanced_prompt.startswith(prefix):
enhanced_prompt = enhanced_prompt[len(prefix) :].strip()
break
logger.info("Prompt enhancement completed successfully")
logger.debug(f"Enhanced prompt: {enhanced_prompt}")
return {"output": enhanced_prompt}
except Exception as e:
logger.error(f"Error in prompt enhancement: {str(e)}")
return {"output": state["prompt"]}
+14
@@ -0,0 +1,14 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
from typing import TypedDict, Optional
from src.config.report_style import ReportStyle
class PromptEnhancerState(TypedDict):
"""State for the prompt enhancer workflow."""
prompt: str # Original prompt to enhance
context: Optional[str] # Additional context
report_style: Optional[ReportStyle] # Report style preference
output: Optional[str] # Enhanced prompt result
+24 -23
@@ -57,14 +57,15 @@ Before creating a detailed plan, assess if there is sufficient context to answer
Different types of steps have different web search requirements:
1. **Research Steps** (`need_web_search: true`):
1. **Research Steps** (`need_search: true`):
- Retrieving information from files whose URLs have a `rag://` or `http://` prefix, as specified by the user
- Gathering market data or industry trends
- Finding historical information
- Collecting competitor analysis
- Researching current events or news
- Finding statistical data or reports
2. **Data Processing Steps** (`need_web_search: false`):
2. **Data Processing Steps** (`need_search: false`):
- API calls and data extraction
- Database queries
- Raw data collection from existing sources
@@ -74,10 +75,10 @@ Different types of steps have different web search requirements:
## Exclusions
- **No Direct Calculations in Research Steps**:
- Research steps should only gather data and information
- All mathematical calculations must be handled by processing steps
- Numerical analysis must be delegated to processing steps
- Research steps focus on information gathering only
- Research steps should only gather data and information
- All mathematical calculations must be handled by processing steps
- Numerical analysis must be delegated to processing steps
- Research steps focus on information gathering only
## Analysis Framework
@@ -135,16 +136,16 @@ When planning information gathering, consider these key aspects and ensure COMPR
- To begin with, repeat user's requirement in your own words as `thought`.
- Rigorously assess if there is sufficient context to answer the question using the strict criteria above.
- If context is sufficient:
- Set `has_enough_context` to true
- No need to create information gathering steps
- Set `has_enough_context` to true
- No need to create information gathering steps
- If context is insufficient (default assumption):
- Break down the required information using the Analysis Framework
- Create NO MORE THAN {{ max_step_num }} focused and comprehensive steps that cover the most essential aspects
- Ensure each step is substantial and covers related information categories
- Prioritize breadth and depth within the {{ max_step_num }}-step constraint
- For each step, carefully assess if web search is needed:
- Research and external data gathering: Set `need_web_search: true`
- Internal data processing: Set `need_web_search: false`
- Break down the required information using the Analysis Framework
- Create NO MORE THAN {{ max_step_num }} focused and comprehensive steps that cover the most essential aspects
- Ensure each step is substantial and covers related information categories
- Prioritize breadth and depth within the {{ max_step_num }}-step constraint
- For each step, carefully assess if web search is needed:
- Research and external data gathering: Set `need_search: true`
- Internal data processing: Set `need_search: false`
- Specify the exact data to be collected in step's `description`. Include a `note` if necessary.
- Prioritize depth and volume of relevant information - limited information is not acceptable.
- Use the same language as the user to generate the plan.
@@ -156,10 +157,10 @@ Directly output the raw JSON format of `Plan` without "```json". The `Plan` inte
```ts
interface Step {
need_web_search: boolean; // Must be explicitly set for each step
need_search: boolean; // Must be explicitly set for each step
title: string;
description: string; // Specify exactly what data to collect
step_type: "research" | "processing"; // Indicates the nature of the step
description: string; // Specify exactly what data to collect. If the user input contains a link, please retain the full Markdown format when necessary.
step_type: "research" | "processing"; // Indicates the nature of the step
}
interface Plan {
@@ -167,7 +168,7 @@ interface Plan {
has_enough_context: boolean;
thought: string;
title: string;
steps: Step[]; // Research & Processing steps to get more context
steps: Step[]; // Research & Processing steps to get more context
}
```
@@ -179,8 +180,8 @@ interface Plan {
- Prioritize BOTH breadth (covering essential aspects) AND depth (detailed information on each aspect)
- Never settle for minimal information - the goal is a comprehensive, detailed final report
- Limited or insufficient information will lead to an inadequate final report
- Carefully assess each step's web search requirement based on its nature:
- Research steps (`need_web_search: true`) for gathering information
- Processing steps (`need_web_search: false`) for calculations and data processing
- Carefully assess each step's web search or retrieve from URL requirement based on its nature:
- Research steps (`need_search: true`) for gathering information
- Processing steps (`need_search: false`) for calculations and data processing
- Default to gathering more information unless the strictest sufficient context criteria are met
- Always use the language specified by the locale = **{{ locale }}**.
- Always use the language specified by the locale = **{{ locale }}**.
+2 -4
@@ -13,9 +13,7 @@ class StepType(str, Enum):
class Step(BaseModel):
need_web_search: bool = Field(
..., description="Must be explicitly set for each step"
)
need_search: bool = Field(..., description="Must be explicitly set for each step")
title: str
description: str = Field(..., description="Specify exactly what data to collect")
step_type: StepType = Field(..., description="Indicates the nature of the step")
@@ -47,7 +45,7 @@ class Plan(BaseModel):
"title": "AI Market Research Plan",
"steps": [
{
"need_web_search": True,
"need_search": True,
"title": "Current AI Market Analysis",
"description": (
"Collect data on market size, growth rates, major players, and investment trends in AI sector."
@@ -0,0 +1,104 @@
---
CURRENT_TIME: {{ CURRENT_TIME }}
---
You are an expert prompt engineer. Your task is to enhance user prompts to make them more effective, specific, and likely to produce high-quality results from AI systems.
# Your Role
- Analyze the original prompt for clarity, specificity, and completeness
- Enhance the prompt by adding relevant details, context, and structure
- Make the prompt more actionable and results-oriented
- Preserve the user's original intent while improving effectiveness
{% if report_style == "academic" %}
# Enhancement Guidelines for Academic Style
1. **Add methodological rigor**: Include research methodology, scope, and analytical framework
2. **Specify academic structure**: Organize with clear thesis, literature review, analysis, and conclusions
3. **Clarify scholarly expectations**: Specify citation requirements, evidence standards, and academic tone
4. **Add theoretical context**: Include relevant theoretical frameworks and disciplinary perspectives
5. **Ensure precision**: Use precise terminology and avoid ambiguous language
6. **Include limitations**: Acknowledge scope limitations and potential biases
{% elif report_style == "popular_science" %}
# Enhancement Guidelines for Popular Science Style
1. **Add accessibility**: Transform technical concepts into relatable analogies and examples
2. **Improve narrative structure**: Organize as an engaging story with clear beginning, middle, and end
3. **Clarify audience expectations**: Specify general audience level and engagement goals
4. **Add human context**: Include real-world applications and human interest elements
5. **Make it compelling**: Ensure the prompt guides toward fascinating and wonder-inspiring content
6. **Include visual elements**: Suggest use of metaphors and descriptive language for complex concepts
{% elif report_style == "news" %}
# Enhancement Guidelines for News Style
1. **Add journalistic rigor**: Include fact-checking requirements, source verification, and objectivity standards
2. **Improve news structure**: Organize with inverted pyramid structure (most important information first)
3. **Clarify reporting expectations**: Specify timeliness, accuracy, and balanced perspective requirements
4. **Add contextual background**: Include relevant background information and broader implications
5. **Make it newsworthy**: Ensure the prompt focuses on current relevance and public interest
6. **Include attribution**: Specify source requirements and quote standards
{% elif report_style == "social_media" %}
# Enhancement Guidelines for Social Media Style
1. **Add engagement focus**: Include attention-grabbing elements, hooks, and shareability factors
2. **Improve platform structure**: Organize for specific platform requirements (character limits, hashtags, etc.)
3. **Clarify audience expectations**: Specify target demographic and engagement goals
4. **Add viral elements**: Include trending topics, relatable content, and interactive elements
5. **Make it shareable**: Ensure the prompt guides toward content that encourages sharing and discussion
6. **Include visual considerations**: Suggest emoji usage, formatting, and visual appeal elements
{% else %}
# General Enhancement Guidelines
1. **Add specificity**: Include relevant details, scope, and constraints
2. **Improve structure**: Organize the request logically with clear sections if needed
3. **Clarify expectations**: Specify desired output format, length, or style
4. **Add context**: Include background information that would help generate better results
5. **Make it actionable**: Ensure the prompt guides toward concrete, useful outputs
{% endif %}
# Output Requirements
- Output ONLY the enhanced prompt
- Do NOT include any explanations, comments, or meta-text
- Do NOT use phrases like "Enhanced Prompt:" or "Here's the enhanced version:"
- The output should be ready to use directly as a prompt
{% if report_style == "academic" %}
# Academic Style Examples
**Original**: "Write about AI"
**Enhanced**: "Conduct a comprehensive academic analysis of artificial intelligence applications across three key sectors: healthcare, education, and business. Employ a systematic literature review methodology to examine peer-reviewed sources from the past five years. Structure your analysis with: (1) theoretical framework defining AI and its taxonomies, (2) sector-specific case studies with quantitative performance metrics, (3) critical evaluation of implementation challenges and ethical considerations, (4) comparative analysis across sectors, and (5) evidence-based recommendations for future research directions. Maintain academic rigor with proper citations, acknowledge methodological limitations, and present findings with appropriate hedging language. Target length: 3000-4000 words with APA formatting."
**Original**: "Explain climate change"
**Enhanced**: "Provide a rigorous academic examination of anthropogenic climate change, synthesizing current scientific consensus and recent research developments. Structure your analysis as follows: (1) theoretical foundations of greenhouse effect and radiative forcing mechanisms, (2) systematic review of empirical evidence from paleoclimatic, observational, and modeling studies, (3) critical analysis of attribution studies linking human activities to observed warming, (4) evaluation of climate sensitivity estimates and uncertainty ranges, (5) assessment of projected impacts under different emission scenarios, and (6) discussion of research gaps and methodological limitations. Include quantitative data, statistical significance levels, and confidence intervals where appropriate. Cite peer-reviewed sources extensively and maintain objective, third-person academic voice throughout."
{% elif report_style == "popular_science" %}
# Popular Science Style Examples
**Original**: "Write about AI"
**Enhanced**: "Tell the fascinating story of how artificial intelligence is quietly revolutionizing our daily lives in ways most people never realize. Take readers on an engaging journey through three surprising realms: the hospital where AI helps doctors spot diseases faster than ever before, the classroom where intelligent tutors adapt to each student's learning style, and the boardroom where algorithms are making million-dollar decisions. Use vivid analogies (like comparing neural networks to how our brains work) and real-world examples that readers can relate to. Include 'wow factor' moments that showcase AI's incredible capabilities, but also honest discussions about current limitations. Write with infectious enthusiasm while maintaining scientific accuracy, and conclude with exciting possibilities that await us in the near future. Aim for 1500-2000 words that feel like a captivating conversation with a brilliant friend."
**Original**: "Explain climate change"
**Enhanced**: "Craft a compelling narrative that transforms the complex science of climate change into an accessible and engaging story for curious readers. Begin with a relatable scenario (like why your hometown weather feels different than when you were a kid) and use this as a gateway to explore the fascinating science behind our changing planet. Employ vivid analogies - compare Earth's atmosphere to a blanket, greenhouse gases to invisible heat-trapping molecules, and climate feedback loops to a snowball rolling downhill. Include surprising facts and 'aha moments' that will make readers think differently about the world around them. Weave in human stories of scientists making discoveries, communities adapting to change, and innovative solutions being developed. Balance the serious implications with hope and actionable insights, concluding with empowering steps readers can take. Write with wonder and curiosity, making complex concepts feel approachable and personally relevant."
{% elif report_style == "news" %}
# News Style Examples
**Original**: "Write about AI"
**Enhanced**: "Report on the current state and immediate impact of artificial intelligence across three critical sectors: healthcare, education, and business. Lead with the most newsworthy developments and recent breakthroughs that are affecting people today. Structure using inverted pyramid format: start with key findings and immediate implications, then provide essential background context, followed by detailed analysis and expert perspectives. Include specific, verifiable data points, recent statistics, and quotes from credible sources including industry leaders, researchers, and affected stakeholders. Address both benefits and concerns with balanced reporting, fact-check all claims, and provide proper attribution for all information. Focus on timeliness and relevance to current events, highlighting what's happening now and what readers need to know. Maintain journalistic objectivity while making the significance clear to a general news audience. Target 800-1200 words following AP style guidelines."
**Original**: "Explain climate change"
**Enhanced**: "Provide comprehensive news coverage of climate change that explains the current scientific understanding and immediate implications for readers. Lead with the most recent and significant developments in climate science, policy, or impacts that are making headlines today. Structure the report with: breaking developments first, essential background for understanding the issue, current scientific consensus with specific data and timeframes, real-world impacts already being observed, policy responses and debates, and what experts say comes next. Include quotes from credible climate scientists, policy makers, and affected communities. Present information objectively while clearly communicating the scientific consensus, fact-check all claims, and provide proper source attribution. Address common misconceptions with factual corrections. Focus on what's happening now, why it matters to readers, and what they can expect in the near future. Follow journalistic standards for accuracy, balance, and timeliness."
{% elif report_style == "social_media" %}
# Social Media Style Examples
**Original**: "Write about AI"
**Enhanced**: "Create engaging social media content about AI that will stop the scroll and spark conversations! Start with an attention-grabbing hook like 'You won't believe what AI just did in hospitals this week 🤯' and structure as a compelling thread or post series. Include surprising facts, relatable examples (like AI helping doctors spot diseases or personalizing your Netflix recommendations), and interactive elements that encourage sharing and comments. Use strategic hashtags (#AI #Technology #Future), incorporate relevant emojis for visual appeal, and include questions that prompt audience engagement ('Have you noticed AI in your daily life? Drop examples below! 👇'). Make complex concepts digestible with bite-sized explanations, trending analogies, and shareable quotes. Include a clear call-to-action and optimize for the specific platform (Twitter threads, Instagram carousel, LinkedIn professional insights, or TikTok-style quick facts). Aim for high shareability with content that feels both informative and entertaining."
**Original**: "Explain climate change"
**Enhanced**: "Develop viral-worthy social media content that makes climate change accessible and shareable without being preachy. Open with a scroll-stopping hook like 'The weather app on your phone is telling a bigger story than you think 📱🌡️' and break down complex science into digestible, engaging chunks. Use relatable comparisons (Earth's fever, atmosphere as a blanket), trending formats (before/after visuals, myth-busting series, quick facts), and interactive elements (polls, questions, challenges). Include strategic hashtags (#ClimateChange #Science #Environment), eye-catching emojis, and shareable graphics or infographics. Address common questions and misconceptions with clear, factual responses. Create content that encourages positive action rather than climate anxiety, ending with empowering steps followers can take. Optimize for platform-specific features (Instagram Stories, TikTok trends, Twitter threads) and include calls-to-action that drive engagement and sharing."
{% else %}
# General Examples
**Original**: "Write about AI"
**Enhanced**: "Write a comprehensive 1000-word analysis of artificial intelligence's current applications in healthcare, education, and business. Include specific examples of AI tools being used in each sector, discuss both benefits and challenges, and provide insights into future trends. Structure the response with clear sections for each industry and conclude with key takeaways."
**Original**: "Explain climate change"
**Enhanced**: "Provide a detailed explanation of climate change suitable for a general audience. Cover the scientific mechanisms behind global warming, major causes including greenhouse gas emissions, observable effects we're seeing today, and projected future impacts. Include specific data and examples, and explain the difference between weather and climate. Organize the response with clear headings and conclude with actionable steps individuals can take."
{% endif %}
+159 -2
@@ -2,7 +2,21 @@
CURRENT_TIME: {{ CURRENT_TIME }}
---
You are a professional reporter responsible for writing clear, comprehensive reports based ONLY on provided information and verifiable facts.
{% if report_style == "academic" %}
You are a distinguished academic researcher and scholarly writer. Your report must embody the highest standards of academic rigor and intellectual discourse. Write with the precision of a peer-reviewed journal article, employing sophisticated analytical frameworks, comprehensive literature synthesis, and methodological transparency. Your language should be formal, technical, and authoritative, utilizing discipline-specific terminology with exactitude. Structure arguments logically with clear thesis statements, supporting evidence, and nuanced conclusions. Maintain complete objectivity, acknowledge limitations, and present balanced perspectives on controversial topics. The report should demonstrate deep scholarly engagement and contribute meaningfully to academic knowledge.
{% elif report_style == "popular_science" %}
You are an award-winning science communicator and storyteller. Your mission is to transform complex scientific concepts into captivating narratives that spark curiosity and wonder in everyday readers. Write with the enthusiasm of a passionate educator, using vivid analogies, relatable examples, and compelling storytelling techniques. Your tone should be warm, approachable, and infectious in its excitement about discovery. Break down technical jargon into accessible language without sacrificing accuracy. Use metaphors, real-world comparisons, and human interest angles to make abstract concepts tangible. Think like a National Geographic writer or a TED Talk presenter - engaging, enlightening, and inspiring.
{% elif report_style == "news" %}
You are an NBC News correspondent and investigative journalist with decades of experience in breaking news and in-depth reporting. Your report must exemplify the gold standard of American broadcast journalism: authoritative, meticulously researched, and delivered with the gravitas and credibility that NBC News is known for. Write with the precision of a network news anchor, employing the classic inverted pyramid structure while weaving compelling human narratives. Your language should be clear, authoritative, and accessible to prime-time television audiences. Maintain NBC's tradition of balanced reporting, thorough fact-checking, and ethical journalism. Think like Lester Holt or Andrea Mitchell - delivering complex stories with clarity, context, and unwavering integrity.
{% elif report_style == "social_media" %}
{% if locale == "zh-CN" %}
You are a popular 小红书 (Xiaohongshu) content creator specializing in lifestyle and knowledge sharing. Your report should embody the authentic, personal, and engaging style that resonates with 小红书 users. Write with genuine enthusiasm and a "姐妹们" (sisters) tone, as if sharing exciting discoveries with close friends. Use abundant emojis, create "种草" (grass-planting/recommendation) moments, and structure content for easy mobile consumption. Your writing should feel like a personal diary entry mixed with expert insights - warm, relatable, and irresistibly shareable. Think like a top 小红书 blogger who effortlessly combines personal experience with valuable information, making readers feel like they've discovered a hidden gem.
{% else %}
You are a viral Twitter content creator and digital influencer specializing in breaking down complex topics into engaging, shareable threads. Your report should be optimized for maximum engagement and viral potential across social media platforms. Write with energy, authenticity, and a conversational tone that resonates with global online communities. Use strategic hashtags, create quotable moments, and structure content for easy consumption and sharing. Think like a successful Twitter thought leader who can make any topic accessible, engaging, and discussion-worthy while maintaining credibility and accuracy.
{% endif %}
{% else %}
You are a professional reporter responsible for writing clear, comprehensive reports based ONLY on provided information and verifiable facts. Your report should adopt a professional tone.
{% endif %}
# Role
@@ -43,10 +57,40 @@ Structure your report in the following format:
- **Including images from the previous steps in the report is very helpful.**
5. **Survey Note** (for more comprehensive reports)
{% if report_style == "academic" %}
- **Literature Review & Theoretical Framework**: Comprehensive analysis of existing research and theoretical foundations
- **Methodology & Data Analysis**: Detailed examination of research methods and analytical approaches
- **Critical Discussion**: In-depth evaluation of findings with consideration of limitations and implications
- **Future Research Directions**: Identification of gaps and recommendations for further investigation
{% elif report_style == "popular_science" %}
- **The Bigger Picture**: How this research fits into the broader scientific landscape
- **Real-World Applications**: Practical implications and potential future developments
- **Behind the Scenes**: Interesting details about the research process and challenges faced
- **What's Next**: Exciting possibilities and upcoming developments in the field
{% elif report_style == "news" %}
- **NBC News Analysis**: In-depth examination of the story's broader implications and significance
- **Impact Assessment**: How these developments affect different communities, industries, and stakeholders
- **Expert Perspectives**: Insights from credible sources, analysts, and subject matter experts
- **Timeline & Context**: Chronological background and historical context essential for understanding
- **What's Next**: Expected developments, upcoming milestones, and stories to watch
{% elif report_style == "social_media" %}
{% if locale == "zh-CN" %}
- **【种草时刻】**: 最值得关注的亮点和必须了解的核心信息
- **【数据震撼】**: 用小红书风格展示重要统计数据和发现
- **【姐妹们的看法】**: 社区热议话题和大家的真实反馈
- **【行动指南】**: 实用建议和读者可以立即行动的清单
{% else %}
- **Thread Highlights**: Key takeaways formatted for maximum shareability
- **Data That Matters**: Important statistics and findings presented for viral potential
- **Community Pulse**: Trending discussions and reactions from the online community
- **Action Steps**: Practical advice and immediate next steps for readers
{% endif %}
{% else %}
- A more detailed, academic-style analysis.
- Include comprehensive sections covering all aspects of the topic.
- Can include comparative analysis, tables, and detailed feature breakdowns.
- This section is optional for shorter reports.
{% endif %}
6. **Key Citations**
- List all references at the end in link reference format.
@@ -56,7 +100,64 @@ Structure your report in the following format:
# Writing Guidelines
1. Writing style:
- Use professional tone.
{% if report_style == "academic" %}
**Academic Excellence Standards:**
- Employ sophisticated, formal academic discourse with discipline-specific terminology
- Construct complex, nuanced arguments with clear thesis statements and logical progression
- Use third-person perspective and passive voice where appropriate for objectivity
- Include methodological considerations and acknowledge research limitations
- Reference theoretical frameworks and cite relevant scholarly work patterns
- Maintain intellectual rigor with precise, unambiguous language
- Avoid contractions, colloquialisms, and informal expressions entirely
- Use hedging language appropriately ("suggests," "indicates," "appears to")
{% elif report_style == "popular_science" %}
**Science Communication Excellence:**
- Write with infectious enthusiasm and genuine curiosity about discoveries
- Transform technical jargon into vivid, relatable analogies and metaphors
- Use active voice and engaging narrative techniques to tell scientific stories
- Include "wow factor" moments and surprising revelations to maintain interest
- Employ conversational tone while maintaining scientific accuracy
- Use rhetorical questions to engage readers and guide their thinking
- Include human elements: researcher personalities, discovery stories, real-world impacts
- Balance accessibility with intellectual respect for your audience
{% elif report_style == "news" %}
**NBC News Editorial Standards:**
- Open with a compelling lede that captures the essence of the story in 25-35 words
- Use the classic inverted pyramid: most newsworthy information first, supporting details follow
- Write in clear, conversational broadcast style that sounds natural when read aloud
- Employ active voice and strong, precise verbs that convey action and urgency
- Attribute every claim to specific, credible sources using NBC's attribution standards
- Use present tense for ongoing situations, past tense for completed events
- Maintain NBC's commitment to balanced reporting with multiple perspectives
- Include essential context and background without overwhelming the main story
- Verify information through at least two independent sources when possible
- Clearly label speculation, analysis, and ongoing investigations
- Use transitional phrases that guide readers smoothly through the narrative
{% elif report_style == "social_media" %}
{% if locale == "zh-CN" %}
**小红书风格写作标准:**
- 用"姐妹们!"、"宝子们!"等亲切称呼开头,营造闺蜜聊天氛围
- 大量使用emoji表情符号增强表达力和视觉吸引力 ✨
- 采用"种草"语言:"真的绝了!"、"必须安利给大家!"、"不看后悔系列!"
- 使用小红书特色标题格式:"【干货分享】"、"【亲测有效】"、"【避雷指南】"
- 穿插个人感受和体验:"我当时看到这个数据真的震惊了!"
- 用数字和符号增强视觉效果:①②③、✅❌、🔥💡⭐
- 创造"金句"和可截图分享的内容段落
- 结尾用互动性语言:"你们觉得呢?"、"评论区聊聊!"、"记得点赞收藏哦!"
{% else %}
**Twitter/X Engagement Standards:**
- Open with attention-grabbing hooks that stop the scroll
- Use thread-style formatting with numbered points (1/n, 2/n, etc.)
- Incorporate strategic hashtags for discoverability and trending topics
- Write quotable, tweetable snippets that beg to be shared
- Use conversational, authentic voice with personality and wit
- Include relevant emojis to enhance meaning and visual appeal 🧵📊💡
- Create "thread-worthy" content with clear progression and payoff
- End with engagement prompts: "What do you think?", "Retweet if you agree"
{% endif %}
{% else %}
- Use a professional tone.
{% endif %}
- Be concise and precise.
- Avoid speculation.
- Support claims with evidence.
@@ -77,6 +178,62 @@ Structure your report in the following format:
- Use horizontal rules (---) to separate major sections.
- Track the sources of information but keep the main text clean and readable.
{% if report_style == "academic" %}
**Academic Formatting Specifications:**
- Use formal section headings with clear hierarchical structure (## Introduction, ### Methodology, #### Subsection)
- Employ numbered lists for methodological steps and logical sequences
- Use block quotes for important definitions or key theoretical concepts
- Include detailed tables with comprehensive headers and statistical data
- Use footnote-style formatting for additional context or clarifications
- Maintain consistent academic citation patterns throughout
- Use `code blocks` for technical specifications, formulas, or data samples
{% elif report_style == "popular_science" %}
**Science Communication Formatting:**
- Use engaging, descriptive headings that spark curiosity ("The Surprising Discovery That Changed Everything")
- Employ creative formatting like callout boxes for "Did You Know?" facts
- Use bullet points for easy-to-digest key findings
- Include visual breaks with strategic use of bold text for emphasis
- Format analogies and metaphors prominently to aid understanding
- Use numbered lists for step-by-step explanations of complex processes
- Highlight surprising statistics or findings with special formatting
{% elif report_style == "news" %}
**NBC News Formatting Standards:**
- Craft headlines that are informative yet compelling, following NBC's style guide
- Use NBC-style datelines and bylines for professional credibility
- Structure paragraphs for broadcast readability (1-2 sentences for digital, 2-3 for print)
- Employ strategic subheadings that advance the story narrative
- Format direct quotes with proper attribution and context
- Use bullet points sparingly, primarily for breaking news updates or key facts
- Include "BREAKING" or "DEVELOPING" labels for ongoing stories
- Format source attribution clearly: "according to NBC News," "sources tell NBC News"
- Use italics for emphasis on key terms or breaking developments
- Structure the story with clear sections: Lede, Context, Analysis, Looking Ahead
{% elif report_style == "social_media" %}
{% if locale == "zh-CN" %}
**小红书格式优化标准:**
- 使用吸睛标题配合emoji:"🔥【重磅】这个发现太震撼了!"
- 关键数据用醒目格式突出:「 重点数据 」或 ⭐ 核心发现 ⭐
- 适度使用大写强调:真的YYDS!、绝绝子!
- 用emoji作为分点符号:✨、🌟、、💯
- 创建话题标签区域:#科技前沿 #必看干货 #涨知识了
- 设置"划重点"总结区域,方便快速阅读
- 利用换行和空白营造手机阅读友好的版式
- 制作"金句卡片"格式,便于截图分享
- 使用分割线和特殊符号:「」『』【】━━━━━━
{% else %}
**Twitter/X Formatting Standards:**
- Use compelling headlines with strategic emoji placement 🧵⚡️🔥
- Format key insights as standalone, quotable tweet blocks
- Employ thread numbering for multi-part content (1/12, 2/12, etc.)
- Use bullet points with emoji bullets for visual appeal
- Include strategic hashtags at the end: #TechNews #Innovation #MustRead
- Create "TL;DR" summaries for quick consumption
- Use line breaks and white space for mobile readability
- Format "quotable moments" with clear visual separation
- Include call-to-action elements: "🔄 RT to share" "💬 What's your take?"
{% endif %}
{% endif %}
# Data Integrity
- Only use information explicitly provided in the input.
+4 -1
@@ -11,6 +11,9 @@ You are dedicated to conducting thorough investigations using search tools and p
You have access to two types of tools:
1. **Built-in Tools**: These are always available:
{% if resources %}
- **local_search_tool**: For retrieving information from the local knowledge base when the user mentions it in the messages.
{% endif %}
- **web_search_tool**: For performing web searches
- **crawl_tool**: For reading content from URLs
@@ -34,7 +37,7 @@ You have access to two types of tools:
3. **Plan the Solution**: Determine the best approach to solve the problem using the available tools.
4. **Execute the Solution**:
- Forget your previous knowledge, so you **should leverage the tools** to retrieve the information.
- Use the **web_search_tool** or other suitable search tool to perform a search with the provided keywords.
- Use the {% if resources %}**local_search_tool** or {% endif %}**web_search_tool** or another suitable search tool to perform a search with the provided keywords.
- When the task includes time range requirements:
- Incorporate appropriate time-based search parameters in your queries (e.g., "after:2020", "before:2023", or specific date ranges)
- Ensure search results respect the specified time constraints.
+8
@@ -0,0 +1,8 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
from .retriever import Retriever, Document, Resource
from .ragflow import RAGFlowProvider
from .builder import build_retriever
__all__ = ["Retriever", "Document", "Resource", "RAGFlowProvider", "build_retriever"]
+14
@@ -0,0 +1,14 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
from src.config.tools import SELECTED_RAG_PROVIDER, RAGProvider
from src.rag.ragflow import RAGFlowProvider
from src.rag.retriever import Retriever
def build_retriever() -> Retriever | None:
if SELECTED_RAG_PROVIDER == RAGProvider.RAGFLOW.value:
return RAGFlowProvider()
elif SELECTED_RAG_PROVIDER:
raise ValueError(f"Unsupported RAG provider: {SELECTED_RAG_PROVIDER}")
return None
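An illustrative environment setup for the new builder. `RAGFLOW_API_URL`, `RAGFLOW_API_KEY`, and `RAGFLOW_PAGE_SIZE` are read by the `RAGFlowProvider` in the next file; the provider selection itself comes from `SELECTED_RAG_PROVIDER` in `src.config.tools`, whose backing variable is not shown in this diff.

```python
import os

# Placeholder values; replace with a real RAGFlow endpoint and key.
os.environ["RAGFLOW_API_URL"] = "http://localhost:9380"
os.environ["RAGFLOW_API_KEY"] = "ragflow-xxxx"
os.environ["RAGFLOW_PAGE_SIZE"] = "10"

from src.rag.builder import build_retriever

# Returns a RAGFlowProvider when the selected provider resolves to RAGFlow, else None.
retriever = build_retriever()
```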
+124
@@ -0,0 +1,124 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import os
import requests
from src.rag.retriever import Chunk, Document, Resource, Retriever
from urllib.parse import urlparse
class RAGFlowProvider(Retriever):
"""
RAGFlowProvider is a provider that uses RAGFlow to retrieve documents.
"""
api_url: str
api_key: str
page_size: int = 10
def __init__(self):
api_url = os.getenv("RAGFLOW_API_URL")
if not api_url:
raise ValueError("RAGFLOW_API_URL is not set")
self.api_url = api_url
api_key = os.getenv("RAGFLOW_API_KEY")
if not api_key:
raise ValueError("RAGFLOW_API_KEY is not set")
self.api_key = api_key
page_size = os.getenv("RAGFLOW_PAGE_SIZE")
if page_size:
self.page_size = int(page_size)
def query_relevant_documents(
self, query: str, resources: list[Resource] = []
) -> list[Document]:
headers = {
"Authorization": f"Bearer {self.api_key}",
"Content-Type": "application/json",
}
dataset_ids: list[str] = []
document_ids: list[str] = []
for resource in resources:
dataset_id, document_id = parse_uri(resource.uri)
dataset_ids.append(dataset_id)
if document_id:
document_ids.append(document_id)
payload = {
"question": query,
"dataset_ids": dataset_ids,
"document_ids": document_ids,
"page_size": self.page_size,
}
response = requests.post(
f"{self.api_url}/api/v1/retrieval", headers=headers, json=payload
)
if response.status_code != 200:
raise Exception(f"Failed to query documents: {response.text}")
result = response.json()
data = result.get("data", {})
doc_aggs = data.get("doc_aggs", [])
docs: dict[str, Document] = {
doc.get("doc_id"): Document(
id=doc.get("doc_id"),
title=doc.get("doc_name"),
chunks=[],
)
for doc in doc_aggs
}
for chunk in data.get("chunks", []):
doc = docs.get(chunk.get("document_id"))
if doc:
doc.chunks.append(
Chunk(
content=chunk.get("content"),
similarity=chunk.get("similarity"),
)
)
return list(docs.values())
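To make the parsing above easier to follow, here is how a hypothetical `/api/v1/retrieval` response would be grouped; the sample payload is invented for clarity and is not taken from RAGFlow's documentation.

```python
sample = {
    "data": {
        "doc_aggs": [{"doc_id": "d1", "doc_name": "Journey to the West"}],
        "chunks": [
            {"document_id": "d1", "content": "Chapter 27 ...", "similarity": 0.83},
            {"document_id": "d1", "content": "Chapter 28 ...", "similarity": 0.79},
        ],
    }
}

# Group chunks under their parent document, as query_relevant_documents does.
docs = {d["doc_id"]: {"title": d["doc_name"], "chunks": []} for d in sample["data"]["doc_aggs"]}
for chunk in sample["data"]["chunks"]:
    docs[chunk["document_id"]]["chunks"].append((chunk["content"], chunk["similarity"]))
print(docs)  # {'d1': {'title': 'Journey to the West', 'chunks': [(..., 0.83), (..., 0.79)]}}
```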
def list_resources(self, query: str | None = None) -> list[Resource]:
headers = {
"Authorization": f"Bearer {self.api_key}",
"Content-Type": "application/json",
}
params = {}
if query:
params["name"] = query
response = requests.get(
f"{self.api_url}/api/v1/datasets", headers=headers, params=params
)
if response.status_code != 200:
raise Exception(f"Failed to list resources: {response.text}")
result = response.json()
resources = []
for item in result.get("data", []):
item = Resource(
uri=f"rag://dataset/{item.get('id')}",
title=item.get("name", ""),
description=item.get("description", ""),
)
resources.append(item)
return resources
def parse_uri(uri: str) -> tuple[str, str]:
parsed = urlparse(uri)
if parsed.scheme != "rag":
raise ValueError(f"Invalid URI: {uri}")
return parsed.path.split("/")[1], parsed.fragment
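The URI convention implied by the code above appears to be `rag://dataset/{dataset_id}` with an optional `#{document_id}` fragment; a standalone check of that reading:

```python
from urllib.parse import urlparse

def parse_rag_uri(uri: str) -> tuple[str, str]:
    # Same behavior as parse_uri above: dataset id from the path,
    # optional document id from the fragment.
    parsed = urlparse(uri)
    if parsed.scheme != "rag":
        raise ValueError(f"Invalid URI: {uri}")
    return parsed.path.split("/")[1], parsed.fragment

print(parse_rag_uri("rag://dataset/abc123"))         # ('abc123', '')
print(parse_rag_uri("rag://dataset/abc123#doc456"))  # ('abc123', 'doc456')
```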
+80
@@ -0,0 +1,80 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import abc
from pydantic import BaseModel, Field
class Chunk:
content: str
similarity: float
def __init__(self, content: str, similarity: float):
self.content = content
self.similarity = similarity
class Document:
"""
Document is a class that represents a document.
"""
id: str
url: str | None = None
title: str | None = None
chunks: list[Chunk] = []
def __init__(
self,
id: str,
url: str | None = None,
title: str | None = None,
chunks: list[Chunk] = [],
):
self.id = id
self.url = url
self.title = title
self.chunks = chunks
def to_dict(self) -> dict:
d = {
"id": self.id,
"content": "\n\n".join([chunk.content for chunk in self.chunks]),
}
if self.url:
d["url"] = self.url
if self.title:
d["title"] = self.title
return d
class Resource(BaseModel):
"""
Resource is a class that represents a resource.
"""
uri: str = Field(..., description="The URI of the resource")
title: str = Field(..., description="The title of the resource")
description: str | None = Field("", description="The description of the resource")
class Retriever(abc.ABC):
"""
Define a RAG provider, which can be used to query documents and resources.
"""
@abc.abstractmethod
def list_resources(self, query: str | None = None) -> list[Resource]:
"""
List resources from the rag provider.
"""
pass
@abc.abstractmethod
def query_relevant_documents(
self, query: str, resources: list[Resource] = []
) -> list[Document]:
"""
Query relevant documents from the resources.
"""
pass
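A minimal in-memory `Retriever` sketch, handy for tests; it assumes only the classes defined in this file.

```python
from src.rag.retriever import Chunk, Document, Resource, Retriever

class InMemoryRetriever(Retriever):
    """Toy retriever backed by a dict of {doc_id: text}."""

    def __init__(self, docs: dict[str, str]):
        self._docs = docs

    def list_resources(self, query: str | None = None) -> list[Resource]:
        return [
            Resource(uri=f"rag://dataset/memory#{doc_id}", title=doc_id)
            for doc_id in self._docs
            if query is None or query.lower() in doc_id.lower()
        ]

    def query_relevant_documents(
        self, query: str, resources: list[Resource] = []
    ) -> list[Document]:
        return [
            Document(id=doc_id, title=doc_id, chunks=[Chunk(content=text, similarity=1.0)])
            for doc_id, text in self._docs.items()
            if query.lower() in text.lower()
        ]
```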
+96 -13
@@ -5,22 +5,28 @@ import base64
import json
import logging
import os
from typing import List, cast
from typing import Annotated, List, cast
from uuid import uuid4
from fastapi import FastAPI, HTTPException
from fastapi import FastAPI, HTTPException, Query
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import Response, StreamingResponse
from langchain_core.messages import AIMessageChunk, ToolMessage
from langchain_core.messages import AIMessageChunk, ToolMessage, BaseMessage
from langgraph.types import Command
from src.config.report_style import ReportStyle
from src.config.tools import SELECTED_RAG_PROVIDER
from src.graph.builder import build_graph_with_memory
from src.podcast.graph.builder import build_graph as build_podcast_graph
from src.ppt.graph.builder import build_graph as build_ppt_graph
from src.prose.graph.builder import build_graph as build_prose_graph
from src.prompt_enhancer.graph.builder import build_graph as build_prompt_enhancer_graph
from src.rag.builder import build_retriever
from src.rag.retriever import Resource
from src.server.chat_request import (
ChatMessage,
ChatRequest,
EnhancePromptRequest,
GeneratePodcastRequest,
GeneratePPTRequest,
GenerateProseRequest,
@@ -28,10 +34,17 @@ from src.server.chat_request import (
)
from src.server.mcp_request import MCPServerMetadataRequest, MCPServerMetadataResponse
from src.server.mcp_utils import load_mcp_tools
from src.server.rag_request import (
RAGConfigResponse,
RAGResourceRequest,
RAGResourcesResponse,
)
from src.tools import VolcengineTTS
logger = logging.getLogger(__name__)
INTERNAL_SERVER_ERROR_DETAIL = "Internal Server Error"
app = FastAPI(
title="DeerFlow API",
description="API for Deer",
@@ -59,26 +72,32 @@ async def chat_stream(request: ChatRequest):
_astream_workflow_generator(
request.model_dump()["messages"],
thread_id,
request.resources,
request.max_plan_iterations,
request.max_step_num,
request.max_search_results,
request.auto_accepted_plan,
request.interrupt_feedback,
request.mcp_settings,
request.enable_background_investigation,
request.report_style,
),
media_type="text/event-stream",
)
async def _astream_workflow_generator(
messages: List[ChatMessage],
messages: List[dict],
thread_id: str,
resources: List[Resource],
max_plan_iterations: int,
max_step_num: int,
max_search_results: int,
auto_accepted_plan: bool,
interrupt_feedback: str,
mcp_settings: dict,
enable_background_investigation,
enable_background_investigation: bool,
report_style: ReportStyle,
):
input_ = {
"messages": messages,
@@ -88,6 +107,7 @@ async def _astream_workflow_generator(
"observations": [],
"auto_accepted_plan": auto_accepted_plan,
"enable_background_investigation": enable_background_investigation,
"research_topic": messages[-1]["content"] if messages else "",
}
if not auto_accepted_plan and interrupt_feedback:
resume_msg = f"[{interrupt_feedback}]"
@@ -99,9 +119,12 @@ async def _astream_workflow_generator(
input_,
config={
"thread_id": thread_id,
"resources": resources,
"max_plan_iterations": max_plan_iterations,
"max_step_num": max_step_num,
"max_search_results": max_search_results,
"mcp_settings": mcp_settings,
"report_style": report_style.value,
},
stream_mode=["messages", "updates"],
subgraphs=True,
@@ -124,7 +147,7 @@ async def _astream_workflow_generator(
)
continue
message_chunk, message_metadata = cast(
tuple[AIMessageChunk, dict[str, any]], event_data
tuple[BaseMessage, dict[str, any]], event_data
)
event_stream_message: dict[str, any] = {
"thread_id": thread_id,
@@ -141,7 +164,7 @@ async def _astream_workflow_generator(
# Tool Message - Return the result of the tool call
event_stream_message["tool_call_id"] = message_chunk.tool_call_id
yield _make_event("tool_call_result", event_stream_message)
else:
elif isinstance(message_chunk, AIMessageChunk):
# AI Message - Raw message tokens
if message_chunk.tool_calls:
# AI Message - Tool Call
@@ -220,7 +243,7 @@ async def text_to_speech(request: TTSRequest):
)
except Exception as e:
logger.exception(f"Error in TTS endpoint: {str(e)}")
raise HTTPException(status_code=500, detail=str(e))
raise HTTPException(status_code=500, detail=INTERNAL_SERVER_ERROR_DETAIL)
@app.post("/api/podcast/generate")
@@ -234,7 +257,7 @@ async def generate_podcast(request: GeneratePodcastRequest):
return Response(content=audio_bytes, media_type="audio/mp3")
except Exception as e:
logger.exception(f"Error occurred during podcast generation: {str(e)}")
raise HTTPException(status_code=500, detail=str(e))
raise HTTPException(status_code=500, detail=INTERNAL_SERVER_ERROR_DETAIL)
@app.post("/api/ppt/generate")
@@ -253,13 +276,14 @@ async def generate_ppt(request: GeneratePPTRequest):
)
except Exception as e:
logger.exception(f"Error occurred during ppt generation: {str(e)}")
raise HTTPException(status_code=500, detail=str(e))
raise HTTPException(status_code=500, detail=INTERNAL_SERVER_ERROR_DETAIL)
@app.post("/api/prose/generate")
async def generate_prose(request: GenerateProseRequest):
try:
logger.info(f"Generating prose for prompt: {request.prompt}")
sanitized_prompt = request.prompt.replace("\r\n", "").replace("\n", "")
logger.info(f"Generating prose for prompt: {sanitized_prompt}")
workflow = build_prose_graph()
events = workflow.astream(
{
@@ -276,7 +300,51 @@ async def generate_prose(request: GenerateProseRequest):
)
except Exception as e:
logger.exception(f"Error occurred during prose generation: {str(e)}")
raise HTTPException(status_code=500, detail=str(e))
raise HTTPException(status_code=500, detail=INTERNAL_SERVER_ERROR_DETAIL)
@app.post("/api/prompt/enhance")
async def enhance_prompt(request: EnhancePromptRequest):
try:
sanitized_prompt = request.prompt.replace("\r\n", "").replace("\n", "")
logger.info(f"Enhancing prompt: {sanitized_prompt}")
# Convert string report_style to ReportStyle enum
report_style = None
if request.report_style:
try:
# Handle both uppercase and lowercase input
style_mapping = {
"ACADEMIC": ReportStyle.ACADEMIC,
"POPULAR_SCIENCE": ReportStyle.POPULAR_SCIENCE,
"NEWS": ReportStyle.NEWS,
"SOCIAL_MEDIA": ReportStyle.SOCIAL_MEDIA,
"academic": ReportStyle.ACADEMIC,
"popular_science": ReportStyle.POPULAR_SCIENCE,
"news": ReportStyle.NEWS,
"social_media": ReportStyle.SOCIAL_MEDIA,
}
report_style = style_mapping.get(
request.report_style, ReportStyle.ACADEMIC
)
except Exception:
# If invalid style, default to ACADEMIC
report_style = ReportStyle.ACADEMIC
else:
report_style = ReportStyle.ACADEMIC
workflow = build_prompt_enhancer_graph()
final_state = workflow.invoke(
{
"prompt": request.prompt,
"context": request.context,
"report_style": report_style,
}
)
return {"result": final_state["output"]}
except Exception as e:
logger.exception(f"Error occurred during prompt enhancement: {str(e)}")
raise HTTPException(status_code=500, detail=INTERNAL_SERVER_ERROR_DETAIL)
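A request sketch for the new endpoint, assuming the server is running locally; field names mirror `EnhancePromptRequest` further down, and the values are illustrative.

```python
import requests

payload = {
    "prompt": "Write about AI",
    "context": "For a quarterly technology briefing",  # optional
    "report_style": "news",  # academic / popular_science / news / social_media
}
resp = requests.post("http://localhost:8000/api/prompt/enhance", json=payload)
print(resp.json())  # {"result": "<enhanced prompt>"}
```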
@app.post("/api/mcp/server/metadata", response_model=MCPServerMetadataResponse)
@@ -314,5 +382,20 @@ async def mcp_server_metadata(request: MCPServerMetadataRequest):
except Exception as e:
if not isinstance(e, HTTPException):
logger.exception(f"Error in MCP server metadata endpoint: {str(e)}")
raise HTTPException(status_code=500, detail=str(e))
raise HTTPException(status_code=500, detail=INTERNAL_SERVER_ERROR_DETAIL)
raise
@app.get("/api/rag/config", response_model=RAGConfigResponse)
async def rag_config():
"""Get the config of the RAG."""
return RAGConfigResponse(provider=SELECTED_RAG_PROVIDER)
@app.get("/api/rag/resources", response_model=RAGResourcesResponse)
async def rag_resources(request: Annotated[RAGResourceRequest, Query()]):
"""Get the resources of the RAG."""
retriever = build_retriever()
if retriever:
return RAGResourcesResponse(resources=retriever.list_resources(request.query))
return RAGResourcesResponse(resources=[])
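Client-side sketch of the two new RAG endpoints, assuming a locally running server; the response shapes follow the handlers above.

```python
import requests

base = "http://localhost:8000"

config = requests.get(f"{base}/api/rag/config").json()
print(config)  # e.g. {'provider': 'ragflow'} or {'provider': None}

resources = requests.get(f"{base}/api/rag/resources", params={"query": "knowledge"}).json()
print(resources)  # {'resources': [{'uri': 'rag://dataset/...', 'title': '...', 'description': '...'}]}
```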
+22
@@ -5,6 +5,9 @@ from typing import List, Optional, Union
from pydantic import BaseModel, Field
from src.rag.retriever import Resource
from src.config.report_style import ReportStyle
class ContentItem(BaseModel):
type: str = Field(..., description="The type of content (text, image, etc.)")
@@ -28,6 +31,9 @@ class ChatRequest(BaseModel):
messages: Optional[List[ChatMessage]] = Field(
[], description="History of messages between the user and the assistant"
)
resources: Optional[List[Resource]] = Field(
[], description="Resources to be used for the research"
)
debug: Optional[bool] = Field(False, description="Whether to enable debug logging")
thread_id: Optional[str] = Field(
"__default__", description="A specific conversation identifier"
@@ -38,6 +44,9 @@ class ChatRequest(BaseModel):
max_step_num: Optional[int] = Field(
3, description="The maximum number of steps in a plan"
)
max_search_results: Optional[int] = Field(
3, description="The maximum number of search results"
)
auto_accepted_plan: Optional[bool] = Field(
False, description="Whether to automatically accept the plan"
)
@@ -50,6 +59,9 @@ class ChatRequest(BaseModel):
enable_background_investigation: Optional[bool] = Field(
True, description="Whether to get background investigation before plan"
)
report_style: Optional[ReportStyle] = Field(
ReportStyle.ACADEMIC, description="The style of the report"
)
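The new `ChatRequest` fields can be exercised with a body like the sketch below; the message shape, the endpoint path, and the enum string value for `report_style` are assumptions, and the dataset id is a placeholder.

```python
payload = {
    "messages": [{"role": "user", "content": "Summarize the key findings"}],
    "resources": [
        {"uri": "rag://dataset/<dataset_id>", "title": "Team knowledge base"}
    ],
    "max_search_results": 3,
    "report_style": "academic",
    "auto_accepted_plan": True,
}
# POST this JSON body to the chat streaming endpoint (e.g. /api/chat/stream).
```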
class TTSRequest(BaseModel):
@@ -82,3 +94,13 @@ class GenerateProseRequest(BaseModel):
command: Optional[str] = Field(
"", description="The user custom command of the prose writer"
)
class EnhancePromptRequest(BaseModel):
prompt: str = Field(..., description="The original prompt to enhance")
context: Optional[str] = Field(
"", description="Additional context about the intended use"
)
report_style: Optional[str] = Field(
"academic", description="The style of the report"
)
+28
@@ -0,0 +1,28 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
from pydantic import BaseModel, Field
from src.rag.retriever import Resource
class RAGConfigResponse(BaseModel):
"""Response model for RAG config."""
provider: str | None = Field(
None, description="The provider of the RAG, default is ragflow"
)
class RAGResourceRequest(BaseModel):
"""Request model for RAG resource."""
query: str | None = Field(
None, description="The query of the resource need to be searched"
)
class RAGResourcesResponse(BaseModel):
"""Response model for RAG resources."""
resources: list[Resource] = Field(..., description="The resources of the RAG")
+4 -18
@@ -5,28 +5,14 @@ import os
from .crawl import crawl_tool
from .python_repl import python_repl_tool
from .search import (
tavily_search_tool,
duckduckgo_search_tool,
brave_search_tool,
arxiv_search_tool,
)
from .retriever import get_retriever_tool
from .search import get_web_search_tool
from .tts import VolcengineTTS
from src.config import SELECTED_SEARCH_ENGINE, SearchEngine
# Map search engine names to their respective tools
search_tool_mappings = {
SearchEngine.TAVILY.value: tavily_search_tool,
SearchEngine.DUCKDUCKGO.value: duckduckgo_search_tool,
SearchEngine.BRAVE_SEARCH.value: brave_search_tool,
SearchEngine.ARXIV.value: arxiv_search_tool,
}
web_search_tool = search_tool_mappings.get(SELECTED_SEARCH_ENGINE, tavily_search_tool)
__all__ = [
"crawl_tool",
"web_search_tool",
"python_repl_tool",
"get_web_search_tool",
"get_retriever_tool",
"VolcengineTTS",
]
+77
@@ -0,0 +1,77 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import logging
from typing import List, Optional, Type
from langchain_core.tools import BaseTool
from langchain_core.callbacks import (
AsyncCallbackManagerForToolRun,
CallbackManagerForToolRun,
)
from pydantic import BaseModel, Field
from src.config.tools import SELECTED_RAG_PROVIDER
from src.rag import Document, Retriever, Resource, build_retriever
logger = logging.getLogger(__name__)
class RetrieverInput(BaseModel):
keywords: str = Field(description="search keywords to look up")
class RetrieverTool(BaseTool):
name: str = "local_search_tool"
description: str = (
"Useful for retrieving information from the file with `rag://` uri prefix, it should be higher priority than the web search or writing code. Input should be a search keywords."
)
args_schema: Type[BaseModel] = RetrieverInput
retriever: Retriever = Field(default_factory=Retriever)
resources: list[Resource] = Field(default_factory=list)
def _run(
self,
keywords: str,
run_manager: Optional[CallbackManagerForToolRun] = None,
) -> list[Document]:
logger.info(
f"Retriever tool query: {keywords}", extra={"resources": self.resources}
)
documents = self.retriever.query_relevant_documents(keywords, self.resources)
if not documents:
return "No results found from the local knowledge base."
return [doc.to_dict() for doc in documents]
async def _arun(
self,
keywords: str,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
) -> list[Document]:
return self._run(keywords, run_manager.get_sync() if run_manager else None)
def get_retriever_tool(resources: List[Resource]) -> RetrieverTool | None:
if not resources:
return None
logger.info(f"create retriever tool: {SELECTED_RAG_PROVIDER}")
retriever = build_retriever()
if not retriever:
return None
return RetrieverTool(retriever=retriever, resources=resources)
if __name__ == "__main__":
resources = [
Resource(
uri="rag://dataset/1c7e2ea4362911f09a41c290d4b6a7f0",
title="西游记",
description="西游记是中国古代四大名著之一,讲述了唐僧师徒四人西天取经的故事。",
)
]
retriever_tool = get_retriever_tool(resources)
print(retriever_tool.name)
print(retriever_tool.description)
print(retriever_tool.args)
print(retriever_tool.invoke("三打白骨精"))
+43 -32
@@ -9,7 +9,7 @@ from langchain_community.tools import BraveSearch, DuckDuckGoSearchResults
from langchain_community.tools.arxiv import ArxivQueryRun
from langchain_community.utilities import ArxivAPIWrapper, BraveSearchWrapper
from src.config import SEARCH_MAX_RESULTS
from src.config import SearchEngine, SELECTED_SEARCH_ENGINE
from src.tools.tavily_search.tavily_search_results_with_images import (
TavilySearchResultsWithImages,
)
@@ -18,41 +18,52 @@ from src.tools.decorators import create_logged_tool
logger = logging.getLogger(__name__)
# Create logged versions of the search tools
LoggedTavilySearch = create_logged_tool(TavilySearchResultsWithImages)
tavily_search_tool = LoggedTavilySearch(
name="web_search",
max_results=SEARCH_MAX_RESULTS,
include_raw_content=True,
include_images=True,
include_image_descriptions=True,
)
LoggedDuckDuckGoSearch = create_logged_tool(DuckDuckGoSearchResults)
duckduckgo_search_tool = LoggedDuckDuckGoSearch(
name="web_search", max_results=SEARCH_MAX_RESULTS
)
LoggedBraveSearch = create_logged_tool(BraveSearch)
brave_search_tool = LoggedBraveSearch(
name="web_search",
search_wrapper=BraveSearchWrapper(
api_key=os.getenv("BRAVE_SEARCH_API_KEY", ""),
search_kwargs={"count": SEARCH_MAX_RESULTS},
),
)
LoggedArxivSearch = create_logged_tool(ArxivQueryRun)
arxiv_search_tool = LoggedArxivSearch(
name="web_search",
api_wrapper=ArxivAPIWrapper(
top_k_results=SEARCH_MAX_RESULTS,
load_max_docs=SEARCH_MAX_RESULTS,
load_all_available_meta=True,
),
)
# Get the selected search tool
def get_web_search_tool(max_search_results: int):
if SELECTED_SEARCH_ENGINE == SearchEngine.TAVILY.value:
return LoggedTavilySearch(
name="web_search",
max_results=max_search_results,
include_raw_content=True,
include_images=True,
include_image_descriptions=True,
)
elif SELECTED_SEARCH_ENGINE == SearchEngine.DUCKDUCKGO.value:
return LoggedDuckDuckGoSearch(name="web_search", max_results=max_search_results)
elif SELECTED_SEARCH_ENGINE == SearchEngine.BRAVE_SEARCH.value:
return LoggedBraveSearch(
name="web_search",
search_wrapper=BraveSearchWrapper(
api_key=os.getenv("BRAVE_SEARCH_API_KEY", ""),
search_kwargs={"count": max_search_results},
),
)
elif SELECTED_SEARCH_ENGINE == SearchEngine.ARXIV.value:
return LoggedArxivSearch(
name="web_search",
api_wrapper=ArxivAPIWrapper(
top_k_results=max_search_results,
load_max_docs=max_search_results,
load_all_available_meta=True,
),
)
else:
raise ValueError(f"Unsupported search engine: {SELECTED_SEARCH_ENGINE}")
if __name__ == "__main__":
results = LoggedDuckDuckGoSearch(
name="web_search", max_results=3, output_format="list"
)
print(results.name)
print(results.description)
print(results.args)
# .invoke("cute panda")
# print(json.dumps(results, indent=2, ensure_ascii=False))
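For illustration, a minimal sketch of how the refactored factory is consumed; engine selection still comes from SELECTED_SEARCH_ENGINE at import time, and the max_search_results value here is arbitrary:

# Hedged usage sketch; assumes a supported search engine is configured,
# otherwise get_web_search_tool raises ValueError.
from src.tools.search import get_web_search_tool

web_search = get_web_search_tool(max_search_results=5)
# Every engine variant is registered under the same tool name, "web_search".
print(web_search.name)
results = web_search.invoke("cute panda")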
@@ -70,7 +70,7 @@ class EnhancedTavilySearchAPIWrapper(OriginalTavilySearchAPIWrapper):
"include_images": include_images,
"include_image_descriptions": include_image_descriptions,
}
async with aiohttp.ClientSession(trust_env=True) as session:
async with session.post(f"{TAVILY_API_URL}/search", json=params) as res:
if res.status == 200:
data = await res.text()
+2 -1
View File
@@ -102,7 +102,8 @@ class VolcengineTTS:
}
try:
logger.debug(f"Sending TTS request for text: {text[:50]}...")
sanitized_text = text.replace("\r\n", "").replace("\n", "")
logger.debug(f"Sending TTS request for text: {sanitized_text[:50]}...")
response = requests.post(
self.api_url, json.dumps(request_json), headers=self.header
)
+24
View File
@@ -0,0 +1,24 @@
#!/usr/bin/env python3
"""
This script manually patches sys.modules to fix the LLM import issue
so that tests can run without requiring LLM configuration.
"""
import sys
from unittest.mock import MagicMock
# Create mocks
mock_llm = MagicMock()
mock_llm.invoke.return_value = "Mock LLM response"
# Create a mock module for llm.py
mock_llm_module = MagicMock()
mock_llm_module.get_llm_by_type = lambda llm_type: mock_llm
mock_llm_module.basic_llm = mock_llm
mock_llm_module._create_llm_use_conf = lambda llm_type, conf: mock_llm
# Set the mock module
sys.modules["src.llms.llm"] = mock_llm_module
print("Successfully patched LLM module. You can now run your tests.")
print("Example: uv run pytest tests/test_types.py -v")
+125
View File
@@ -0,0 +1,125 @@
import json
import pytest
from unittest.mock import patch, MagicMock
# Mock get_llm_by_type here to avoid a ValueError from missing LLM configuration
with patch("src.llms.llm.get_llm_by_type", return_value=MagicMock()):
from langgraph.types import Command
from src.graph.nodes import background_investigation_node
from src.config import SearchEngine
from langchain_core.messages import HumanMessage
# Mock data
MOCK_SEARCH_RESULTS = [
{"title": "Test Title 1", "content": "Test Content 1"},
{"title": "Test Title 2", "content": "Test Content 2"},
]
@pytest.fixture
def mock_state():
return {
"messages": [HumanMessage(content="test query")],
"research_topic": "test query",
"background_investigation_results": None,
}
@pytest.fixture
def mock_configurable():
mock = MagicMock()
mock.max_search_results = 5
return mock
@pytest.fixture
def mock_config():
# Return a MagicMock or a dict here, depending on what the test needs
return MagicMock()
@pytest.fixture
def patch_config_from_runnable_config(mock_configurable):
with patch(
"src.graph.nodes.Configuration.from_runnable_config",
return_value=mock_configurable,
):
yield
@pytest.fixture
def mock_tavily_search():
with patch("src.graph.nodes.LoggedTavilySearch") as mock:
instance = mock.return_value
instance.invoke.return_value = [
{"title": "Test Title 1", "content": "Test Content 1"},
{"title": "Test Title 2", "content": "Test Content 2"},
]
yield mock
@pytest.fixture
def mock_web_search_tool():
with patch("src.graph.nodes.get_web_search_tool") as mock:
instance = mock.return_value
instance.invoke.return_value = [
{"title": "Test Title 1", "content": "Test Content 1"},
{"title": "Test Title 2", "content": "Test Content 2"},
]
yield mock
@pytest.mark.parametrize("search_engine", [SearchEngine.TAVILY.value, "other"])
def test_background_investigation_node_tavily(
mock_state,
mock_tavily_search,
mock_web_search_tool,
search_engine,
patch_config_from_runnable_config,
mock_config,
):
"""Test background_investigation_node with Tavily search engine"""
with patch("src.graph.nodes.SELECTED_SEARCH_ENGINE", search_engine):
result = background_investigation_node(mock_state, mock_config)
# Verify the result structure
assert isinstance(result, dict)
# Verify the update contains background_investigation_results
assert "background_investigation_results" in result
# Parse and verify the JSON content
results = result["background_investigation_results"]
if search_engine == SearchEngine.TAVILY.value:
mock_tavily_search.return_value.invoke.assert_called_once_with("test query")
assert (
results
== "## Test Title 1\n\nTest Content 1\n\n## Test Title 2\n\nTest Content 2"
)
else:
mock_web_search_tool.return_value.invoke.assert_called_once_with(
"test query"
)
assert len(json.loads(results)) == 2
def test_background_investigation_node_malformed_response(
mock_state, mock_tavily_search, patch_config_from_runnable_config, mock_config
):
"""Test background_investigation_node with malformed Tavily response"""
with patch("src.graph.nodes.SELECTED_SEARCH_ENGINE", SearchEngine.TAVILY.value):
# Mock a malformed response
mock_tavily_search.return_value.invoke.return_value = "invalid response"
result = background_investigation_node(mock_state, mock_config)
# Verify the result structure
assert isinstance(result, dict)
# Verify the update contains background_investigation_results
assert "background_investigation_results" in result
# Parse and verify the JSON content
results = result["background_investigation_results"]
assert json.loads(results) is None
+37
View File
@@ -106,3 +106,40 @@ def test_current_time_format():
assert any(
line.strip().startswith("CURRENT_TIME:") for line in system_content.split("\n")
)
def test_apply_prompt_template_reporter():
"""Test reporter template rendering with different styles and locale"""
test_state_news = {
"messages": [],
"task": "test reporter task",
"workspace_context": "test reporter context",
"report_style": "news",
"locale": "en-US",
}
messages_news = apply_prompt_template("reporter", test_state_news)
system_content_news = messages_news[0]["content"]
assert "NBC News" in system_content_news
test_state_social_media_en = {
"messages": [],
"task": "test reporter task",
"workspace_context": "test reporter context",
"report_style": "social_media",
"locale": "en-US",
}
messages_default = apply_prompt_template("reporter", test_state_social_media_en)
system_content_default = messages_default[0]["content"]
assert "Twitter/X" in system_content_default
test_state_social_media_cn = {
"messages": [],
"task": "test reporter task",
"workspace_context": "test reporter context",
"report_style": "social_media",
"locale": "zh-CN",
}
messages_cn = apply_prompt_template("reporter", test_state_social_media_cn)
system_content_cn = messages_cn[0]["content"]
assert "小红书" in system_content_cn
+134
View File
@@ -0,0 +1,134 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import pytest
import sys
import os
from typing import Annotated, List, Optional
# Import MessagesState directly from langgraph rather than through our application
from langgraph.graph import MessagesState
# Create stub versions of Plan/Step/StepType to avoid dependencies
class StepType:
RESEARCH = "research"
PROCESSING = "processing"
class Step:
def __init__(self, need_search, title, description, step_type):
self.need_search = need_search
self.title = title
self.description = description
self.step_type = step_type
class Plan:
def __init__(self, locale, has_enough_context, thought, title, steps):
self.locale = locale
self.has_enough_context = has_enough_context
self.thought = thought
self.title = title
self.steps = steps
# Import the actual State class by loading the module directly
# This avoids the cascade of imports that would normally happen
def load_state_class():
# Get the absolute path to the types.py file
src_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "src"))
types_path = os.path.join(src_dir, "graph", "types.py")
# Create a namespace for the module
import types
module_name = "src.graph.types_direct"
spec = types.ModuleType(module_name)
# Add the module to sys.modules to avoid import loops
sys.modules[module_name] = spec
# Set up the namespace with required imports
spec.__dict__["operator"] = __import__("operator")
spec.__dict__["Annotated"] = Annotated
spec.__dict__["MessagesState"] = MessagesState
spec.__dict__["Plan"] = Plan
# Execute the module code
with open(types_path, "r") as f:
module_code = f.read()
exec(module_code, spec.__dict__)
# Return the State class
return spec.State
# Load the actual State class
State = load_state_class()
def test_state_initialization():
"""Test that State class has correct default attribute definitions."""
# Test that the class has the expected attribute definitions
assert State.locale == "en-US"
assert State.observations == []
assert State.plan_iterations == 0
assert State.current_plan is None
assert State.final_report == ""
assert State.auto_accepted_plan is False
assert State.enable_background_investigation is True
assert State.background_investigation_results is None
# Verify state initialization
state = State(messages=[])
assert "messages" in state
# Without explicitly passing attributes, they're not in the state
assert "locale" not in state
assert "observations" not in state
def test_state_with_custom_values():
"""Test that State can be initialized with custom values."""
test_step = Step(
need_search=True,
title="Test Step",
description="Step description",
step_type=StepType.RESEARCH,
)
test_plan = Plan(
locale="en-US",
has_enough_context=False,
thought="Test thought",
title="Test Plan",
steps=[test_step],
)
# Initialize state with custom values and required messages field
state = State(
messages=[],
locale="fr-FR",
observations=["Observation 1"],
plan_iterations=2,
current_plan=test_plan,
final_report="Test report",
auto_accepted_plan=True,
enable_background_investigation=False,
background_investigation_results="Test results",
)
# Access state keys - these are explicitly initialized
assert state["locale"] == "fr-FR"
assert state["observations"] == ["Observation 1"]
assert state["plan_iterations"] == 2
assert state["current_plan"].title == "Test Plan"
assert state["current_plan"].thought == "Test thought"
assert len(state["current_plan"].steps) == 1
assert state["current_plan"].steps[0].title == "Test Step"
assert state["final_report"] == "Test report"
assert state["auto_accepted_plan"] is True
assert state["enable_background_investigation"] is False
assert state["background_investigation_results"] == "Test results"
+95
View File
@@ -0,0 +1,95 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import os
import pytest
import sys
import types
from pathlib import Path
import builtins
import importlib
from src.config.configuration import Configuration
# Patch sys.path so relative import works
# Patch Resource for import
mock_resource = type("Resource", (), {})
# Patch src.rag.retriever.Resource for import
module_name = "src.rag.retriever"
if module_name not in sys.modules:
retriever_mod = types.ModuleType(module_name)
retriever_mod.Resource = mock_resource
sys.modules[module_name] = retriever_mod
# Relative import of Configuration
def test_default_configuration():
config = Configuration()
assert config.resources == []
assert config.max_plan_iterations == 1
assert config.max_step_num == 3
assert config.max_search_results == 3
assert config.mcp_settings is None
def test_from_runnable_config_with_config_dict(monkeypatch):
config_dict = {
"configurable": {
"max_plan_iterations": 5,
"max_step_num": 7,
"max_search_results": 10,
"mcp_settings": {"foo": "bar"},
}
}
config = Configuration.from_runnable_config(config_dict)
assert config.max_plan_iterations == 5
assert config.max_step_num == 7
assert config.max_search_results == 10
assert config.mcp_settings == {"foo": "bar"}
def test_from_runnable_config_with_env_override(monkeypatch):
monkeypatch.setenv("MAX_PLAN_ITERATIONS", "9")
monkeypatch.setenv("MAX_STEP_NUM", "11")
config_dict = {
"configurable": {
"max_plan_iterations": 2,
"max_step_num": 3,
"max_search_results": 4,
}
}
config = Configuration.from_runnable_config(config_dict)
# Environment variables take precedence and are strings
assert config.max_plan_iterations == "9"
assert config.max_step_num == "11"
assert config.max_search_results == 4 # not overridden
# Clean up
monkeypatch.delenv("MAX_PLAN_ITERATIONS")
monkeypatch.delenv("MAX_STEP_NUM")
def test_from_runnable_config_with_none_and_falsy(monkeypatch):
config_dict = {
"configurable": {
"max_plan_iterations": None,
"max_step_num": 0, # falsy, should be skipped
"max_search_results": "",
}
}
config = Configuration.from_runnable_config(config_dict)
# Should fall back to defaults for skipped/falsy values
assert config.max_plan_iterations == 1
assert config.max_step_num == 3
assert config.max_search_results == 3
def test_from_runnable_config_with_no_config():
config = Configuration.from_runnable_config()
assert config.max_plan_iterations == 1
assert config.max_step_num == 3
assert config.max_search_results == 3
assert config.resources == []
assert config.mcp_settings is None
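A compact sketch of the precedence these configuration tests pin down: field names are upper-cased to form the environment variable names, environment variables win over the configurable dict, and env values are not cast, so they remain strings:

# Hedged sketch of Configuration.from_runnable_config precedence, per the tests above.
import os
from src.config.configuration import Configuration

os.environ["MAX_PLAN_ITERATIONS"] = "9"  # env var wins over the configurable dict
config = Configuration.from_runnable_config(
    {"configurable": {"max_plan_iterations": 2, "max_search_results": 4}}
)
assert config.max_plan_iterations == "9"  # string, taken from the environment
assert config.max_search_results == 4     # no env override, configurable value kept
del os.environ["MAX_PLAN_ITERATIONS"]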
+83
View File
@@ -0,0 +1,83 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import os
import tempfile
import yaml
import pytest
from src.config.loader import load_yaml_config, process_dict, replace_env_vars
def test_replace_env_vars_with_env(monkeypatch):
monkeypatch.setenv("TEST_ENV", "env_value")
assert replace_env_vars("$TEST_ENV") == "env_value"
def test_replace_env_vars_without_env(monkeypatch):
monkeypatch.delenv("NOT_SET_ENV", raising=False)
assert replace_env_vars("$NOT_SET_ENV") == "NOT_SET_ENV"
def test_replace_env_vars_non_string():
assert replace_env_vars(123) == 123
def test_replace_env_vars_regular_string():
assert replace_env_vars("no_env") == "no_env"
def test_process_dict_nested(monkeypatch):
monkeypatch.setenv("FOO", "bar")
config = {"a": "$FOO", "b": {"c": "$FOO", "d": 42, "e": "$NOT_SET_ENV"}}
processed = process_dict(config)
assert processed["a"] == "bar"
assert processed["b"]["c"] == "bar"
assert processed["b"]["d"] == 42
assert processed["b"]["e"] == "NOT_SET_ENV"
def test_process_dict_empty():
assert process_dict({}) == {}
def test_load_yaml_config_file_not_exist():
assert load_yaml_config("non_existent_file.yaml") == {}
def test_load_yaml_config(monkeypatch):
monkeypatch.setenv("MY_ENV", "my_value")
yaml_content = """
key1: value1
key2: $MY_ENV
nested:
key3: $MY_ENV
key4: 123
"""
with tempfile.NamedTemporaryFile("w+", delete=False) as tmp:
tmp.write(yaml_content)
tmp_path = tmp.name
try:
config = load_yaml_config(tmp_path)
assert config["key1"] == "value1"
assert config["key2"] == "my_value"
assert config["nested"]["key3"] == "my_value"
assert config["nested"]["key4"] == 123
finally:
os.remove(tmp_path)
def test_load_yaml_config_cache(monkeypatch):
monkeypatch.setenv("CACHE_ENV", "cache_value")
yaml_content = "foo: $CACHE_ENV"
with tempfile.NamedTemporaryFile("w+", delete=False) as tmp:
tmp.write(yaml_content)
tmp_path = tmp.name
try:
config1 = load_yaml_config(tmp_path)
config2 = load_yaml_config(tmp_path)
assert config1 is config2 # Should be cached (same object)
assert config1["foo"] == "cache_value"
finally:
os.remove(tmp_path)
+74
View File
@@ -0,0 +1,74 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import pytest
from src.crawler.article import Article
class DummyMarkdownify:
"""A dummy markdownify replacement for patching if needed."""
@staticmethod
def markdownify(html):
return html
def test_to_markdown_includes_title(monkeypatch):
article = Article("Test Title", "<p>Hello <b>world</b>!</p>")
result = article.to_markdown(including_title=True)
assert result.startswith("# Test Title")
assert "Hello" in result
def test_to_markdown_excludes_title():
article = Article("Test Title", "<p>Hello <b>world</b>!</p>")
result = article.to_markdown(including_title=False)
assert not result.startswith("# Test Title")
assert "Hello" in result
def test_to_message_with_text_only():
article = Article("Test Title", "<p>Hello world!</p>")
article.url = "https://example.com/"
result = article.to_message()
assert isinstance(result, list)
assert any(item["type"] == "text" for item in result)
assert all("type" in item for item in result)
def test_to_message_with_image(monkeypatch):
html = '<p>Intro</p><img src="img/pic.png"/>'
article = Article("Title", html)
article.url = "https://host.com/path/"
# The markdownify library will convert <img> to markdown image syntax
result = article.to_message()
# Should have both text and image_url types
types = [item["type"] for item in result]
assert "image_url" in types
assert "text" in types
# Check that the image_url is correctly joined
image_items = [item for item in result if item["type"] == "image_url"]
assert image_items
assert image_items[0]["image_url"]["url"] == "https://host.com/path/img/pic.png"
def test_to_message_multiple_images():
html = '<p>Start</p><img src="a.png"/><p>Mid</p><img src="b.jpg"/>End'
article = Article("Title", html)
article.url = "http://x/"
result = article.to_message()
image_urls = [
item["image_url"]["url"] for item in result if item["type"] == "image_url"
]
assert "http://x/a.png" in image_urls
assert "http://x/b.jpg" in image_urls
text_items = [item for item in result if item["type"] == "text"]
assert any("Start" in item["text"] for item in text_items)
assert any("Mid" in item["text"] for item in text_items)
def test_to_message_handles_empty_html():
article = Article("Empty", "")
article.url = "http://test/"
result = article.to_message()
assert isinstance(result, list)
assert result[0]["type"] == "text"
+72
View File
@@ -0,0 +1,72 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import pytest
import src.crawler as crawler_module
from src.crawler import Crawler
def test_crawler_sets_article_url(monkeypatch):
"""Test that the crawler sets the article.url field correctly."""
class DummyArticle:
def __init__(self):
self.url = None
def to_markdown(self):
return "# Dummy"
class DummyJinaClient:
def crawl(self, url, return_format=None):
return "<html>dummy</html>"
class DummyReadabilityExtractor:
def extract_article(self, html):
return DummyArticle()
monkeypatch.setattr("src.crawler.crawler.JinaClient", DummyJinaClient)
monkeypatch.setattr(
"src.crawler.crawler.ReadabilityExtractor", DummyReadabilityExtractor
)
crawler = crawler_module.Crawler()
url = "http://example.com"
article = crawler.crawl(url)
assert article.url == url
assert article.to_markdown() == "# Dummy"
def test_crawler_calls_dependencies(monkeypatch):
"""Test that Crawler calls JinaClient.crawl and ReadabilityExtractor.extract_article."""
calls = {}
class DummyJinaClient:
def crawl(self, url, return_format=None):
calls["jina"] = (url, return_format)
return "<html>dummy</html>"
class DummyReadabilityExtractor:
def extract_article(self, html):
calls["extractor"] = html
class DummyArticle:
url = None
def to_markdown(self):
return "# Dummy"
return DummyArticle()
monkeypatch.setattr("src.crawler.crawler.JinaClient", DummyJinaClient)
monkeypatch.setattr(
"src.crawler.crawler.ReadabilityExtractor", DummyReadabilityExtractor
)
crawler = crawler_module.Crawler()
url = "http://example.com"
crawler.crawl(url)
assert "jina" in calls
assert calls["jina"][0] == url
assert calls["jina"][1] == "html"
assert "extractor" in calls
assert calls["extractor"] == "<html>dummy</html>"
+70
View File
@@ -0,0 +1,70 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import os
import types
import pytest
from src.llms import llm
class DummyChatOpenAI:
def __init__(self, **kwargs):
self.kwargs = kwargs
def invoke(self, msg):
return f"Echo: {msg}"
@pytest.fixture(autouse=True)
def patch_chat_openai(monkeypatch):
monkeypatch.setattr(llm, "ChatOpenAI", DummyChatOpenAI)
@pytest.fixture
def dummy_conf():
return {
"BASIC_MODEL": {"api_key": "test_key", "base_url": "http://test"},
"REASONING_MODEL": {"api_key": "reason_key"},
"VISION_MODEL": {"api_key": "vision_key"},
}
def test_get_env_llm_conf(monkeypatch):
monkeypatch.setenv("BASIC_MODEL__API_KEY", "env_key")
monkeypatch.setenv("BASIC_MODEL__BASE_URL", "http://env")
conf = llm._get_env_llm_conf("basic")
assert conf["api_key"] == "env_key"
assert conf["base_url"] == "http://env"
def test_create_llm_use_conf_merges_env(monkeypatch, dummy_conf):
monkeypatch.setenv("BASIC_MODEL__API_KEY", "env_key")
result = llm._create_llm_use_conf("basic", dummy_conf)
assert isinstance(result, DummyChatOpenAI)
assert result.kwargs["api_key"] == "env_key"
assert result.kwargs["base_url"] == "http://test"
def test_create_llm_use_conf_invalid_type(dummy_conf):
with pytest.raises(ValueError):
llm._create_llm_use_conf("unknown", dummy_conf)
def test_create_llm_use_conf_empty_conf(monkeypatch):
with pytest.raises(ValueError):
llm._create_llm_use_conf("basic", {})
def test_get_llm_by_type_caches(monkeypatch, dummy_conf):
called = {}
def fake_load_yaml_config(path):
called["called"] = True
return dummy_conf
monkeypatch.setattr(llm, "load_yaml_config", fake_load_yaml_config)
llm._llm_cache.clear()
inst1 = llm.get_llm_by_type("basic")
inst2 = llm.get_llm_by_type("basic")
assert inst1 is inst2
assert called["called"]
+2
View File
@@ -0,0 +1,2 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
@@ -0,0 +1,2 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
@@ -0,0 +1,156 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import pytest
from unittest.mock import patch, MagicMock
from src.prompt_enhancer.graph.builder import build_graph
from src.prompt_enhancer.graph.state import PromptEnhancerState
from src.config.report_style import ReportStyle
class TestBuildGraph:
"""Test cases for build_graph function."""
@patch("src.prompt_enhancer.graph.builder.StateGraph")
def test_build_graph_structure(self, mock_state_graph):
"""Test that build_graph creates the correct graph structure."""
mock_builder = MagicMock()
mock_compiled_graph = MagicMock()
mock_state_graph.return_value = mock_builder
mock_builder.compile.return_value = mock_compiled_graph
result = build_graph()
# Verify StateGraph was created with correct state type
mock_state_graph.assert_called_once_with(PromptEnhancerState)
# Verify entry point was set
mock_builder.set_entry_point.assert_called_once_with("enhancer")
# Verify finish point was set
mock_builder.set_finish_point.assert_called_once_with("enhancer")
# Verify graph was compiled
mock_builder.compile.assert_called_once()
# Verify return value
assert result == mock_compiled_graph
@patch("src.prompt_enhancer.graph.builder.StateGraph")
@patch("src.prompt_enhancer.graph.builder.prompt_enhancer_node")
def test_build_graph_node_function(self, mock_enhancer_node, mock_state_graph):
"""Test that the correct node function is added to the graph."""
mock_builder = MagicMock()
mock_compiled_graph = MagicMock()
mock_state_graph.return_value = mock_builder
mock_builder.compile.return_value = mock_compiled_graph
result = build_graph()
# Verify the correct node function was added
mock_builder.add_node.assert_called_once_with("enhancer", mock_enhancer_node)
def test_build_graph_returns_compiled_graph(self):
"""Test that build_graph returns a compiled graph object."""
with patch("src.prompt_enhancer.graph.builder.StateGraph") as mock_state_graph:
mock_builder = MagicMock()
mock_compiled_graph = MagicMock()
mock_state_graph.return_value = mock_builder
mock_builder.compile.return_value = mock_compiled_graph
result = build_graph()
assert result is mock_compiled_graph
@patch("src.prompt_enhancer.graph.builder.StateGraph")
def test_build_graph_call_sequence(self, mock_state_graph):
"""Test that build_graph calls methods in the correct sequence."""
mock_builder = MagicMock()
mock_compiled_graph = MagicMock()
mock_state_graph.return_value = mock_builder
mock_builder.compile.return_value = mock_compiled_graph
# Track call order
call_order = []
def track_add_node(*args, **kwargs):
call_order.append("add_node")
def track_set_entry_point(*args, **kwargs):
call_order.append("set_entry_point")
def track_set_finish_point(*args, **kwargs):
call_order.append("set_finish_point")
def track_compile(*args, **kwargs):
call_order.append("compile")
return mock_compiled_graph
mock_builder.add_node.side_effect = track_add_node
mock_builder.set_entry_point.side_effect = track_set_entry_point
mock_builder.set_finish_point.side_effect = track_set_finish_point
mock_builder.compile.side_effect = track_compile
build_graph()
# Verify the correct call sequence
expected_order = ["add_node", "set_entry_point", "set_finish_point", "compile"]
assert call_order == expected_order
def test_build_graph_integration(self):
"""Integration test to verify the graph can be built without mocking."""
# This test verifies that all imports and dependencies are correct
try:
graph = build_graph()
assert graph is not None
# The graph should be a compiled LangGraph object
assert hasattr(graph, "invoke") or hasattr(graph, "stream")
except ImportError as e:
pytest.skip(f"Skipping integration test due to missing dependencies: {e}")
except Exception as e:
# If there are configuration issues (like missing LLM config),
# we still consider the test successful if the graph structure is built
if "LLM" in str(e) or "configuration" in str(e).lower():
pytest.skip(
f"Skipping integration test due to configuration issues: {e}"
)
else:
raise
@patch("src.prompt_enhancer.graph.builder.StateGraph")
def test_build_graph_single_node_workflow(self, mock_state_graph):
"""Test that the graph is configured as a single-node workflow."""
mock_builder = MagicMock()
mock_compiled_graph = MagicMock()
mock_state_graph.return_value = mock_builder
mock_builder.compile.return_value = mock_compiled_graph
build_graph()
# Verify only one node is added
assert mock_builder.add_node.call_count == 1
# Verify entry and finish points are the same node
mock_builder.set_entry_point.assert_called_once_with("enhancer")
mock_builder.set_finish_point.assert_called_once_with("enhancer")
@patch("src.prompt_enhancer.graph.builder.StateGraph")
def test_build_graph_state_type(self, mock_state_graph):
"""Test that the graph is initialized with the correct state type."""
mock_builder = MagicMock()
mock_compiled_graph = MagicMock()
mock_state_graph.return_value = mock_builder
mock_builder.compile.return_value = mock_compiled_graph
build_graph()
# Verify StateGraph was initialized with PromptEnhancerState
args, kwargs = mock_state_graph.call_args
assert args[0] == PromptEnhancerState
@@ -0,0 +1,219 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import pytest
from unittest.mock import patch, MagicMock
from langchain.schema import HumanMessage, SystemMessage
from src.prompt_enhancer.graph.enhancer_node import prompt_enhancer_node
from src.prompt_enhancer.graph.state import PromptEnhancerState
from src.config.report_style import ReportStyle
@pytest.fixture
def mock_llm():
"""Mock LLM that returns a test response."""
llm = MagicMock()
llm.invoke.return_value = MagicMock(content="Enhanced test prompt")
return llm
@pytest.fixture
def mock_messages():
"""Mock messages returned by apply_prompt_template."""
return [
SystemMessage(content="System prompt template"),
HumanMessage(content="Test human message"),
]
class TestPromptEnhancerNode:
"""Test cases for prompt_enhancer_node function."""
@patch("src.prompt_enhancer.graph.enhancer_node.apply_prompt_template")
@patch("src.prompt_enhancer.graph.enhancer_node.get_llm_by_type")
@patch(
"src.prompt_enhancer.graph.enhancer_node.AGENT_LLM_MAP",
{"prompt_enhancer": "basic"},
)
def test_basic_prompt_enhancement(
self, mock_get_llm, mock_apply_template, mock_llm, mock_messages
):
"""Test basic prompt enhancement without context or report style."""
mock_get_llm.return_value = mock_llm
mock_apply_template.return_value = mock_messages
state = PromptEnhancerState(prompt="Write about AI")
result = prompt_enhancer_node(state)
# Verify LLM was called
mock_get_llm.assert_called_once_with("basic")
mock_llm.invoke.assert_called_once_with(mock_messages)
# Verify apply_prompt_template was called correctly
mock_apply_template.assert_called_once()
call_args = mock_apply_template.call_args
assert call_args[0][0] == "prompt_enhancer/prompt_enhancer"
assert "messages" in call_args[0][1]
assert "report_style" in call_args[0][1]
# Verify result
assert result == {"output": "Enhanced test prompt"}
@patch("src.prompt_enhancer.graph.enhancer_node.apply_prompt_template")
@patch("src.prompt_enhancer.graph.enhancer_node.get_llm_by_type")
@patch(
"src.prompt_enhancer.graph.enhancer_node.AGENT_LLM_MAP",
{"prompt_enhancer": "basic"},
)
def test_prompt_enhancement_with_report_style(
self, mock_get_llm, mock_apply_template, mock_llm, mock_messages
):
"""Test prompt enhancement with report style."""
mock_get_llm.return_value = mock_llm
mock_apply_template.return_value = mock_messages
state = PromptEnhancerState(
prompt="Write about AI", report_style=ReportStyle.ACADEMIC
)
result = prompt_enhancer_node(state)
# Verify apply_prompt_template was called with report_style
mock_apply_template.assert_called_once()
call_args = mock_apply_template.call_args
assert call_args[0][0] == "prompt_enhancer/prompt_enhancer"
assert call_args[0][1]["report_style"] == ReportStyle.ACADEMIC
# Verify result
assert result == {"output": "Enhanced test prompt"}
@patch("src.prompt_enhancer.graph.enhancer_node.apply_prompt_template")
@patch("src.prompt_enhancer.graph.enhancer_node.get_llm_by_type")
@patch(
"src.prompt_enhancer.graph.enhancer_node.AGENT_LLM_MAP",
{"prompt_enhancer": "basic"},
)
def test_prompt_enhancement_with_context(
self, mock_get_llm, mock_apply_template, mock_llm, mock_messages
):
"""Test prompt enhancement with additional context."""
mock_get_llm.return_value = mock_llm
mock_apply_template.return_value = mock_messages
state = PromptEnhancerState(
prompt="Write about AI", context="Focus on machine learning applications"
)
result = prompt_enhancer_node(state)
# Verify apply_prompt_template was called
mock_apply_template.assert_called_once()
call_args = mock_apply_template.call_args
# Check that the context was included in the human message
messages_arg = call_args[0][1]["messages"]
assert len(messages_arg) == 1
human_message = messages_arg[0]
assert isinstance(human_message, HumanMessage)
assert "Focus on machine learning applications" in human_message.content
assert result == {"output": "Enhanced test prompt"}
@patch("src.prompt_enhancer.graph.enhancer_node.apply_prompt_template")
@patch("src.prompt_enhancer.graph.enhancer_node.get_llm_by_type")
@patch(
"src.prompt_enhancer.graph.enhancer_node.AGENT_LLM_MAP",
{"prompt_enhancer": "basic"},
)
def test_error_handling(
self, mock_get_llm, mock_apply_template, mock_llm, mock_messages
):
"""Test error handling when LLM call fails."""
mock_get_llm.return_value = mock_llm
mock_apply_template.return_value = mock_messages
# Mock LLM to raise an exception
mock_llm.invoke.side_effect = Exception("LLM error")
state = PromptEnhancerState(prompt="Test prompt")
result = prompt_enhancer_node(state)
# Should return original prompt on error
assert result == {"output": "Test prompt"}
@patch("src.prompt_enhancer.graph.enhancer_node.apply_prompt_template")
@patch("src.prompt_enhancer.graph.enhancer_node.get_llm_by_type")
@patch(
"src.prompt_enhancer.graph.enhancer_node.AGENT_LLM_MAP",
{"prompt_enhancer": "basic"},
)
def test_template_error_handling(
self, mock_get_llm, mock_apply_template, mock_llm, mock_messages
):
"""Test error handling when template application fails."""
mock_get_llm.return_value = mock_llm
# Mock apply_prompt_template to raise an exception
mock_apply_template.side_effect = Exception("Template error")
state = PromptEnhancerState(prompt="Test prompt")
result = prompt_enhancer_node(state)
# Should return original prompt on error
assert result == {"output": "Test prompt"}
@patch("src.prompt_enhancer.graph.enhancer_node.apply_prompt_template")
@patch("src.prompt_enhancer.graph.enhancer_node.get_llm_by_type")
@patch(
"src.prompt_enhancer.graph.enhancer_node.AGENT_LLM_MAP",
{"prompt_enhancer": "basic"},
)
def test_prefix_removal(
self, mock_get_llm, mock_apply_template, mock_llm, mock_messages
):
"""Test that common prefixes are removed from LLM response."""
mock_get_llm.return_value = mock_llm
mock_apply_template.return_value = mock_messages
# Test different prefixes that should be removed
test_cases = [
"Enhanced Prompt: This is the enhanced prompt",
"Enhanced prompt: This is the enhanced prompt",
"Here's the enhanced prompt: This is the enhanced prompt",
"Here is the enhanced prompt: This is the enhanced prompt",
"**Enhanced Prompt**: This is the enhanced prompt",
"**Enhanced prompt**: This is the enhanced prompt",
]
for response_with_prefix in test_cases:
mock_llm.invoke.return_value = MagicMock(content=response_with_prefix)
state = PromptEnhancerState(prompt="Test prompt")
result = prompt_enhancer_node(state)
assert result == {"output": "This is the enhanced prompt"}
@patch("src.prompt_enhancer.graph.enhancer_node.apply_prompt_template")
@patch("src.prompt_enhancer.graph.enhancer_node.get_llm_by_type")
@patch(
"src.prompt_enhancer.graph.enhancer_node.AGENT_LLM_MAP",
{"prompt_enhancer": "basic"},
)
def test_whitespace_handling(
self, mock_get_llm, mock_apply_template, mock_llm, mock_messages
):
"""Test that whitespace is properly stripped from LLM response."""
mock_get_llm.return_value = mock_llm
mock_apply_template.return_value = mock_messages
# Mock LLM response with extra whitespace
mock_llm.invoke.return_value = MagicMock(
content=" \n\n Enhanced prompt \n\n "
)
state = PromptEnhancerState(prompt="Test prompt")
result = prompt_enhancer_node(state)
assert result == {"output": "Enhanced prompt"}
@@ -0,0 +1,108 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import pytest
from src.prompt_enhancer.graph.state import PromptEnhancerState
from src.config.report_style import ReportStyle
def test_prompt_enhancer_state_creation():
"""Test that PromptEnhancerState can be created with required fields."""
state = PromptEnhancerState(
prompt="Test prompt", context=None, report_style=None, output=None
)
assert state["prompt"] == "Test prompt"
assert state["context"] is None
assert state["report_style"] is None
assert state["output"] is None
def test_prompt_enhancer_state_with_all_fields():
"""Test PromptEnhancerState with all fields populated."""
state = PromptEnhancerState(
prompt="Write about AI",
context="Additional context about AI research",
report_style=ReportStyle.ACADEMIC,
output="Enhanced prompt about AI research",
)
assert state["prompt"] == "Write about AI"
assert state["context"] == "Additional context about AI research"
assert state["report_style"] == ReportStyle.ACADEMIC
assert state["output"] == "Enhanced prompt about AI research"
def test_prompt_enhancer_state_minimal():
"""Test PromptEnhancerState with only required prompt field."""
state = PromptEnhancerState(prompt="Minimal prompt")
assert state["prompt"] == "Minimal prompt"
# Optional fields should not be present if not specified
assert "context" not in state
assert "report_style" not in state
assert "output" not in state
def test_prompt_enhancer_state_with_different_report_styles():
"""Test PromptEnhancerState with different ReportStyle values."""
styles = [
ReportStyle.ACADEMIC,
ReportStyle.POPULAR_SCIENCE,
ReportStyle.NEWS,
ReportStyle.SOCIAL_MEDIA,
]
for style in styles:
state = PromptEnhancerState(prompt="Test prompt", report_style=style)
assert state["report_style"] == style
def test_prompt_enhancer_state_update():
"""Test updating PromptEnhancerState fields."""
state = PromptEnhancerState(prompt="Original prompt")
# Update with new fields
state.update(
{
"context": "New context",
"report_style": ReportStyle.NEWS,
"output": "Enhanced output",
}
)
assert state["prompt"] == "Original prompt"
assert state["context"] == "New context"
assert state["report_style"] == ReportStyle.NEWS
assert state["output"] == "Enhanced output"
def test_prompt_enhancer_state_get_method():
"""Test using get() method on PromptEnhancerState."""
state = PromptEnhancerState(prompt="Test prompt", report_style=ReportStyle.ACADEMIC)
# Test get with existing keys
assert state.get("prompt") == "Test prompt"
assert state.get("report_style") == ReportStyle.ACADEMIC
# Test get with non-existing keys
assert state.get("context") is None
assert state.get("output") is None
assert state.get("nonexistent", "default") == "default"
def test_prompt_enhancer_state_type_annotations():
"""Test that the state accepts correct types."""
# This test ensures the TypedDict structure is working correctly
state = PromptEnhancerState(
prompt="Test prompt",
context="Test context",
report_style=ReportStyle.POPULAR_SCIENCE,
output="Test output",
)
# Verify types
assert isinstance(state["prompt"], str)
assert isinstance(state["context"], str)
assert isinstance(state["report_style"], ReportStyle)
assert isinstance(state["output"], str)
+181
View File
@@ -0,0 +1,181 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import os
import pytest
import requests
from unittest.mock import patch, MagicMock
from src.rag.ragflow import RAGFlowProvider, parse_uri
# Dummy classes to mock dependencies
class DummyResource:
def __init__(self, uri, title="", description=""):
self.uri = uri
self.title = title
self.description = description
class DummyChunk:
def __init__(self, content, similarity):
self.content = content
self.similarity = similarity
class DummyDocument:
def __init__(self, id, title, chunks=None):
self.id = id
self.title = title
self.chunks = chunks or []
# Patch imports in ragflow.py to use dummy classes
@pytest.fixture(autouse=True)
def patch_imports(monkeypatch):
import src.rag.ragflow as ragflow
ragflow.Resource = DummyResource
ragflow.Chunk = DummyChunk
ragflow.Document = DummyDocument
yield
def test_parse_uri_valid():
uri = "rag://dataset/123#abc"
dataset_id, document_id = parse_uri(uri)
assert dataset_id == "123"
assert document_id == "abc"
def test_parse_uri_invalid():
with pytest.raises(ValueError):
parse_uri("http://dataset/123#abc")
def test_init_env_vars(monkeypatch):
monkeypatch.setenv("RAGFLOW_API_URL", "http://api")
monkeypatch.setenv("RAGFLOW_API_KEY", "key")
monkeypatch.delenv("RAGFLOW_PAGE_SIZE", raising=False)
provider = RAGFlowProvider()
assert provider.api_url == "http://api"
assert provider.api_key == "key"
assert provider.page_size == 10
def test_init_page_size(monkeypatch):
monkeypatch.setenv("RAGFLOW_API_URL", "http://api")
monkeypatch.setenv("RAGFLOW_API_KEY", "key")
monkeypatch.setenv("RAGFLOW_PAGE_SIZE", "5")
provider = RAGFlowProvider()
assert provider.page_size == 5
def test_init_missing_env(monkeypatch):
monkeypatch.delenv("RAGFLOW_API_URL", raising=False)
monkeypatch.setenv("RAGFLOW_API_KEY", "key")
with pytest.raises(ValueError):
RAGFlowProvider()
monkeypatch.setenv("RAGFLOW_API_URL", "http://api")
monkeypatch.delenv("RAGFLOW_API_KEY", raising=False)
with pytest.raises(ValueError):
RAGFlowProvider()
@patch("src.rag.ragflow.requests.post")
def test_query_relevant_documents_success(mock_post, monkeypatch):
monkeypatch.setenv("RAGFLOW_API_URL", "http://api")
monkeypatch.setenv("RAGFLOW_API_KEY", "key")
provider = RAGFlowProvider()
resource = DummyResource("rag://dataset/123#doc456")
mock_response = MagicMock()
mock_response.status_code = 200
mock_response.json.return_value = {
"data": {
"doc_aggs": [{"doc_id": "doc456", "doc_name": "Doc Title"}],
"chunks": [
{"document_id": "doc456", "content": "chunk text", "similarity": 0.9}
],
}
}
mock_post.return_value = mock_response
docs = provider.query_relevant_documents("query", [resource])
assert len(docs) == 1
assert docs[0].id == "doc456"
assert docs[0].title == "Doc Title"
assert len(docs[0].chunks) == 1
assert docs[0].chunks[0].content == "chunk text"
assert docs[0].chunks[0].similarity == 0.9
@patch("src.rag.ragflow.requests.post")
def test_query_relevant_documents_error(mock_post, monkeypatch):
monkeypatch.setenv("RAGFLOW_API_URL", "http://api")
monkeypatch.setenv("RAGFLOW_API_KEY", "key")
provider = RAGFlowProvider()
mock_response = MagicMock()
mock_response.status_code = 400
mock_response.text = "error"
mock_post.return_value = mock_response
with pytest.raises(Exception):
provider.query_relevant_documents("query", [])
@patch("src.rag.ragflow.requests.get")
def test_list_resources_success(mock_get, monkeypatch):
monkeypatch.setenv("RAGFLOW_API_URL", "http://api")
monkeypatch.setenv("RAGFLOW_API_KEY", "key")
provider = RAGFlowProvider()
mock_response = MagicMock()
mock_response.status_code = 200
mock_response.json.return_value = {
"data": [
{"id": "123", "name": "Dataset1", "description": "desc1"},
{"id": "456", "name": "Dataset2", "description": "desc2"},
]
}
mock_get.return_value = mock_response
resources = provider.list_resources()
assert len(resources) == 2
assert resources[0].uri == "rag://dataset/123"
assert resources[0].title == "Dataset1"
assert resources[0].description == "desc1"
assert resources[1].uri == "rag://dataset/456"
assert resources[1].title == "Dataset2"
assert resources[1].description == "desc2"
@patch("src.rag.ragflow.requests.get")
def test_list_resources_success(mock_get, monkeypatch):
monkeypatch.setenv("RAGFLOW_API_URL", "http://api")
monkeypatch.setenv("RAGFLOW_API_KEY", "key")
provider = RAGFlowProvider()
mock_response = MagicMock()
mock_response.status_code = 200
mock_response.json.return_value = {
"data": [
{"id": "123", "name": "Dataset1", "description": "desc1"},
{"id": "456", "name": "Dataset2", "description": "desc2"},
]
}
mock_get.return_value = mock_response
resources = provider.list_resources()
assert len(resources) == 2
assert resources[0].uri == "rag://dataset/123"
assert resources[0].title == "Dataset1"
assert resources[0].description == "desc1"
assert resources[1].uri == "rag://dataset/456"
assert resources[1].title == "Dataset2"
assert resources[1].description == "desc2"
@patch("src.rag.ragflow.requests.get")
def test_list_resources_error(mock_get, monkeypatch):
monkeypatch.setenv("RAGFLOW_API_URL", "http://api")
monkeypatch.setenv("RAGFLOW_API_KEY", "key")
provider = RAGFlowProvider()
mock_response = MagicMock()
mock_response.status_code = 500
mock_response.text = "fail"
mock_get.return_value = mock_response
with pytest.raises(Exception):
provider.list_resources()
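For reference, a minimal sketch tying together the two conventions these RAGFlow tests encode: the provider is configured purely through environment variables, and resources are addressed as rag://dataset/<dataset_id>#<optional document_id>:

# Hedged sketch; the values are illustrative, mirroring the fixtures above.
import os
from src.rag.ragflow import RAGFlowProvider, parse_uri

os.environ["RAGFLOW_API_URL"] = "http://api"
os.environ["RAGFLOW_API_KEY"] = "key"
provider = RAGFlowProvider()  # raises ValueError if either variable is missing

dataset_id, document_id = parse_uri("rag://dataset/123#abc")
assert (dataset_id, document_id) == ("123", "abc")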
+72
View File
@@ -0,0 +1,72 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT
import pytest
from src.rag.retriever import Chunk, Document, Resource, Retriever
def test_chunk_init():
chunk = Chunk(content="test content", similarity=0.9)
assert chunk.content == "test content"
assert chunk.similarity == 0.9
def test_document_init_and_to_dict():
chunk1 = Chunk(content="chunk1", similarity=0.8)
chunk2 = Chunk(content="chunk2", similarity=0.7)
doc = Document(
id="doc1", url="http://example.com", title="Title", chunks=[chunk1, chunk2]
)
assert doc.id == "doc1"
assert doc.url == "http://example.com"
assert doc.title == "Title"
assert doc.chunks == [chunk1, chunk2]
d = doc.to_dict()
assert d["id"] == "doc1"
assert d["content"] == "chunk1\n\nchunk2"
assert d["url"] == "http://example.com"
assert d["title"] == "Title"
def test_document_to_dict_optional_fields():
chunk = Chunk(content="only chunk", similarity=1.0)
doc = Document(id="doc2", chunks=[chunk])
d = doc.to_dict()
assert d["id"] == "doc2"
assert d["content"] == "only chunk"
assert "url" not in d
assert "title" not in d
def test_resource_model():
resource = Resource(uri="uri1", title="Resource Title")
assert resource.uri == "uri1"
assert resource.title == "Resource Title"
assert resource.description == ""
def test_resource_model_with_description():
resource = Resource(uri="uri2", title="Resource2", description="desc")
assert resource.description == "desc"
def test_retriever_abstract_methods():
class DummyRetriever(Retriever):
def list_resources(self, query=None):
return [Resource(uri="uri", title="title")]
def query_relevant_documents(self, query, resources=[]):
return [Document(id="id", chunks=[])]
retriever = DummyRetriever()
resources = retriever.list_resources()
assert isinstance(resources, list)
assert isinstance(resources[0], Resource)
docs = retriever.query_relevant_documents("query", resources)
assert isinstance(docs, list)
assert isinstance(docs[0], Document)
def test_retriever_cannot_instantiate():
with pytest.raises(TypeError):
Retriever()
Generated
+296 -46
View File
@@ -159,6 +159,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/09/71/54e999902aed72baf26bca0d50781b01838251a462612966e9fc4891eadd/black-25.1.0-py3-none-any.whl", hash = "sha256:95e8176dae143ba9097f351d174fdaf0ccd29efb414b362ae3fd72bf0f710717", size = 207646 },
]
[[package]]
name = "blockbuster"
version = "1.5.24"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "forbiddenfruit", marker = "implementation_name == 'cpython'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/35/c8/1e456a043179f2aef10bcaafea79f6d06c0ac45cc994767a54f680509f3b/blockbuster-1.5.24.tar.gz", hash = "sha256:97645775761a5d425666ec0bc99629b65c7eccdc2f770d2439850682567af4ec", size = 51245 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a7/c8/57a4c80e5abec29fa9406307a5277527f21210bfc6c2c61c3d8ded36c09b/blockbuster-1.5.24-py3-none-any.whl", hash = "sha256:e703497b55bc72af09d60d1cd746c2f3ba7ce0c446fa256be6ccda5e7d403520", size = 13214 },
]
[[package]]
name = "certifi"
version = "2025.1.31"
@@ -248,6 +260,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/7e/d4/7ebdbd03970677812aac39c869717059dbb71a4cfc033ca6e5221787892c/click-8.1.8-py3-none-any.whl", hash = "sha256:63c132bbbed01578a06712a2d1f497bb62d9c1c0d329b7903a866228027263b2", size = 98188 },
]
[[package]]
name = "cloudpickle"
version = "3.1.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/52/39/069100b84d7418bc358d81669d5748efb14b9cceacd2f9c75f550424132f/cloudpickle-3.1.1.tar.gz", hash = "sha256:b216fa8ae4019d5482a8ac3c95d8f6346115d8835911fd4aefd1a445e4242c64", size = 22113 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/7e/e8/64c37fadfc2816a7701fa8a6ed8d87327c7d54eacfbfb6edab14a2f2be75/cloudpickle-3.1.1-py3-none-any.whl", hash = "sha256:c8c5a44295039331ee9dad40ba100a9c7297b6f988e50e87ccdf3765a668350e", size = 20992 },
]
[[package]]
name = "colorama"
version = "0.4.6"
@@ -296,6 +317,41 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/fb/b2/f655700e1024dec98b10ebaafd0cedbc25e40e4abe62a3c8e2ceef4f8f0a/coverage-7.6.12-py3-none-any.whl", hash = "sha256:eb8668cfbc279a536c633137deeb9435d2962caec279c3f8cf8b91fff6ff8953", size = 200552 },
]
[[package]]
name = "cryptography"
version = "44.0.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "cffi", marker = "platform_python_implementation != 'PyPy'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/53/d6/1411ab4d6108ab167d06254c5be517681f1e331f90edf1379895bcb87020/cryptography-44.0.3.tar.gz", hash = "sha256:fe19d8bc5536a91a24a8133328880a41831b6c5df54599a8417b62fe015d3053", size = 711096 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/08/53/c776d80e9d26441bb3868457909b4e74dd9ccabd182e10b2b0ae7a07e265/cryptography-44.0.3-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:962bc30480a08d133e631e8dfd4783ab71cc9e33d5d7c1e192f0b7c06397bb88", size = 6670281 },
{ url = "https://files.pythonhosted.org/packages/6a/06/af2cf8d56ef87c77319e9086601bef621bedf40f6f59069e1b6d1ec498c5/cryptography-44.0.3-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4ffc61e8f3bf5b60346d89cd3d37231019c17a081208dfbbd6e1605ba03fa137", size = 3959305 },
{ url = "https://files.pythonhosted.org/packages/ae/01/80de3bec64627207d030f47bf3536889efee8913cd363e78ca9a09b13c8e/cryptography-44.0.3-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:58968d331425a6f9eedcee087f77fd3c927c88f55368f43ff7e0a19891f2642c", size = 4171040 },
{ url = "https://files.pythonhosted.org/packages/bd/48/bb16b7541d207a19d9ae8b541c70037a05e473ddc72ccb1386524d4f023c/cryptography-44.0.3-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:e28d62e59a4dbd1d22e747f57d4f00c459af22181f0b2f787ea83f5a876d7c76", size = 3963411 },
{ url = "https://files.pythonhosted.org/packages/42/b2/7d31f2af5591d217d71d37d044ef5412945a8a8e98d5a2a8ae4fd9cd4489/cryptography-44.0.3-cp37-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:af653022a0c25ef2e3ffb2c673a50e5a0d02fecc41608f4954176f1933b12359", size = 3689263 },
{ url = "https://files.pythonhosted.org/packages/25/50/c0dfb9d87ae88ccc01aad8eb93e23cfbcea6a6a106a9b63a7b14c1f93c75/cryptography-44.0.3-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:157f1f3b8d941c2bd8f3ffee0af9b049c9665c39d3da9db2dc338feca5e98a43", size = 4196198 },
{ url = "https://files.pythonhosted.org/packages/66/c9/55c6b8794a74da652690c898cb43906310a3e4e4f6ee0b5f8b3b3e70c441/cryptography-44.0.3-cp37-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:c6cd67722619e4d55fdb42ead64ed8843d64638e9c07f4011163e46bc512cf01", size = 3966502 },
{ url = "https://files.pythonhosted.org/packages/b6/f7/7cb5488c682ca59a02a32ec5f975074084db4c983f849d47b7b67cc8697a/cryptography-44.0.3-cp37-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:b424563394c369a804ecbee9b06dfb34997f19d00b3518e39f83a5642618397d", size = 4196173 },
{ url = "https://files.pythonhosted.org/packages/d2/0b/2f789a8403ae089b0b121f8f54f4a3e5228df756e2146efdf4a09a3d5083/cryptography-44.0.3-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:c91fc8e8fd78af553f98bc7f2a1d8db977334e4eea302a4bfd75b9461c2d8904", size = 4087713 },
{ url = "https://files.pythonhosted.org/packages/1d/aa/330c13655f1af398fc154089295cf259252f0ba5df93b4bc9d9c7d7f843e/cryptography-44.0.3-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:25cd194c39fa5a0aa4169125ee27d1172097857b27109a45fadc59653ec06f44", size = 4299064 },
{ url = "https://files.pythonhosted.org/packages/10/a8/8c540a421b44fd267a7d58a1fd5f072a552d72204a3f08194f98889de76d/cryptography-44.0.3-cp37-abi3-win32.whl", hash = "sha256:3be3f649d91cb182c3a6bd336de8b61a0a71965bd13d1a04a0e15b39c3d5809d", size = 2773887 },
{ url = "https://files.pythonhosted.org/packages/b9/0d/c4b1657c39ead18d76bbd122da86bd95bdc4095413460d09544000a17d56/cryptography-44.0.3-cp37-abi3-win_amd64.whl", hash = "sha256:3883076d5c4cc56dbef0b898a74eb6992fdac29a7b9013870b34efe4ddb39a0d", size = 3209737 },
{ url = "https://files.pythonhosted.org/packages/34/a3/ad08e0bcc34ad436013458d7528e83ac29910943cea42ad7dd4141a27bbb/cryptography-44.0.3-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:5639c2b16764c6f76eedf722dbad9a0914960d3489c0cc38694ddf9464f1bb2f", size = 6673501 },
{ url = "https://files.pythonhosted.org/packages/b1/f0/7491d44bba8d28b464a5bc8cc709f25a51e3eac54c0a4444cf2473a57c37/cryptography-44.0.3-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3ffef566ac88f75967d7abd852ed5f182da252d23fac11b4766da3957766759", size = 3960307 },
{ url = "https://files.pythonhosted.org/packages/f7/c8/e5c5d0e1364d3346a5747cdcd7ecbb23ca87e6dea4f942a44e88be349f06/cryptography-44.0.3-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:192ed30fac1728f7587c6f4613c29c584abdc565d7417c13904708db10206645", size = 4170876 },
{ url = "https://files.pythonhosted.org/packages/73/96/025cb26fc351d8c7d3a1c44e20cf9a01e9f7cf740353c9c7a17072e4b264/cryptography-44.0.3-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:7d5fe7195c27c32a64955740b949070f21cba664604291c298518d2e255931d2", size = 3964127 },
{ url = "https://files.pythonhosted.org/packages/01/44/eb6522db7d9f84e8833ba3bf63313f8e257729cf3a8917379473fcfd6601/cryptography-44.0.3-cp39-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:3f07943aa4d7dad689e3bb1638ddc4944cc5e0921e3c227486daae0e31a05e54", size = 3689164 },
{ url = "https://files.pythonhosted.org/packages/68/fb/d61a4defd0d6cee20b1b8a1ea8f5e25007e26aeb413ca53835f0cae2bcd1/cryptography-44.0.3-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:cb90f60e03d563ca2445099edf605c16ed1d5b15182d21831f58460c48bffb93", size = 4198081 },
{ url = "https://files.pythonhosted.org/packages/1b/50/457f6911d36432a8811c3ab8bd5a6090e8d18ce655c22820994913dd06ea/cryptography-44.0.3-cp39-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:ab0b005721cc0039e885ac3503825661bd9810b15d4f374e473f8c89b7d5460c", size = 3967716 },
{ url = "https://files.pythonhosted.org/packages/35/6e/dca39d553075980ccb631955c47b93d87d27f3596da8d48b1ae81463d915/cryptography-44.0.3-cp39-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:3bb0847e6363c037df8f6ede57d88eaf3410ca2267fb12275370a76f85786a6f", size = 4197398 },
{ url = "https://files.pythonhosted.org/packages/9b/9d/d1f2fe681eabc682067c66a74addd46c887ebacf39038ba01f8860338d3d/cryptography-44.0.3-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:b0cc66c74c797e1db750aaa842ad5b8b78e14805a9b5d1348dc603612d3e3ff5", size = 4087900 },
{ url = "https://files.pythonhosted.org/packages/c4/f5/3599e48c5464580b73b236aafb20973b953cd2e7b44c7c2533de1d888446/cryptography-44.0.3-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:6866df152b581f9429020320e5eb9794c8780e90f7ccb021940d7f50ee00ae0b", size = 4301067 },
{ url = "https://files.pythonhosted.org/packages/a7/6c/d2c48c8137eb39d0c193274db5c04a75dab20d2f7c3f81a7dcc3a8897701/cryptography-44.0.3-cp39-abi3-win32.whl", hash = "sha256:c138abae3a12a94c75c10499f1cbae81294a6f983b3af066390adee73f433028", size = 2775467 },
{ url = "https://files.pythonhosted.org/packages/c9/ad/51f212198681ea7b0deaaf8846ee10af99fba4e894f67b353524eab2bbe5/cryptography-44.0.3-cp39-abi3-win_amd64.whl", hash = "sha256:5d186f32e52e66994dce4f766884bcb9c68b8da62d61d9d215bfe5fb56d21334", size = 3210375 },
]
[[package]]
name = "dataclasses-json"
version = "0.6.7"
@@ -342,6 +398,7 @@ dependencies = [
[package.optional-dependencies]
dev = [
{ name = "black" },
{ name = "langgraph-cli", extra = ["inmem"] },
]
test = [
{ name = "pytest" },
@@ -363,6 +420,7 @@ requires-dist = [
{ name = "langchain-mcp-adapters", specifier = ">=0.0.9" },
{ name = "langchain-openai", specifier = ">=0.3.8" },
{ name = "langgraph", specifier = ">=0.3.5" },
{ name = "langgraph-cli", extras = ["inmem"], marker = "extra == 'dev'", specifier = ">=0.2.10" },
{ name = "litellm", specifier = ">=1.63.11" },
{ name = "markdownify", specifier = ">=1.1.0" },
{ name = "mcp", specifier = ">=1.6.0" },
@@ -437,6 +495,12 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/4d/36/2a115987e2d8c300a974597416d9de88f2444426de9571f4b59b2cca3acc/filelock-3.18.0-py3-none-any.whl", hash = "sha256:c401f4f8377c4464e6db25fff06205fd89bdd83b65eb0488ed1b160f780e21de", size = 16215 },
]
[[package]]
name = "forbiddenfruit"
version = "0.1.4"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/e6/79/d4f20e91327c98096d605646bdc6a5ffedae820f38d378d3515c42ec5e60/forbiddenfruit-0.1.4.tar.gz", hash = "sha256:e3f7e66561a29ae129aac139a85d610dbf3dd896128187ed5454b6421f624253", size = 43756 }
[[package]]
name = "frozendict"
version = "2.4.6"
@@ -741,6 +805,28 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/69/4a/4f9dbeb84e8850557c02365a0eee0649abe5eb1d84af92a25731c6c0f922/jsonschema-4.23.0-py3-none-any.whl", hash = "sha256:fbadb6f8b144a8f8cf9f0b89ba94501d143e50411a1278633f56a7acf7fd5566", size = 88462 },
]
[[package]]
name = "jsonschema-rs"
version = "0.29.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b0/b4/33a9b25cad41d1e533c1ab7ff30eaec50628dd1bcb92171b99a2e944d61f/jsonschema_rs-0.29.1.tar.gz", hash = "sha256:a9f896a9e4517630374f175364705836c22f09d5bd5bbb06ec0611332b6702fd", size = 1406679 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/7b/4a/67ea15558ab85e67d1438b2e5da63b8e89b273c457106cbc87f8f4959a3d/jsonschema_rs-0.29.1-cp312-cp312-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:9fe7529faa6a84d23e31b1f45853631e4d4d991c85f3d50e6d1df857bb52b72d", size = 3825206 },
{ url = "https://files.pythonhosted.org/packages/b9/2e/bc75ed65d11ba47200ade9795ebd88eb2e64c2852a36d9be640172563430/jsonschema_rs-0.29.1-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:b5d7e385298f250ed5ce4928fd59fabf2b238f8167f2c73b9414af8143dfd12e", size = 1966302 },
{ url = "https://files.pythonhosted.org/packages/95/dd/4a90e96811f897de066c69d95bc0983138056b19cb169f2a99c736e21933/jsonschema_rs-0.29.1-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:64a29be0504731a2e3164f66f609b9999aa66a2df3179ecbfc8ead88e0524388", size = 2062846 },
{ url = "https://files.pythonhosted.org/packages/21/91/61834396748a741021716751a786312b8a8319715e6c61421447a07c887c/jsonschema_rs-0.29.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7e91defda5dfa87306543ee9b34d97553d9422c134998c0b64855b381f8b531d", size = 2065564 },
{ url = "https://files.pythonhosted.org/packages/f0/2c/920d92e88b9bdb6cb14867a55e5572e7b78bfc8554f9c625caa516aa13dd/jsonschema_rs-0.29.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:96f87680a6a1c16000c851d3578534ae3c154da894026c2a09a50f727bd623d4", size = 2083055 },
{ url = "https://files.pythonhosted.org/packages/6d/0a/f4c1bea3193992fe4ff9ce330c6a594481caece06b1b67d30b15992bbf54/jsonschema_rs-0.29.1-cp312-cp312-win32.whl", hash = "sha256:bcfc0d52ecca6c1b2fbeede65c1ad1545de633045d42ad0c6699039f28b5fb71", size = 1701065 },
{ url = "https://files.pythonhosted.org/packages/5e/89/3f89de071920208c0eb64b827a878d2e587f6a3431b58c02f63c3468b76e/jsonschema_rs-0.29.1-cp312-cp312-win_amd64.whl", hash = "sha256:a414c162d687ee19171e2d8aae821f396d2f84a966fd5c5c757bd47df0954452", size = 1871774 },
{ url = "https://files.pythonhosted.org/packages/1b/9b/d642024e8b39753b789598363fd5998eb3053b52755a5df6a021d53741d5/jsonschema_rs-0.29.1-cp313-cp313-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:0afee5f31a940dec350a33549ec03f2d1eda2da3049a15cd951a266a57ef97ee", size = 3824864 },
{ url = "https://files.pythonhosted.org/packages/aa/3d/48a7baa2373b941e89a12e720dae123fd0a663c28c4e82213a29c89a4715/jsonschema_rs-0.29.1-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:c38453a5718bcf2ad1b0163d128814c12829c45f958f9407c69009d8b94a1232", size = 1966084 },
{ url = "https://files.pythonhosted.org/packages/1e/e4/f260917a17bb28bb1dec6fa5e869223341fac2c92053aa9bd23c1caaefa0/jsonschema_rs-0.29.1-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:5dc8bdb1067bf4f6d2f80001a636202dc2cea027b8579f1658ce8e736b06557f", size = 2062430 },
{ url = "https://files.pythonhosted.org/packages/f5/e7/61353403b76768601d802afa5b7b5902d52c33d1dd0f3159aafa47463634/jsonschema_rs-0.29.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4bcfe23992623a540169d0845ea8678209aa2fe7179941dc7c512efc0c2b6b46", size = 2065443 },
{ url = "https://files.pythonhosted.org/packages/40/ed/40b971a09f46a22aa956071ea159413046e9d5fcd280a5910da058acdeb2/jsonschema_rs-0.29.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0f2a526c0deacd588864d3400a0997421dffef6fe1df5cfda4513a453c01ad42", size = 2082606 },
{ url = "https://files.pythonhosted.org/packages/bc/59/1c142e1bfb87d57c18fb189149f7aa8edf751725d238d787015278b07600/jsonschema_rs-0.29.1-cp313-cp313-win32.whl", hash = "sha256:68acaefb54f921243552d15cfee3734d222125584243ca438de4444c5654a8a3", size = 1700666 },
{ url = "https://files.pythonhosted.org/packages/13/e8/f0ad941286cd350b879dd2b3c848deecd27f0b3fbc0ff44f2809ad59718d/jsonschema_rs-0.29.1-cp313-cp313-win_amd64.whl", hash = "sha256:1c4e5a61ac760a2fc3856a129cc84aa6f8fba7b9bc07b19fe4101050a8ecc33c", size = 1871619 },
]
[[package]]
name = "jsonschema-specifications"
version = "2024.10.1"
@@ -866,30 +952,81 @@ wheels = [
[[package]]
name = "langgraph"
version = "0.3.5"
version = "0.4.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "langchain-core" },
{ name = "langchain-core", marker = "python_full_version < '4.0'" },
{ name = "langgraph-checkpoint" },
{ name = "langgraph-prebuilt" },
{ name = "langgraph-sdk" },
{ name = "langgraph-prebuilt", marker = "python_full_version < '4.0'" },
{ name = "langgraph-sdk", marker = "python_full_version < '4.0'" },
{ name = "pydantic" },
{ name = "xxhash" },
]
sdist = { url = "https://files.pythonhosted.org/packages/4e/fa/b1ecc95a2464bc7dbe5e67fbd21096013829119899c33236090b98c75508/langgraph-0.3.5.tar.gz", hash = "sha256:7c0d8e61aa02578b41036c9f7a599ccba2562d269f66ef76bacbba47a99a7eca", size = 114020 }
sdist = { url = "https://files.pythonhosted.org/packages/60/9e/5a64602eff18a99d0216a80eff823051ffbdb7c11b5a16171cee8b1ccce5/langgraph-0.4.3.tar.gz", hash = "sha256:272d5d5903f2c2882dbeeba849846a0f2500bd83fb3734a3801ebe64c1a60bdd", size = 125407 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a4/5f/1e1d9173b5c41eff54f88d9f4ee82c38eb4928120ab6a21a68a78d1c499e/langgraph-0.3.5-py3-none-any.whl", hash = "sha256:be313ec300633c857873ea3e44aece4dd7d0b11f131d385108b359d377a85bf7", size = 131527 },
{ url = "https://files.pythonhosted.org/packages/35/53/0a20edd9f41eb3707722444ec1b43752b792bbe904d1c8cc3ba27f8eb2c8/langgraph-0.4.3-py3-none-any.whl", hash = "sha256:dec926e034f4d440b92a3c52139cb6e9763bc1791e79a6ea53a233309cec864f", size = 151191 },
]
[[package]]
name = "langgraph-api"
version = "0.2.27"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "cloudpickle" },
{ name = "cryptography" },
{ name = "httpx" },
{ name = "jsonschema-rs" },
{ name = "langchain-core", marker = "python_full_version < '4.0'" },
{ name = "langgraph", marker = "python_full_version < '4.0'" },
{ name = "langgraph-checkpoint", marker = "python_full_version < '4.0'" },
{ name = "langgraph-runtime-inmem" },
{ name = "langgraph-sdk", marker = "python_full_version < '4.0'" },
{ name = "langsmith" },
{ name = "orjson" },
{ name = "pyjwt" },
{ name = "sse-starlette" },
{ name = "starlette" },
{ name = "structlog" },
{ name = "tenacity" },
{ name = "truststore" },
{ name = "uvicorn" },
{ name = "watchfiles" },
]
sdist = { url = "https://files.pythonhosted.org/packages/6c/39/796960b1c6d6196f3119081e6072d5a53797003c9695d576550c5590e346/langgraph_api-0.2.27.tar.gz", hash = "sha256:d53c77456de3888164fde8f1703b050c245aebdab3ba42b1868d4bfe319343f5", size = 172523 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/8e/0a/d224b694ae1033b90067096cf19ff0628f55053f267cea2c3224cd1e5417/langgraph_api-0.2.27-py3-none-any.whl", hash = "sha256:f2f6ec669e22f2ab6ebaa971573c9b3bdade8d83a968cfa3493de85b154b418b", size = 208097 },
]
[[package]]
name = "langgraph-checkpoint"
version = "2.0.18"
version = "2.0.25"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "langchain-core" },
{ name = "msgpack" },
{ name = "ormsgpack" },
]
sdist = { url = "https://files.pythonhosted.org/packages/76/1d/27a178de8a40c0cd53671f6a7e9aa21967a17672fdc774e5c0ae6cc406a4/langgraph_checkpoint-2.0.18.tar.gz", hash = "sha256:2822eedd028b454b7bfebfb7e04347aed1b64db97dedb7eb68ef0fb42641606d", size = 34947 }
sdist = { url = "https://files.pythonhosted.org/packages/c5/72/d49828e6929cb3ded1472aa3e5e4a369d292c4f21021ac683d28fbc8f4f8/langgraph_checkpoint-2.0.25.tar.gz", hash = "sha256:77a63cab7b5f84dec1d49db561326ec28bdd48bcefb7fe4ac372069d2609287b", size = 36952 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/21/11/91062b03b22b9ce6474df7c3e056417a4c2b029f9cc71829dd6f62479dd0/langgraph_checkpoint-2.0.18-py3-none-any.whl", hash = "sha256:941de442e5a893a6cabb8c3845f03159301b85f63ff4e8f2b308f7dfd96a3f59", size = 39106 },
{ url = "https://files.pythonhosted.org/packages/12/52/bceb5b5348c7a60ef0625ab0a0a0a9ff5d78f0e12aed8cc55c49d5e8a8c9/langgraph_checkpoint-2.0.25-py3-none-any.whl", hash = "sha256:23416a0f5bc9dd712ac10918fc13e8c9c4530c419d2985a441df71a38fc81602", size = 42312 },
]
[[package]]
name = "langgraph-cli"
version = "0.2.10"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "click" },
]
sdist = { url = "https://files.pythonhosted.org/packages/8d/5e/b12bc8140cd4f797ad7f596bf90558994fd6891df8974bc3fc4747eabdc7/langgraph_cli-0.2.10.tar.gz", hash = "sha256:0c215b364daeaf10de681e4960ecaafc7c9cd2a4100b41052d78d95cababf422", size = 31690 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e1/06/7151d7c8d6c2bccc0919ddb35a63caf3707b96c94561f47f14b08d73ef5e/langgraph_cli-0.2.10-py3-none-any.whl", hash = "sha256:4aaa8d828d8d3bf0f55d2b2a36b2d9944021d65a4b06ed708c6d5eea725f65a7", size = 34833 },
]
[package.optional-dependencies]
inmem = [
{ name = "langgraph-api", marker = "python_full_version < '4.0'" },
{ name = "langgraph-runtime-inmem", marker = "python_full_version < '4.0'" },
{ name = "python-dotenv" },
]
[[package]]
@@ -905,17 +1042,34 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/36/72/9e092665502f8f52f2708065ed14fbbba3f95d1a1b65d62049b0c5fcdf00/langgraph_prebuilt-0.1.8-py3-none-any.whl", hash = "sha256:ae97b828ae00be2cefec503423aa782e1bff165e9b94592e224da132f2526968", size = 25903 },
]
[[package]]
name = "langgraph-runtime-inmem"
version = "0.0.11"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "blockbuster" },
{ name = "langgraph", marker = "python_full_version < '4.0'" },
{ name = "langgraph-checkpoint", marker = "python_full_version < '4.0'" },
{ name = "sse-starlette" },
{ name = "starlette" },
{ name = "structlog" },
]
sdist = { url = "https://files.pythonhosted.org/packages/95/6c/f74a7c5a0a4c8998cdce064b6de0692f7d87f4b1776de9854107ee4f89c6/langgraph_runtime_inmem-0.0.11.tar.gz", hash = "sha256:2e4e1802e4721694d46c189e7f1c6e1116ad9366150c9c735a928a834c2b5b30", size = 23633 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e9/a8/1c9250f1b45cc0ef66ca964cf3a78ec271a1f94e20b26adf3887c05b128f/langgraph_runtime_inmem-0.0.11-py3-none-any.whl", hash = "sha256:b0eaf3ea94d13040d75c956a0a54441de2428066aeffebf57241fb954ed2f1bd", size = 27844 },
]
[[package]]
name = "langgraph-sdk"
version = "0.1.55"
version = "0.1.69"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "httpx" },
{ name = "orjson" },
]
sdist = { url = "https://files.pythonhosted.org/packages/7a/6c/8286151a21124dc0189b57495541c2e3cace317056f60feb04076b438f82/langgraph_sdk-0.1.55.tar.gz", hash = "sha256:89a0240157a27822cc4edd1c9e72bc852e20f5c71165a4c9b91eeffa11fd6a6b", size = 42690 }
sdist = { url = "https://files.pythonhosted.org/packages/06/78/4ca0603240332be5fc8ebbb9bc418896310643bef32e3319a311fab37e4c/langgraph_sdk-0.1.69.tar.gz", hash = "sha256:2e85d73b78a03f9606d0fafd62048b3060371149f6f9e61f07f087fd56c766fa", size = 45343 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/4e/64/4b75f4b57f0c8f39bdb43aa74b1d2edcdb604b5baa58465ccc54b8b906c5/langgraph_sdk-0.1.55-py3-none-any.whl", hash = "sha256:266e92a558eb738da1ef04c29fbfc2157cd3a977b80905d9509a2cb79331f8fc", size = 45785 },
{ url = "https://files.pythonhosted.org/packages/b0/e6/8e82a0373e233392d83ae37f473c9799c536b307322f0caf49a59bce9522/langgraph_sdk-0.1.69-py3-none-any.whl", hash = "sha256:0ed117bcdf67285a17c57f6265f1d94f2dbd71346cf48a8e1a5fa25e523eb6b8", size = 48905 },
]
[[package]]
@@ -1082,36 +1236,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/10/30/20a7f33b0b884a9d14dd3aa94ff1ac9da1479fe2ad66dd9e2736075d2506/mcp-1.6.0-py3-none-any.whl", hash = "sha256:7bd24c6ea042dbec44c754f100984d186620d8b841ec30f1b19eda9b93a634d0", size = 76077 },
]
[[package]]
name = "msgpack"
version = "1.1.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/cb/d0/7555686ae7ff5731205df1012ede15dd9d927f6227ea151e901c7406af4f/msgpack-1.1.0.tar.gz", hash = "sha256:dd432ccc2c72b914e4cb77afce64aab761c1137cc698be3984eee260bcb2896e", size = 167260 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e1/d6/716b7ca1dbde63290d2973d22bbef1b5032ca634c3ff4384a958ec3f093a/msgpack-1.1.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:d46cf9e3705ea9485687aa4001a76e44748b609d260af21c4ceea7f2212a501d", size = 152421 },
{ url = "https://files.pythonhosted.org/packages/70/da/5312b067f6773429cec2f8f08b021c06af416bba340c912c2ec778539ed6/msgpack-1.1.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:5dbad74103df937e1325cc4bfeaf57713be0b4f15e1c2da43ccdd836393e2ea2", size = 85277 },
{ url = "https://files.pythonhosted.org/packages/28/51/da7f3ae4462e8bb98af0d5bdf2707f1b8c65a0d4f496e46b6afb06cbc286/msgpack-1.1.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:58dfc47f8b102da61e8949708b3eafc3504509a5728f8b4ddef84bd9e16ad420", size = 82222 },
{ url = "https://files.pythonhosted.org/packages/33/af/dc95c4b2a49cff17ce47611ca9ba218198806cad7796c0b01d1e332c86bb/msgpack-1.1.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4676e5be1b472909b2ee6356ff425ebedf5142427842aa06b4dfd5117d1ca8a2", size = 392971 },
{ url = "https://files.pythonhosted.org/packages/f1/54/65af8de681fa8255402c80eda2a501ba467921d5a7a028c9c22a2c2eedb5/msgpack-1.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:17fb65dd0bec285907f68b15734a993ad3fc94332b5bb21b0435846228de1f39", size = 401403 },
{ url = "https://files.pythonhosted.org/packages/97/8c/e333690777bd33919ab7024269dc3c41c76ef5137b211d776fbb404bfead/msgpack-1.1.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a51abd48c6d8ac89e0cfd4fe177c61481aca2d5e7ba42044fd218cfd8ea9899f", size = 385356 },
{ url = "https://files.pythonhosted.org/packages/57/52/406795ba478dc1c890559dd4e89280fa86506608a28ccf3a72fbf45df9f5/msgpack-1.1.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:2137773500afa5494a61b1208619e3871f75f27b03bcfca7b3a7023284140247", size = 383028 },
{ url = "https://files.pythonhosted.org/packages/e7/69/053b6549bf90a3acadcd8232eae03e2fefc87f066a5b9fbb37e2e608859f/msgpack-1.1.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:398b713459fea610861c8a7b62a6fec1882759f308ae0795b5413ff6a160cf3c", size = 391100 },
{ url = "https://files.pythonhosted.org/packages/23/f0/d4101d4da054f04274995ddc4086c2715d9b93111eb9ed49686c0f7ccc8a/msgpack-1.1.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:06f5fd2f6bb2a7914922d935d3b8bb4a7fff3a9a91cfce6d06c13bc42bec975b", size = 394254 },
{ url = "https://files.pythonhosted.org/packages/1c/12/cf07458f35d0d775ff3a2dc5559fa2e1fcd06c46f1ef510e594ebefdca01/msgpack-1.1.0-cp312-cp312-win32.whl", hash = "sha256:ad33e8400e4ec17ba782f7b9cf868977d867ed784a1f5f2ab46e7ba53b6e1e1b", size = 69085 },
{ url = "https://files.pythonhosted.org/packages/73/80/2708a4641f7d553a63bc934a3eb7214806b5b39d200133ca7f7afb0a53e8/msgpack-1.1.0-cp312-cp312-win_amd64.whl", hash = "sha256:115a7af8ee9e8cddc10f87636767857e7e3717b7a2e97379dc2054712693e90f", size = 75347 },
{ url = "https://files.pythonhosted.org/packages/c8/b0/380f5f639543a4ac413e969109978feb1f3c66e931068f91ab6ab0f8be00/msgpack-1.1.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:071603e2f0771c45ad9bc65719291c568d4edf120b44eb36324dcb02a13bfddf", size = 151142 },
{ url = "https://files.pythonhosted.org/packages/c8/ee/be57e9702400a6cb2606883d55b05784fada898dfc7fd12608ab1fdb054e/msgpack-1.1.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:0f92a83b84e7c0749e3f12821949d79485971f087604178026085f60ce109330", size = 84523 },
{ url = "https://files.pythonhosted.org/packages/7e/3a/2919f63acca3c119565449681ad08a2f84b2171ddfcff1dba6959db2cceb/msgpack-1.1.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:4a1964df7b81285d00a84da4e70cb1383f2e665e0f1f2a7027e683956d04b734", size = 81556 },
{ url = "https://files.pythonhosted.org/packages/7c/43/a11113d9e5c1498c145a8925768ea2d5fce7cbab15c99cda655aa09947ed/msgpack-1.1.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:59caf6a4ed0d164055ccff8fe31eddc0ebc07cf7326a2aaa0dbf7a4001cd823e", size = 392105 },
{ url = "https://files.pythonhosted.org/packages/2d/7b/2c1d74ca6c94f70a1add74a8393a0138172207dc5de6fc6269483519d048/msgpack-1.1.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0907e1a7119b337971a689153665764adc34e89175f9a34793307d9def08e6ca", size = 399979 },
{ url = "https://files.pythonhosted.org/packages/82/8c/cf64ae518c7b8efc763ca1f1348a96f0e37150061e777a8ea5430b413a74/msgpack-1.1.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:65553c9b6da8166e819a6aa90ad15288599b340f91d18f60b2061f402b9a4915", size = 383816 },
{ url = "https://files.pythonhosted.org/packages/69/86/a847ef7a0f5ef3fa94ae20f52a4cacf596a4e4a010197fbcc27744eb9a83/msgpack-1.1.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:7a946a8992941fea80ed4beae6bff74ffd7ee129a90b4dd5cf9c476a30e9708d", size = 380973 },
{ url = "https://files.pythonhosted.org/packages/aa/90/c74cf6e1126faa93185d3b830ee97246ecc4fe12cf9d2d31318ee4246994/msgpack-1.1.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:4b51405e36e075193bc051315dbf29168d6141ae2500ba8cd80a522964e31434", size = 387435 },
{ url = "https://files.pythonhosted.org/packages/7a/40/631c238f1f338eb09f4acb0f34ab5862c4e9d7eda11c1b685471a4c5ea37/msgpack-1.1.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:b4c01941fd2ff87c2a934ee6055bda4ed353a7846b8d4f341c428109e9fcde8c", size = 399082 },
{ url = "https://files.pythonhosted.org/packages/e9/1b/fa8a952be252a1555ed39f97c06778e3aeb9123aa4cccc0fd2acd0b4e315/msgpack-1.1.0-cp313-cp313-win32.whl", hash = "sha256:7c9a35ce2c2573bada929e0b7b3576de647b0defbd25f5139dcdaba0ae35a4cc", size = 69037 },
{ url = "https://files.pythonhosted.org/packages/b6/bc/8bd826dd03e022153bfa1766dcdec4976d6c818865ed54223d71f07862b3/msgpack-1.1.0-cp313-cp313-win_amd64.whl", hash = "sha256:bce7d9e614a04d0883af0b3d4d501171fbfca038f12c77fa838d9f198147a23f", size = 75140 },
]
[[package]]
name = "multidict"
version = "6.1.0"
@@ -1260,6 +1384,30 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/27/f1/1d7ec15b20f8ce9300bc850de1e059132b88990e46cd0ccac29cbf11e4f9/orjson-3.10.15-cp313-cp313-win_amd64.whl", hash = "sha256:fd56a26a04f6ba5fb2045b0acc487a63162a958ed837648c5781e1fe3316cfbf", size = 133444 },
]
[[package]]
name = "ormsgpack"
version = "1.9.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/25/a7/462cf8ff5e29241868b82d3a5ec124d690eb6a6a5c6fa5bb1367b839e027/ormsgpack-1.9.1.tar.gz", hash = "sha256:3da6e63d82565e590b98178545e64f0f8506137b92bd31a2d04fd7c82baf5794", size = 56887 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/dd/f1/155a598cc8030526ccaaf91ba4d61530f87900645559487edba58b0a90a2/ormsgpack-1.9.1-cp312-cp312-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:1ede445fc3fdba219bb0e0d1f289df26a9c7602016b7daac6fafe8fe4e91548f", size = 383225 },
{ url = "https://files.pythonhosted.org/packages/23/1c/ef3097ba550fad55c79525f461febdd4e0d9cc18d065248044536f09488e/ormsgpack-1.9.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:db50b9f918e25b289114312ed775794d0978b469831b992bdc65bfe20b91fe30", size = 214056 },
{ url = "https://files.pythonhosted.org/packages/27/77/64d0da25896b2cbb99505ca518c109d7dd1964d7fde14c10943731738b60/ormsgpack-1.9.1-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:8c7d8fc58e4333308f58ec720b1ee6b12b2b3fe2d2d8f0766ab751cb351e8757", size = 217339 },
{ url = "https://files.pythonhosted.org/packages/6c/10/c3a7fd0a0068b0bb52cccbfeb5656db895d69e895a3abbc210c4b3f98ff8/ormsgpack-1.9.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aeee6d08c040db265cb8563444aba343ecb32cbdbe2414a489dcead9f70c6765", size = 223816 },
{ url = "https://files.pythonhosted.org/packages/43/e7/aee1238dba652f2116c2523d36fd1c5f9775436032be5c233108fd2a1415/ormsgpack-1.9.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:2fbb8181c198bdc413a4e889e5200f010724eea4b6d5a9a7eee2df039ac04aca", size = 394287 },
{ url = "https://files.pythonhosted.org/packages/c7/09/1b452a92376f29d7a2da7c18fb01cf09978197a8eccbb8b204e72fd5a970/ormsgpack-1.9.1-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:16488f094ac0e2250cceea6caf72962614aa432ee11dd57ef45e1ad25ece3eff", size = 480709 },
{ url = "https://files.pythonhosted.org/packages/de/13/7fa9fee5a73af8a73a42bf8c2e69489605714f65f5a41454400a05e84a3b/ormsgpack-1.9.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:422d960bfd6ad88be20794f50ec7953d8f7a0f2df60e19d0e8feb994e2ed64ee", size = 397247 },
{ url = "https://files.pythonhosted.org/packages/a1/2d/2e87cb28110db0d3bb750edd4d8719b5068852a2eef5e96b0bf376bb8a81/ormsgpack-1.9.1-cp312-cp312-win_amd64.whl", hash = "sha256:e6e2f9eab527cf43fb4a4293e493370276b1c8716cf305689202d646c6a782ef", size = 125368 },
{ url = "https://files.pythonhosted.org/packages/b8/54/0390d5d092831e4df29dbafe32402891fc14b3e6ffe5a644b16cbbc9d9bc/ormsgpack-1.9.1-cp313-cp313-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:ac61c18d9dd085e8519b949f7e655f7fb07909fd09c53b4338dd33309012e289", size = 383226 },
{ url = "https://files.pythonhosted.org/packages/47/64/8b15d262d1caefead8fb22ec144f5ff7d9505fc31c22bc34598053d46fbe/ormsgpack-1.9.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:134840b8c6615da2c24ce77bd12a46098015c808197a9995c7a2d991e1904eec", size = 214057 },
{ url = "https://files.pythonhosted.org/packages/57/00/65823609266bad4d5ed29ea753d24a3bdb01c7edaf923da80967fc31f9c5/ormsgpack-1.9.1-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:38fd42618f626394b2c7713c5d4bcbc917254e9753d5d4cde460658b51b11a74", size = 217340 },
{ url = "https://files.pythonhosted.org/packages/a0/51/e535c50f7f87b49110233647f55300d7975139ef5e51f1adb4c55f58c124/ormsgpack-1.9.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9d36397333ad07b9eba4c2e271fa78951bd81afc059c85a6e9f6c0eb2de07cda", size = 223815 },
{ url = "https://files.pythonhosted.org/packages/0c/ee/393e4a6de2a62124bf589602648f295a9fb3907a0e2fe80061b88899d072/ormsgpack-1.9.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:603063089597917d04e4c1b1d53988a34f7dc2ff1a03adcfd1cf4ae966d5fba6", size = 394287 },
{ url = "https://files.pythonhosted.org/packages/c6/d8/e56d7c3cb73a0e533e3e2a21ae5838b2aa36a9dac1ca9c861af6bae5a369/ormsgpack-1.9.1-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:94bbf2b185e0cb721ceaba20e64b7158e6caf0cecd140ca29b9f05a8d5e91e2f", size = 480707 },
{ url = "https://files.pythonhosted.org/packages/e6/e0/6a3c6a6dc98583a721c54b02f5195bde8f801aebdeda9b601fa2ab30ad39/ormsgpack-1.9.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:c38f380b1e8c96a712eb302b9349347385161a8e29046868ae2bfdfcb23e2692", size = 397246 },
{ url = "https://files.pythonhosted.org/packages/b0/60/0ee5d790f13507e1f75ac21fc82dc1ef29afe1f520bd0f249d65b2f4839b/ormsgpack-1.9.1-cp313-cp313-win_amd64.whl", hash = "sha256:a4bc63fb30db94075611cedbbc3d261dd17cf2aa8ff75a0fd684cd45ca29cb1b", size = 125371 },
]
[[package]]
name = "packaging"
version = "24.2"
@@ -1505,6 +1653,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/0b/53/a64f03044927dc47aafe029c42a5b7aabc38dfb813475e0e1bf71c4a59d0/pydantic_settings-2.8.1-py3-none-any.whl", hash = "sha256:81942d5ac3d905f7f3ee1a70df5dfb62d5569c12f51a5a647defc1c3d9ee2e9c", size = 30839 },
]
[[package]]
name = "pyjwt"
version = "2.10.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/e7/46/bd74733ff231675599650d3e47f361794b22ef3e3770998dda30d3b63726/pyjwt-2.10.1.tar.gz", hash = "sha256:3cc5772eb20009233caf06e9d8a0577824723b44e6648ee0a2aedb6cf9381953", size = 87785 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/61/ad/689f02752eeec26aed679477e80e632ef1b682313be70793d798c1d5fc8f/PyJWT-2.10.1-py3-none-any.whl", hash = "sha256:dcdd193e30abefd5debf142f9adfcdd2b58004e644f25406ffaebd50bd98dacb", size = 22997 },
]
[[package]]
name = "pytest"
version = "8.3.5"
@@ -1803,15 +1960,16 @@ wheels = [
[[package]]
name = "sse-starlette"
version = "2.2.1"
version = "2.1.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
{ name = "starlette" },
{ name = "uvicorn" },
]
sdist = { url = "https://files.pythonhosted.org/packages/71/a4/80d2a11af59fe75b48230846989e93979c892d3a20016b42bb44edb9e398/sse_starlette-2.2.1.tar.gz", hash = "sha256:54470d5f19274aeed6b2d473430b08b4b379ea851d953b11d7f1c4a2c118b419", size = 17376 }
sdist = { url = "https://files.pythonhosted.org/packages/72/fc/56ab9f116b2133521f532fce8d03194cf04dcac25f583cf3d839be4c0496/sse_starlette-2.1.3.tar.gz", hash = "sha256:9cd27eb35319e1414e3d2558ee7414487f9529ce3b3cf9b21434fd110e017169", size = 19678 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d9/e0/5b8bd393f27f4a62461c5cf2479c75a2cc2ffa330976f9f00f5f6e4f50eb/sse_starlette-2.2.1-py3-none-any.whl", hash = "sha256:6410a3d3ba0c89e7675d4c273a301d64649c03a5ef1ca101f10b47f895fd0e99", size = 10120 },
{ url = "https://files.pythonhosted.org/packages/52/aa/36b271bc4fa1d2796311ee7c7283a3a1c348bad426d37293609ca4300eef/sse_starlette-2.1.3-py3-none-any.whl", hash = "sha256:8ec846438b4665b9e8c560fcdea6bc8081a3abf7942faa95e5a744999d219772", size = 9383 },
]
[[package]]
@@ -1826,6 +1984,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/a0/4b/528ccf7a982216885a1ff4908e886b8fb5f19862d1962f56a3fce2435a70/starlette-0.46.1-py3-none-any.whl", hash = "sha256:77c74ed9d2720138b25875133f3a2dae6d854af2ec37dceb56aef370c1d8a227", size = 71995 },
]
[[package]]
name = "structlog"
version = "25.3.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/ff/6a/b0b6d440e429d2267076c4819300d9929563b1da959cf1f68afbcd69fe45/structlog-25.3.0.tar.gz", hash = "sha256:8dab497e6f6ca962abad0c283c46744185e0c9ba900db52a423cb6db99f7abeb", size = 1367514 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f5/52/7a2c7a317b254af857464da3d60a0d3730c44f912f8c510c76a738a207fd/structlog-25.3.0-py3-none-any.whl", hash = "sha256:a341f5524004c158498c3127eecded091eb67d3a611e7a3093deca30db06e172", size = 68240 },
]
[[package]]
name = "tenacity"
version = "9.0.0"
@@ -1896,6 +2063,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl", hash = "sha256:26445eca388f82e72884e0d580d5464cd801a3ea01e63e5601bdff9ba6a48de2", size = 78540 },
]
[[package]]
name = "truststore"
version = "0.10.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/0f/a7/b7a43228762966a13598a404f3dfb4803ea29a906f449d8b0e73ed0bcd30/truststore-0.10.1.tar.gz", hash = "sha256:eda021616b59021812e800fa0a071e51b266721bef3ce092db8a699e21c63539", size = 26101 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/bc/df/8ad635bdcfa8214c399e5614f7c2121dced47defb755a85ea1fa702ffb1c/truststore-0.10.1-py3-none-any.whl", hash = "sha256:b64e6025a409a43ebdd2807b0c41c8bff49ea7ae6550b5087ac6df6619352d4c", size = 18496 },
]
[[package]]
name = "typing-extensions"
version = "4.12.2"
@@ -1949,6 +2125,42 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/61/14/33a3a1352cfa71812a3a21e8c9bfb83f60b0011f5e36f2b1399d51928209/uvicorn-0.34.0-py3-none-any.whl", hash = "sha256:023dc038422502fa28a09c7a30bf2b6991512da7dcdb8fd35fe57cfc154126f4", size = 62315 },
]
[[package]]
name = "watchfiles"
version = "1.0.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
]
sdist = { url = "https://files.pythonhosted.org/packages/03/e2/8ed598c42057de7aa5d97c472254af4906ff0a59a66699d426fc9ef795d7/watchfiles-1.0.5.tar.gz", hash = "sha256:b7529b5dcc114679d43827d8c35a07c493ad6f083633d573d81c660abc5979e9", size = 94537 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2a/8c/4f0b9bdb75a1bfbd9c78fad7d8854369283f74fe7cf03eb16be77054536d/watchfiles-1.0.5-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:b5eb568c2aa6018e26da9e6c86f3ec3fd958cee7f0311b35c2630fa4217d17f2", size = 401511 },
{ url = "https://files.pythonhosted.org/packages/dc/4e/7e15825def77f8bd359b6d3f379f0c9dac4eb09dd4ddd58fd7d14127179c/watchfiles-1.0.5-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:0a04059f4923ce4e856b4b4e5e783a70f49d9663d22a4c3b3298165996d1377f", size = 392715 },
{ url = "https://files.pythonhosted.org/packages/58/65/b72fb817518728e08de5840d5d38571466c1b4a3f724d190cec909ee6f3f/watchfiles-1.0.5-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3e380c89983ce6e6fe2dd1e1921b9952fb4e6da882931abd1824c092ed495dec", size = 454138 },
{ url = "https://files.pythonhosted.org/packages/3e/a4/86833fd2ea2e50ae28989f5950b5c3f91022d67092bfec08f8300d8b347b/watchfiles-1.0.5-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:fe43139b2c0fdc4a14d4f8d5b5d967f7a2777fd3d38ecf5b1ec669b0d7e43c21", size = 458592 },
{ url = "https://files.pythonhosted.org/packages/38/7e/42cb8df8be9a37e50dd3a818816501cf7a20d635d76d6bd65aae3dbbff68/watchfiles-1.0.5-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ee0822ce1b8a14fe5a066f93edd20aada932acfe348bede8aa2149f1a4489512", size = 487532 },
{ url = "https://files.pythonhosted.org/packages/fc/fd/13d26721c85d7f3df6169d8b495fcac8ab0dc8f0945ebea8845de4681dab/watchfiles-1.0.5-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a0dbcb1c2d8f2ab6e0a81c6699b236932bd264d4cef1ac475858d16c403de74d", size = 522865 },
{ url = "https://files.pythonhosted.org/packages/a1/0d/7f9ae243c04e96c5455d111e21b09087d0eeaf9a1369e13a01c7d3d82478/watchfiles-1.0.5-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a2014a2b18ad3ca53b1f6c23f8cd94a18ce930c1837bd891262c182640eb40a6", size = 499887 },
{ url = "https://files.pythonhosted.org/packages/8e/0f/a257766998e26aca4b3acf2ae97dff04b57071e991a510857d3799247c67/watchfiles-1.0.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10f6ae86d5cb647bf58f9f655fcf577f713915a5d69057a0371bc257e2553234", size = 454498 },
{ url = "https://files.pythonhosted.org/packages/81/79/8bf142575a03e0af9c3d5f8bcae911ee6683ae93a625d349d4ecf4c8f7df/watchfiles-1.0.5-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:1a7bac2bde1d661fb31f4d4e8e539e178774b76db3c2c17c4bb3e960a5de07a2", size = 630663 },
{ url = "https://files.pythonhosted.org/packages/f1/80/abe2e79f610e45c63a70d271caea90c49bbf93eb00fa947fa9b803a1d51f/watchfiles-1.0.5-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:4ab626da2fc1ac277bbf752446470b367f84b50295264d2d313e28dc4405d663", size = 625410 },
{ url = "https://files.pythonhosted.org/packages/91/6f/bc7fbecb84a41a9069c2c6eb6319f7f7df113adf113e358c57fc1aff7ff5/watchfiles-1.0.5-cp312-cp312-win32.whl", hash = "sha256:9f4571a783914feda92018ef3901dab8caf5b029325b5fe4558c074582815249", size = 277965 },
{ url = "https://files.pythonhosted.org/packages/99/a5/bf1c297ea6649ec59e935ab311f63d8af5faa8f0b86993e3282b984263e3/watchfiles-1.0.5-cp312-cp312-win_amd64.whl", hash = "sha256:360a398c3a19672cf93527f7e8d8b60d8275119c5d900f2e184d32483117a705", size = 291693 },
{ url = "https://files.pythonhosted.org/packages/7f/7b/fd01087cc21db5c47e5beae507b87965db341cce8a86f9eb12bf5219d4e0/watchfiles-1.0.5-cp312-cp312-win_arm64.whl", hash = "sha256:1a2902ede862969077b97523987c38db28abbe09fb19866e711485d9fbf0d417", size = 283287 },
{ url = "https://files.pythonhosted.org/packages/c7/62/435766874b704f39b2fecd8395a29042db2b5ec4005bd34523415e9bd2e0/watchfiles-1.0.5-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:0b289572c33a0deae62daa57e44a25b99b783e5f7aed81b314232b3d3c81a11d", size = 401531 },
{ url = "https://files.pythonhosted.org/packages/6e/a6/e52a02c05411b9cb02823e6797ef9bbba0bfaf1bb627da1634d44d8af833/watchfiles-1.0.5-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:a056c2f692d65bf1e99c41045e3bdcaea3cb9e6b5a53dcaf60a5f3bd95fc9763", size = 392417 },
{ url = "https://files.pythonhosted.org/packages/3f/53/c4af6819770455932144e0109d4854437769672d7ad897e76e8e1673435d/watchfiles-1.0.5-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b9dca99744991fc9850d18015c4f0438865414e50069670f5f7eee08340d8b40", size = 453423 },
{ url = "https://files.pythonhosted.org/packages/cb/d1/8e88df58bbbf819b8bc5cfbacd3c79e01b40261cad0fc84d1e1ebd778a07/watchfiles-1.0.5-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:894342d61d355446d02cd3988a7326af344143eb33a2fd5d38482a92072d9563", size = 458185 },
{ url = "https://files.pythonhosted.org/packages/ff/70/fffaa11962dd5429e47e478a18736d4e42bec42404f5ee3b92ef1b87ad60/watchfiles-1.0.5-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ab44e1580924d1ffd7b3938e02716d5ad190441965138b4aa1d1f31ea0877f04", size = 486696 },
{ url = "https://files.pythonhosted.org/packages/39/db/723c0328e8b3692d53eb273797d9a08be6ffb1d16f1c0ba2bdbdc2a3852c/watchfiles-1.0.5-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d6f9367b132078b2ceb8d066ff6c93a970a18c3029cea37bfd7b2d3dd2e5db8f", size = 522327 },
{ url = "https://files.pythonhosted.org/packages/cd/05/9fccc43c50c39a76b68343484b9da7b12d42d0859c37c61aec018c967a32/watchfiles-1.0.5-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f2e55a9b162e06e3f862fb61e399fe9f05d908d019d87bf5b496a04ef18a970a", size = 499741 },
{ url = "https://files.pythonhosted.org/packages/23/14/499e90c37fa518976782b10a18b18db9f55ea73ca14641615056f8194bb3/watchfiles-1.0.5-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0125f91f70e0732a9f8ee01e49515c35d38ba48db507a50c5bdcad9503af5827", size = 453995 },
{ url = "https://files.pythonhosted.org/packages/61/d9/f75d6840059320df5adecd2c687fbc18960a7f97b55c300d20f207d48aef/watchfiles-1.0.5-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:13bb21f8ba3248386337c9fa51c528868e6c34a707f729ab041c846d52a0c69a", size = 629693 },
{ url = "https://files.pythonhosted.org/packages/fc/17/180ca383f5061b61406477218c55d66ec118e6c0c51f02d8142895fcf0a9/watchfiles-1.0.5-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:839ebd0df4a18c5b3c1b890145b5a3f5f64063c2a0d02b13c76d78fe5de34936", size = 624677 },
{ url = "https://files.pythonhosted.org/packages/bf/15/714d6ef307f803f236d69ee9d421763707899d6298d9f3183e55e366d9af/watchfiles-1.0.5-cp313-cp313-win32.whl", hash = "sha256:4a8ec1e4e16e2d5bafc9ba82f7aaecfeec990ca7cd27e84fb6f191804ed2fcfc", size = 277804 },
{ url = "https://files.pythonhosted.org/packages/a8/b4/c57b99518fadf431f3ef47a610839e46e5f8abf9814f969859d1c65c02c7/watchfiles-1.0.5-cp313-cp313-win_amd64.whl", hash = "sha256:f436601594f15bf406518af922a89dcaab416568edb6f65c4e5bbbad1ea45c11", size = 291087 },
]
[[package]]
name = "wcwidth"
version = "0.2.13"
@@ -1967,6 +2179,44 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/f4/24/2a3e3df732393fed8b3ebf2ec078f05546de641fe1b667ee316ec1dcf3b7/webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78", size = 11774 },
]
[[package]]
name = "xxhash"
version = "3.5.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/00/5e/d6e5258d69df8b4ed8c83b6664f2b47d30d2dec551a29ad72a6c69eafd31/xxhash-3.5.0.tar.gz", hash = "sha256:84f2caddf951c9cbf8dc2e22a89d4ccf5d86391ac6418fe81e3c67d0cf60b45f", size = 84241 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/07/0e/1bfce2502c57d7e2e787600b31c83535af83746885aa1a5f153d8c8059d6/xxhash-3.5.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:14470ace8bd3b5d51318782cd94e6f94431974f16cb3b8dc15d52f3b69df8e00", size = 31969 },
{ url = "https://files.pythonhosted.org/packages/3f/d6/8ca450d6fe5b71ce521b4e5db69622383d039e2b253e9b2f24f93265b52c/xxhash-3.5.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:59aa1203de1cb96dbeab595ded0ad0c0056bb2245ae11fac11c0ceea861382b9", size = 30787 },
{ url = "https://files.pythonhosted.org/packages/5b/84/de7c89bc6ef63d750159086a6ada6416cc4349eab23f76ab870407178b93/xxhash-3.5.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:08424f6648526076e28fae6ea2806c0a7d504b9ef05ae61d196d571e5c879c84", size = 220959 },
{ url = "https://files.pythonhosted.org/packages/fe/86/51258d3e8a8545ff26468c977101964c14d56a8a37f5835bc0082426c672/xxhash-3.5.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:61a1ff00674879725b194695e17f23d3248998b843eb5e933007ca743310f793", size = 200006 },
{ url = "https://files.pythonhosted.org/packages/02/0a/96973bd325412feccf23cf3680fd2246aebf4b789122f938d5557c54a6b2/xxhash-3.5.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f2f2c61bee5844d41c3eb015ac652a0229e901074951ae48581d58bfb2ba01be", size = 428326 },
{ url = "https://files.pythonhosted.org/packages/11/a7/81dba5010f7e733de88af9555725146fc133be97ce36533867f4c7e75066/xxhash-3.5.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9d32a592cac88d18cc09a89172e1c32d7f2a6e516c3dfde1b9adb90ab5df54a6", size = 194380 },
{ url = "https://files.pythonhosted.org/packages/fb/7d/f29006ab398a173f4501c0e4977ba288f1c621d878ec217b4ff516810c04/xxhash-3.5.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:70dabf941dede727cca579e8c205e61121afc9b28516752fd65724be1355cc90", size = 207934 },
{ url = "https://files.pythonhosted.org/packages/8a/6e/6e88b8f24612510e73d4d70d9b0c7dff62a2e78451b9f0d042a5462c8d03/xxhash-3.5.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:e5d0ddaca65ecca9c10dcf01730165fd858533d0be84c75c327487c37a906a27", size = 216301 },
{ url = "https://files.pythonhosted.org/packages/af/51/7862f4fa4b75a25c3b4163c8a873f070532fe5f2d3f9b3fc869c8337a398/xxhash-3.5.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:3e5b5e16c5a480fe5f59f56c30abdeba09ffd75da8d13f6b9b6fd224d0b4d0a2", size = 203351 },
{ url = "https://files.pythonhosted.org/packages/22/61/8d6a40f288f791cf79ed5bb113159abf0c81d6efb86e734334f698eb4c59/xxhash-3.5.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:149b7914451eb154b3dfaa721315117ea1dac2cc55a01bfbd4df7c68c5dd683d", size = 210294 },
{ url = "https://files.pythonhosted.org/packages/17/02/215c4698955762d45a8158117190261b2dbefe9ae7e5b906768c09d8bc74/xxhash-3.5.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:eade977f5c96c677035ff39c56ac74d851b1cca7d607ab3d8f23c6b859379cab", size = 414674 },
{ url = "https://files.pythonhosted.org/packages/31/5c/b7a8db8a3237cff3d535261325d95de509f6a8ae439a5a7a4ffcff478189/xxhash-3.5.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:fa9f547bd98f5553d03160967866a71056a60960be00356a15ecc44efb40ba8e", size = 192022 },
{ url = "https://files.pythonhosted.org/packages/78/e3/dd76659b2811b3fd06892a8beb850e1996b63e9235af5a86ea348f053e9e/xxhash-3.5.0-cp312-cp312-win32.whl", hash = "sha256:f7b58d1fd3551b8c80a971199543379be1cee3d0d409e1f6d8b01c1a2eebf1f8", size = 30170 },
{ url = "https://files.pythonhosted.org/packages/d9/6b/1c443fe6cfeb4ad1dcf231cdec96eb94fb43d6498b4469ed8b51f8b59a37/xxhash-3.5.0-cp312-cp312-win_amd64.whl", hash = "sha256:fa0cafd3a2af231b4e113fba24a65d7922af91aeb23774a8b78228e6cd785e3e", size = 30040 },
{ url = "https://files.pythonhosted.org/packages/0f/eb/04405305f290173acc0350eba6d2f1a794b57925df0398861a20fbafa415/xxhash-3.5.0-cp312-cp312-win_arm64.whl", hash = "sha256:586886c7e89cb9828bcd8a5686b12e161368e0064d040e225e72607b43858ba2", size = 26796 },
{ url = "https://files.pythonhosted.org/packages/c9/b8/e4b3ad92d249be5c83fa72916c9091b0965cb0faeff05d9a0a3870ae6bff/xxhash-3.5.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:37889a0d13b0b7d739cfc128b1c902f04e32de17b33d74b637ad42f1c55101f6", size = 31795 },
{ url = "https://files.pythonhosted.org/packages/fc/d8/b3627a0aebfbfa4c12a41e22af3742cf08c8ea84f5cc3367b5de2d039cce/xxhash-3.5.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:97a662338797c660178e682f3bc180277b9569a59abfb5925e8620fba00b9fc5", size = 30792 },
{ url = "https://files.pythonhosted.org/packages/c3/cc/762312960691da989c7cd0545cb120ba2a4148741c6ba458aa723c00a3f8/xxhash-3.5.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7f85e0108d51092bdda90672476c7d909c04ada6923c14ff9d913c4f7dc8a3bc", size = 220950 },
{ url = "https://files.pythonhosted.org/packages/fe/e9/cc266f1042c3c13750e86a535496b58beb12bf8c50a915c336136f6168dc/xxhash-3.5.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cd2fd827b0ba763ac919440042302315c564fdb797294d86e8cdd4578e3bc7f3", size = 199980 },
{ url = "https://files.pythonhosted.org/packages/bf/85/a836cd0dc5cc20376de26b346858d0ac9656f8f730998ca4324921a010b9/xxhash-3.5.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:82085c2abec437abebf457c1d12fccb30cc8b3774a0814872511f0f0562c768c", size = 428324 },
{ url = "https://files.pythonhosted.org/packages/b4/0e/15c243775342ce840b9ba34aceace06a1148fa1630cd8ca269e3223987f5/xxhash-3.5.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:07fda5de378626e502b42b311b049848c2ef38784d0d67b6f30bb5008642f8eb", size = 194370 },
{ url = "https://files.pythonhosted.org/packages/87/a1/b028bb02636dfdc190da01951d0703b3d904301ed0ef6094d948983bef0e/xxhash-3.5.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c279f0d2b34ef15f922b77966640ade58b4ccdfef1c4d94b20f2a364617a493f", size = 207911 },
{ url = "https://files.pythonhosted.org/packages/80/d5/73c73b03fc0ac73dacf069fdf6036c9abad82de0a47549e9912c955ab449/xxhash-3.5.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:89e66ceed67b213dec5a773e2f7a9e8c58f64daeb38c7859d8815d2c89f39ad7", size = 216352 },
{ url = "https://files.pythonhosted.org/packages/b6/2a/5043dba5ddbe35b4fe6ea0a111280ad9c3d4ba477dd0f2d1fe1129bda9d0/xxhash-3.5.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:bcd51708a633410737111e998ceb3b45d3dbc98c0931f743d9bb0a209033a326", size = 203410 },
{ url = "https://files.pythonhosted.org/packages/a2/b2/9a8ded888b7b190aed75b484eb5c853ddd48aa2896e7b59bbfbce442f0a1/xxhash-3.5.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:3ff2c0a34eae7df88c868be53a8dd56fbdf592109e21d4bfa092a27b0bf4a7bf", size = 210322 },
{ url = "https://files.pythonhosted.org/packages/98/62/440083fafbc917bf3e4b67c2ade621920dd905517e85631c10aac955c1d2/xxhash-3.5.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:4e28503dccc7d32e0b9817aa0cbfc1f45f563b2c995b7a66c4c8a0d232e840c7", size = 414725 },
{ url = "https://files.pythonhosted.org/packages/75/db/009206f7076ad60a517e016bb0058381d96a007ce3f79fa91d3010f49cc2/xxhash-3.5.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a6c50017518329ed65a9e4829154626f008916d36295b6a3ba336e2458824c8c", size = 192070 },
{ url = "https://files.pythonhosted.org/packages/1f/6d/c61e0668943a034abc3a569cdc5aeae37d686d9da7e39cf2ed621d533e36/xxhash-3.5.0-cp313-cp313-win32.whl", hash = "sha256:53a068fe70301ec30d868ece566ac90d873e3bb059cf83c32e76012c889b8637", size = 30172 },
{ url = "https://files.pythonhosted.org/packages/96/14/8416dce965f35e3d24722cdf79361ae154fa23e2ab730e5323aa98d7919e/xxhash-3.5.0-cp313-cp313-win_amd64.whl", hash = "sha256:80babcc30e7a1a484eab952d76a4f4673ff601f54d5142c26826502740e70b43", size = 30041 },
{ url = "https://files.pythonhosted.org/packages/27/ee/518b72faa2073f5aa8e3262408d284892cb79cf2754ba0c3a5870645ef73/xxhash-3.5.0-cp313-cp313-win_arm64.whl", hash = "sha256:4811336f1ce11cac89dcbd18f3a25c527c16311709a89313c3acaf771def2d4b", size = 26801 },
]
[[package]]
name = "yarl"
version = "1.18.3"
+8 -2
@@ -6,7 +6,7 @@
"scripts": {
"build": "next build",
"check": "next lint && tsc --noEmit",
"dev": "next dev --turbo",
"dev": "dotenv -e ../.env -- next dev --turbo",
"scan": "next dev & npx react-scan@latest localhost:3000",
"format:check": "prettier --check \"**/*.{ts,tsx,js,jsx,mdx}\" --cache",
"format:write": "prettier --write \"**/*.{ts,tsx,js,jsx,mdx}\" --cache",
@@ -35,12 +35,16 @@
"@radix-ui/react-switch": "^1.2.2",
"@radix-ui/react-tabs": "^1.1.4",
"@radix-ui/react-tooltip": "^1.2.0",
"@rc-component/mentions": "^1.2.0",
"@t3-oss/env-nextjs": "^0.11.0",
"@tailwindcss/typography": "^0.5.16",
"@tiptap/extension-document": "^2.12.0",
"@tiptap/extension-mention": "^2.12.0",
"@tiptap/extension-table": "^2.11.7",
"@tiptap/extension-table-cell": "^2.11.7",
"@tiptap/extension-table-header": "^2.11.7",
"@tiptap/extension-table-row": "^2.11.7",
"@tiptap/extension-text": "^2.12.0",
"@tiptap/react": "^2.11.7",
"@xyflow/react": "^12.6.0",
"best-effort-json-parser": "^1.1.3",
@@ -70,6 +74,7 @@
"remark-math": "^6.0.0",
"sonner": "^2.0.3",
"tailwind-merge": "^3.2.0",
"tippy.js": "^6.3.7",
"tiptap-markdown": "^0.8.10",
"tw-animate-css": "^1.2.5",
"unist-util-visit": "^5.0.0",
@@ -86,6 +91,7 @@
"@types/react": "^19.0.0",
"@types/react-dom": "^19.0.0",
"@types/react-syntax-highlighter": "^15.5.13",
"dotenv-cli": "^8.0.0",
"eslint": "^9.23.0",
"eslint-config-next": "^15.2.3",
"postcss": "^8.5.3",
@@ -105,4 +111,4 @@
"sharp"
]
}
}
}
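Note: the package.json diff above wraps the dev script with dotenv-cli so the web app loads the shared .env one directory up before Next.js starts. A small usage sketch, assuming the script is run from the web package directory with pnpm (the expansion below is simply the script text from the diff):

    # "pnpm dev" expands to: dotenv -e ../.env -- next dev --turbo
    # dotenv-cli loads ../.env into the environment, then execs everything after "--"
    pnpm dev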
+228 -8
@@ -62,12 +62,21 @@ importers:
'@radix-ui/react-tooltip':
specifier: ^1.2.0
version: 1.2.0(@types/react-dom@19.1.1(@types/react@19.1.2))(@types/react@19.1.2)(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
'@rc-component/mentions':
specifier: ^1.2.0
version: 1.2.0(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
'@t3-oss/env-nextjs':
specifier: ^0.11.0
version: 0.11.1(typescript@5.8.3)(zod@3.24.3)
'@tailwindcss/typography':
specifier: ^0.5.16
version: 0.5.16(tailwindcss@4.1.4)
'@tiptap/extension-document':
specifier: ^2.12.0
version: 2.12.0(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))
'@tiptap/extension-mention':
specifier: ^2.12.0
version: 2.12.0(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))(@tiptap/pm@2.11.7)(@tiptap/suggestion@2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))(@tiptap/pm@2.11.7))
'@tiptap/extension-table':
specifier: ^2.11.7
version: 2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))(@tiptap/pm@2.11.7)
@@ -80,6 +89,9 @@ importers:
'@tiptap/extension-table-row':
specifier: ^2.11.7
version: 2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))
'@tiptap/extension-text':
specifier: ^2.12.0
version: 2.12.0(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))
'@tiptap/react':
specifier: ^2.11.7
version: 2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))(@tiptap/pm@2.11.7)(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
@@ -167,6 +179,9 @@ importers:
tailwind-merge:
specifier: ^3.2.0
version: 3.2.0
tippy.js:
specifier: ^6.3.7
version: 6.3.7
tiptap-markdown:
specifier: ^0.8.10
version: 0.8.10(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))
@@ -210,6 +225,9 @@ importers:
'@types/react-syntax-highlighter':
specifier: ^15.5.13
version: 15.5.13
dotenv-cli:
specifier: ^8.0.0
version: 8.0.0
eslint:
specifier: ^9.23.0
version: 9.24.0(jiti@2.4.2)
@@ -1193,6 +1211,56 @@ packages:
'@radix-ui/rect@1.1.1':
resolution: {integrity: sha512-HPwpGIzkl28mWyZqG52jiqDJ12waP11Pa1lGoiyUkIEuMLBP0oeK/C89esbXrxsky5we7dfd8U58nm0SgAWpVw==}
'@rc-component/input@1.0.1':
resolution: {integrity: sha512-omxsjWpB+RamzDDB0NzgV6qI7Ok/U6nrN2KLL/hLZJcI7sZZgLYAN+Xs1pN7OYBnUeyn25PizcntEE0nofHv8Q==}
peerDependencies:
react: '>=16.0.0'
react-dom: '>=16.0.0'
'@rc-component/mentions@1.2.0':
resolution: {integrity: sha512-dSr9mX5bQWDegeVLr+NoffjZO5paG/nzM5f+RVslpznfVqR5d3c+xan+f6ZqZWHJqJOfROqNGAkUb8pqqAV7wQ==}
peerDependencies:
react: '>=16.9.0'
react-dom: '>=16.9.0'
'@rc-component/menu@1.1.3':
resolution: {integrity: sha512-NN/J0nJFwwDfQBycl9mordDTBdSai5Ie4nxaGkH2eHVa37KjyhpU98EtcVb/ss393I7SZTDCvoylS3MQOjgYkw==}
peerDependencies:
react: '>=16.9.0'
react-dom: '>=16.9.0'
'@rc-component/motion@1.1.4':
resolution: {integrity: sha512-rz3+kqQ05xEgIAB9/UKQZKCg5CO/ivGNU78QWYKVfptmbjJKynZO4KXJ7pJD3oMxE9aW94LD/N3eppXWeysTjw==}
peerDependencies:
react: '>=16.9.0'
react-dom: '>=16.9.0'
'@rc-component/portal@2.0.0':
resolution: {integrity: sha512-337ADhBfgH02S8OujUl33OT+8zVJ67eyuUq11j/dE71rXKYNihMsggW8R2VfI2aL3SciDp8gAFsmPVoPkxLUGw==}
engines: {node: '>=12.x'}
peerDependencies:
react: '>=18.0.0'
react-dom: '>=18.0.0'
'@rc-component/resize-observer@1.0.0':
resolution: {integrity: sha512-inR8Ka87OOwtrDJzdVp2VuEVlc5nK20lHolvkwFUnXwV50p+nLhKny1NvNTCKvBmS/pi/rTn/1Hvsw10sRRnXA==}
peerDependencies:
react: '>=16.9.0'
react-dom: '>=16.9.0'
'@rc-component/textarea@1.0.0':
resolution: {integrity: sha512-GuXakeRWZuWUnF2sqfC8RjtzfAh5UI89dPk6r5SgosyQGfQIueuN8LkWmFq5OKTOJIlc82MOjHiPBigKB9+KGw==}
peerDependencies:
react: '>=16.9.0'
react-dom: '>=16.9.0'
'@rc-component/trigger@3.4.0':
resolution: {integrity: sha512-Vu+RS7bGAHHNtzP6EzrMwH+xiZl+SHQgR98oAUXtoQIy4+4lsSppwQPcl6Q7ORZuZevil1BSw4GHXNWD8BJOXw==}
engines: {node: '>=8.x'}
peerDependencies:
react: '>=18.0.0'
react-dom: '>=18.0.0'
'@rc-component/util@1.2.1':
resolution: {integrity: sha512-AUVu6jO+lWjQnUOOECwu8iR0EdElQgWW5NBv5vP/Uf9dWbAX3udhMutRlkVXjuac2E40ghkFy+ve00mc/3Fymg==}
peerDependencies:
@@ -1399,8 +1467,8 @@ packages:
'@tiptap/core': ^2.7.0
'@tiptap/extension-text-style': ^2.7.0
'@tiptap/extension-document@2.11.7':
resolution: {integrity: sha512-95ouJXPjdAm9+VBRgFo4lhDoMcHovyl/awORDI8gyEn0Rdglt+ZRZYoySFzbVzer9h0cre+QdIwr9AIzFFbfdA==}
'@tiptap/extension-document@2.12.0':
resolution: {integrity: sha512-sA1Q+mxDIv0Y3qQTBkYGwknNbDcGFiJ/fyAFholXpqbrcRx3GavwR/o0chBdsJZlFht0x7AWGwUYWvIo7wYilA==}
peerDependencies:
'@tiptap/core': ^2.7.0
@@ -1470,6 +1538,13 @@ packages:
peerDependencies:
'@tiptap/core': ^2.7.0
'@tiptap/extension-mention@2.12.0':
resolution: {integrity: sha512-+b/fqOU+pRWWAo0ZfyInkhkvV0Ub5RpNrYZ45v2nn5PjbXbxyxNQ51zT6cGk2F6Jmc6UBmlR8iqqNTIQY9ieEg==}
peerDependencies:
'@tiptap/core': ^2.7.0
'@tiptap/pm': ^2.7.0
'@tiptap/suggestion': ^2.7.0
'@tiptap/extension-ordered-list@2.11.7':
resolution: {integrity: sha512-bLGCHDMB0vbJk7uu8bRg8vES3GsvxkX7Cgjgm/6xysHFbK98y0asDtNxkW1VvuRreNGz4tyB6vkcVCfrxl4jKw==}
peerDependencies:
@@ -1528,8 +1603,8 @@ packages:
peerDependencies:
'@tiptap/core': ^2.7.0
'@tiptap/extension-text@2.11.7':
resolution: {integrity: sha512-wObCn8qZkIFnXTLvBP+X8KgaEvTap/FJ/i4hBMfHBCKPGDx99KiJU6VIbDXG8d5ZcFZE0tOetK1pP5oI7qgMlQ==}
'@tiptap/extension-text@2.12.0':
resolution: {integrity: sha512-0ytN9V1tZYTXdiYDQg4FB2SQ56JAJC9r/65snefb9ztl+gZzDrIvih7CflHs1ic9PgyjexfMLeH+VzuMccNyZw==}
peerDependencies:
'@tiptap/core': ^2.7.0
@@ -2221,6 +2296,18 @@ packages:
resolution: {integrity: sha512-35mSku4ZXK0vfCuHEDAwt55dg2jNajHZ1odvF+8SSr82EsZY4QmXfuWso8oEd8zRhVObSN18aM0CjSdoBX7zIw==}
engines: {node: '>=0.10.0'}
dotenv-cli@8.0.0:
resolution: {integrity: sha512-aLqYbK7xKOiTMIRf1lDPbI+Y+Ip/wo5k3eyp6ePysVaSqbyxjyK3dK35BTxG+rmd7djf5q2UPs4noPNH+cj0Qw==}
hasBin: true
dotenv-expand@10.0.0:
resolution: {integrity: sha512-GopVGCpVS1UKH75VKHGuQFqS1Gusej0z4FyQkPdwjil2gNIv+LNsqBlboOzpJFZKVT95GkCyWJbBSdFEFUWI2A==}
engines: {node: '>=12'}
dotenv@16.5.0:
resolution: {integrity: sha512-m/C+AwOAr9/W1UOIZUo232ejMNnJAJtYQjUbHoNTBNTJSvqzzDh7vnrei3o3r3m9blf6ZoDkvcw0VmozNRFJxg==}
engines: {node: '>=12'}
dunder-proto@1.0.1:
resolution: {integrity: sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==}
engines: {node: '>= 0.4'}
@@ -3520,6 +3607,24 @@ packages:
peerDependencies:
webpack: ^4.0.0 || ^5.0.0
rc-overflow@1.4.1:
resolution: {integrity: sha512-3MoPQQPV1uKyOMVNd6SZfONi+f3st0r8PksexIdBTeIYbMX0Jr+k7pHEDvsXtR4BpCv90/Pv2MovVNhktKrwvw==}
peerDependencies:
react: '>=16.9.0'
react-dom: '>=16.9.0'
rc-resize-observer@1.4.3:
resolution: {integrity: sha512-YZLjUbyIWox8E9i9C3Tm7ia+W7euPItNWSPX5sCcQTYbnwDb5uNpnLHQCG1f22oZWUhLw4Mv2tFmeWe68CDQRQ==}
peerDependencies:
react: '>=16.9.0'
react-dom: '>=16.9.0'
rc-util@5.44.4:
resolution: {integrity: sha512-resueRJzmHG9Q6rI/DfK6Kdv9/Lfls05vzMs1Sk3M2P+3cJa+MakaZyWY8IPfehVuhPJFKrIY1IK4GqbiaiY5w==}
peerDependencies:
react: '>=16.9.0'
react-dom: '>=16.9.0'
react-css-styled@1.1.9:
resolution: {integrity: sha512-M7fJZ3IWFaIHcZEkoFOnkjdiUFmwd8d+gTh2bpqMOcnxy/0Gsykw4dsL4QBiKsxcGow6tETUa4NAUcmJF+/nfw==}
@@ -3639,6 +3744,9 @@ packages:
resolution: {integrity: sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw==}
engines: {node: '>=0.10.0'}
resize-observer-polyfill@1.5.1:
resolution: {integrity: sha512-LwZrotdHOo12nQuZlHEmtuXdqGoOD0OhaxopaNFxWzInpEgaLWoVuAMbTzixuosCx2nEG58ngzW3vxdWoxIgdg==}
resolve-from@4.0.0:
resolution: {integrity: sha512-pb/MYmXstAkysRFx8piNI1tGFNQIFA3vkE3Gq4EuA1dF6gHp/+vgZqsCGJapvy8N3Q+4o7FwvquPJcnZ7RYy4g==}
engines: {node: '>=4'}
@@ -5047,6 +5155,74 @@ snapshots:
'@radix-ui/rect@1.1.1': {}
'@rc-component/input@1.0.1(react-dom@19.1.0(react@19.1.0))(react@19.1.0)':
dependencies:
'@rc-component/util': 1.2.1(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
classnames: 2.5.1
react: 19.1.0
react-dom: 19.1.0(react@19.1.0)
'@rc-component/mentions@1.2.0(react-dom@19.1.0(react@19.1.0))(react@19.1.0)':
dependencies:
'@rc-component/input': 1.0.1(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
'@rc-component/menu': 1.1.3(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
'@rc-component/textarea': 1.0.0(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
'@rc-component/trigger': 3.4.0(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
'@rc-component/util': 1.2.1(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
classnames: 2.5.1
react: 19.1.0
react-dom: 19.1.0(react@19.1.0)
'@rc-component/menu@1.1.3(react-dom@19.1.0(react@19.1.0))(react@19.1.0)':
dependencies:
'@rc-component/motion': 1.1.4(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
'@rc-component/trigger': 3.4.0(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
'@rc-component/util': 1.2.1(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
classnames: 2.5.1
rc-overflow: 1.4.1(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
react: 19.1.0
react-dom: 19.1.0(react@19.1.0)
'@rc-component/motion@1.1.4(react-dom@19.1.0(react@19.1.0))(react@19.1.0)':
dependencies:
'@rc-component/util': 1.2.1(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
classnames: 2.5.1
react: 19.1.0
react-dom: 19.1.0(react@19.1.0)
'@rc-component/portal@2.0.0(react-dom@19.1.0(react@19.1.0))(react@19.1.0)':
dependencies:
'@rc-component/util': 1.2.1(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
classnames: 2.5.1
react: 19.1.0
react-dom: 19.1.0(react@19.1.0)
'@rc-component/resize-observer@1.0.0(react-dom@19.1.0(react@19.1.0))(react@19.1.0)':
dependencies:
'@rc-component/util': 1.2.1(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
classnames: 2.5.1
react: 19.1.0
react-dom: 19.1.0(react@19.1.0)
'@rc-component/textarea@1.0.0(react-dom@19.1.0(react@19.1.0))(react@19.1.0)':
dependencies:
'@rc-component/input': 1.0.1(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
'@rc-component/resize-observer': 1.0.0(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
'@rc-component/util': 1.2.1(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
classnames: 2.5.1
react: 19.1.0
react-dom: 19.1.0(react@19.1.0)
'@rc-component/trigger@3.4.0(react-dom@19.1.0(react@19.1.0))(react@19.1.0)':
dependencies:
'@rc-component/motion': 1.1.4(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
'@rc-component/portal': 2.0.0(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
'@rc-component/resize-observer': 1.0.0(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
'@rc-component/util': 1.2.1(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
classnames: 2.5.1
react: 19.1.0
react-dom: 19.1.0(react@19.1.0)
'@rc-component/util@1.2.1(react-dom@19.1.0(react@19.1.0))(react@19.1.0)':
dependencies:
react: 19.1.0
@@ -5216,7 +5392,7 @@ snapshots:
'@tiptap/core': 2.11.7(@tiptap/pm@2.11.7)
'@tiptap/extension-text-style': 2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))
'@tiptap/extension-document@2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))':
'@tiptap/extension-document@2.12.0(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))':
dependencies:
'@tiptap/core': 2.11.7(@tiptap/pm@2.11.7)
@@ -5276,6 +5452,12 @@ snapshots:
dependencies:
'@tiptap/core': 2.11.7(@tiptap/pm@2.11.7)
'@tiptap/extension-mention@2.12.0(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))(@tiptap/pm@2.11.7)(@tiptap/suggestion@2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))(@tiptap/pm@2.11.7))':
dependencies:
'@tiptap/core': 2.11.7(@tiptap/pm@2.11.7)
'@tiptap/pm': 2.11.7
'@tiptap/suggestion': 2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))(@tiptap/pm@2.11.7)
'@tiptap/extension-ordered-list@2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))':
dependencies:
'@tiptap/core': 2.11.7(@tiptap/pm@2.11.7)
@@ -5323,7 +5505,7 @@ snapshots:
dependencies:
'@tiptap/core': 2.11.7(@tiptap/pm@2.11.7)
'@tiptap/extension-text@2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))':
'@tiptap/extension-text@2.12.0(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))':
dependencies:
'@tiptap/core': 2.11.7(@tiptap/pm@2.11.7)
@@ -5376,7 +5558,7 @@ snapshots:
'@tiptap/extension-bullet-list': 2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))
'@tiptap/extension-code': 2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))
'@tiptap/extension-code-block': 2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))(@tiptap/pm@2.11.7)
'@tiptap/extension-document': 2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))
'@tiptap/extension-document': 2.12.0(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))
'@tiptap/extension-dropcursor': 2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))(@tiptap/pm@2.11.7)
'@tiptap/extension-gapcursor': 2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))(@tiptap/pm@2.11.7)
'@tiptap/extension-hard-break': 2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))
@@ -5388,7 +5570,7 @@ snapshots:
'@tiptap/extension-ordered-list': 2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))
'@tiptap/extension-paragraph': 2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))
'@tiptap/extension-strike': 2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))
'@tiptap/extension-text': 2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))
'@tiptap/extension-text': 2.12.0(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))
'@tiptap/extension-text-style': 2.11.7(@tiptap/core@2.11.7(@tiptap/pm@2.11.7))
'@tiptap/pm': 2.11.7
@@ -6106,6 +6288,17 @@ snapshots:
dependencies:
esutils: 2.0.3
dotenv-cli@8.0.0:
dependencies:
cross-spawn: 7.0.6
dotenv: 16.5.0
dotenv-expand: 10.0.0
minimist: 1.2.8
dotenv-expand@10.0.0: {}
dotenv@16.5.0: {}
dunder-proto@1.0.1:
dependencies:
call-bind-apply-helpers: 1.0.2
@@ -7816,6 +8009,31 @@ snapshots:
schema-utils: 3.3.0
webpack: 5.99.6
rc-overflow@1.4.1(react-dom@19.1.0(react@19.1.0))(react@19.1.0):
dependencies:
'@babel/runtime': 7.27.0
classnames: 2.5.1
rc-resize-observer: 1.4.3(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
rc-util: 5.44.4(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
react: 19.1.0
react-dom: 19.1.0(react@19.1.0)
rc-resize-observer@1.4.3(react-dom@19.1.0(react@19.1.0))(react@19.1.0):
dependencies:
'@babel/runtime': 7.27.0
classnames: 2.5.1
rc-util: 5.44.4(react-dom@19.1.0(react@19.1.0))(react@19.1.0)
react: 19.1.0
react-dom: 19.1.0(react@19.1.0)
resize-observer-polyfill: 1.5.1
rc-util@5.44.4(react-dom@19.1.0(react@19.1.0))(react@19.1.0):
dependencies:
'@babel/runtime': 7.27.0
react: 19.1.0
react-dom: 19.1.0(react@19.1.0)
react-is: 18.3.1
react-css-styled@1.1.9:
dependencies:
css-styled: 1.0.8
@@ -8020,6 +8238,8 @@ snapshots:
require-from-string@2.0.2: {}
resize-observer-polyfill@1.5.1: {}
resolve-from@4.0.0: {}
resolve-pkg-maps@1.0.0: {}
Binary file not shown.
@@ -23,19 +23,19 @@ event: message_chunk
data: {"thread_id": "LmC3xxJCFljoFXggnmvst", "agent": "planner", "id": "run-33af75e6-c1b5-4276-9749-7cfb7a967402", "role": "assistant", "content": " reason it's trending, and some key statistics (stars, forks, contributors, etc.).\",\n \"title\": \"Research Plan: Top Trending GitHub Repository Today"}
event: message_chunk
data: {"thread_id": "LmC3xxJCFljoFXggnmvst", "agent": "planner", "id": "run-33af75e6-c1b5-4276-9749-7cfb7a967402", "role": "assistant", "content": "\",\n \"steps\": [\n {\n \"need_web_search\": true,\n \"title\": \"Identify and Profile the Top Trending Repository\",\n \"description\": \"Identify the #1 trending repository on"}
data: {"thread_id": "LmC3xxJCFljoFXggnmvst", "agent": "planner", "id": "run-33af75e6-c1b5-4276-9749-7cfb7a967402", "role": "assistant", "content": "\",\n \"steps\": [\n {\n \"need_search\": true,\n \"title\": \"Identify and Profile the Top Trending Repository\",\n \"description\": \"Identify the #1 trending repository on"}
event: message_chunk
data: {"thread_id": "LmC3xxJCFljoFXggnmvst", "agent": "planner", "id": "run-33af75e6-c1b5-4276-9749-7cfb7a967402", "role": "assistant", "content": " GitHub today. Collect the following information: repository name, repository owner/organization, a short description of the repository's purpose, the primary programming language used, and the reason GitHub marks it as trending (e.g., 'X new stars today"}
event: message_chunk
data: {"thread_id": "LmC3xxJCFljoFXggnmvst", "agent": "planner", "id": "run-33af75e6-c1b5-4276-9749-7cfb7a967402", "role": "assistant", "content": "'). Note: ensure to filter for 'today' to get the current trending repo.\",\n \"step_type\": \"research\"\n },\n {\n \"need_web_search\": true,\n \"title\": \"Gather Repository Statistics and Community Data\",\n \"description\": \"Collect"}
data: {"thread_id": "LmC3xxJCFljoFXggnmvst", "agent": "planner", "id": "run-33af75e6-c1b5-4276-9749-7cfb7a967402", "role": "assistant", "content": "'). Note: ensure to filter for 'today' to get the current trending repo.\",\n \"step_type\": \"research\"\n },\n {\n \"need_search\": true,\n \"title\": \"Gather Repository Statistics and Community Data\",\n \"description\": \"Collect"}
event: message_chunk
data: {"thread_id": "LmC3xxJCFljoFXggnmvst", "agent": "planner", "id": "run-33af75e6-c1b5-4276-9749-7cfb7a967402", "role": "assistant", "content": " detailed statistics for the top trending repository. This includes the total number of stars, forks, open issues, closed issues, contributors, and recent commit activity. Also, gather data about the community's involvement, such as the number of active contributors in the last month, and any available information on significant discussions or contributions happening"}
event: message_chunk
data: {"thread_id": "LmC3xxJCFljoFXggnmvst", "agent": "planner", "id": "run-33af75e6-c1b5-4276-9749-7cfb7a967402", "role": "assistant", "content": " within the project. Check for recent release notes or announcements.\",\n \"step_type\": \"research\"\n },\n {\n \"need_web_search\": true,\n \"title\": \"Determine Context and Significance\",\n \"description\": \"Research the broader context and significance of the trending"}
data: {"thread_id": "LmC3xxJCFljoFXggnmvst", "agent": "planner", "id": "run-33af75e6-c1b5-4276-9749-7cfb7a967402", "role": "assistant", "content": " within the project. Check for recent release notes or announcements.\",\n \"step_type\": \"research\"\n },\n {\n \"need_search\": true,\n \"title\": \"Determine Context and Significance\",\n \"description\": \"Research the broader context and significance of the trending"}
event: message_chunk
data: {"thread_id": "LmC3xxJCFljoFXggnmvst", "agent": "planner", "id": "run-33af75e6-c1b5-4276-9749-7cfb7a967402", "role": "assistant", "content": " repository. Determine the repository's purpose or function. Investigate the project's background, the problem it solves, or the features it provides. Identify the industry, user base, or application area it serves. Search for recent news, articles, or blog posts mentioning the repository and its impact or potential. Identify its license"}
@@ -20,16 +20,16 @@ event: message_chunk
data: {"thread_id": "PDgExJb-Qsq2fNtO4B_sZ", "agent": "planner", "id": "run-f9561a11-723f-4d5f-917c-95f96601f87f", "role": "assistant", "content": " culinary scene and document its traditional dishes. I will create comprehensive steps to gather the most important data and create a good final report.\",\n \"title\": \"Research"}
event: message_chunk
data: {"thread_id": "PDgExJb-Qsq2fNtO4B_sZ", "agent": "planner", "id": "run-f9561a11-723f-4d5f-917c-95f96601f87f", "role": "assistant", "content": " Plan: Nanjing's Culinary Scene and Traditional Dishes\",\n \"steps\": [\n {\n \"need_web_search\": true,\n "}
data: {"thread_id": "PDgExJb-Qsq2fNtO4B_sZ", "agent": "planner", "id": "run-f9561a11-723f-4d5f-917c-95f96601f87f", "role": "assistant", "content": " Plan: Nanjing's Culinary Scene and Traditional Dishes\",\n \"steps\": [\n {\n \"need_search\": true,\n "}
event: message_chunk
data: {"thread_id": "PDgExJb-Qsq2fNtO4B_sZ", "agent": "planner", "id": "run-f9561a11-723f-4d5f-917c-95f96601f87f", "role": "assistant", "content": "\"title\": \"Identify and Document Key Traditional Nanjing Dishes\",\n \"description\": \"Research and compile a comprehensive list of traditional Nanjing dishes, including their names (in both English and Chinese), detailed descriptions of ingredients and preparation methods, and historical origins"}
event: message_chunk
data: {"thread_id": "PDgExJb-Qsq2fNtO4B_sZ", "agent": "planner", "id": "run-f9561a11-723f-4d5f-917c-95f96601f87f", "role": "assistant", "content": ". Identify dishes that are representative of Nanjing's culinary heritage and those that are less well-known but still significant. Document the specific cooking techniques that characterize Nanjing cuisine.\",\n \"step_type\": \"research\"\n },\n {\n \"need_web_search\": true,\n \"title\": \"Investigate the History and Cultural Significance of Nanjing Cuisine\",\n \"description\": \"Explore the historical influences that have shaped Nanjing's culinary traditions, including its role as a former capital city. Document the cultural significance of specific dishes and"}
data: {"thread_id": "PDgExJb-Qsq2fNtO4B_sZ", "agent": "planner", "id": "run-f9561a11-723f-4d5f-917c-95f96601f87f", "role": "assistant", "content": ". Identify dishes that are representative of Nanjing's culinary heritage and those that are less well-known but still significant. Document the specific cooking techniques that characterize Nanjing cuisine.\",\n \"step_type\": \"research\"\n },\n {\n \"need_search\": true,\n \"title\": \"Investigate the History and Cultural Significance of Nanjing Cuisine\",\n \"description\": \"Explore the historical influences that have shaped Nanjing's culinary traditions, including its role as a former capital city. Document the cultural significance of specific dishes and"}
event: message_chunk
data: {"thread_id": "PDgExJb-Qsq2fNtO4B_sZ", "agent": "planner", "id": "run-f9561a11-723f-4d5f-917c-95f96601f87f", "role": "assistant", "content": " their connection to local customs, festivals, and celebrations. Research the evolution of Nanjing cuisine over time, identifying key periods of change and the factors that contributed to them.\",\n \"step_type\": \"research\"\n },\n {\n \"need_web_search\": true,\n \"title\":"}
data: {"thread_id": "PDgExJb-Qsq2fNtO4B_sZ", "agent": "planner", "id": "run-f9561a11-723f-4d5f-917c-95f96601f87f", "role": "assistant", "content": " their connection to local customs, festivals, and celebrations. Research the evolution of Nanjing cuisine over time, identifying key periods of change and the factors that contributed to them.\",\n \"step_type\": \"research\"\n },\n {\n \"need_search\": true,\n \"title\":"}
event: message_chunk
data: {"thread_id": "PDgExJb-Qsq2fNtO4B_sZ", "agent": "planner", "id": "run-f9561a11-723f-4d5f-917c-95f96601f87f", "role": "assistant", "content": " \"Analyze the Current State of Nanjing's Culinary Scene and Identify Key Restaurants\",\n \"description\": \"Investigate the current state of Nanjing's culinary scene, identifying key restaurants that specialize in traditional Nanjing cuisine. Gather information on their menus, pricing, and customer reviews. Document any trends or changes in the local food"}
File diff suppressed because one or more lines are too long
@@ -83,7 +83,7 @@ event: message_chunk
data: {"thread_id": "5CG_qm7snTVKbpVCrWTon", "agent": "planner", "id": "run-3006007c-5c06-4500-ba23-3fab94c70ae7", "role": "assistant", "content": "\": [\n {\n \""}
event: message_chunk
data: {"thread_id": "5CG_qm7snTVKbpVCrWTon", "agent": "planner", "id": "run-3006007c-5c06-4500-ba23-3fab94c70ae7", "role": "assistant", "content": "need_web_search\":"}
data: {"thread_id": "5CG_qm7snTVKbpVCrWTon", "agent": "planner", "id": "run-3006007c-5c06-4500-ba23-3fab94c70ae7", "role": "assistant", "content": "need_search\":"}
event: message_chunk
data: {"thread_id": "5CG_qm7snTVKbpVCrWTon", "agent": "planner", "id": "run-3006007c-5c06-4500-ba23-3fab94c70ae7", "role": "assistant", "content": " true,\n \""}
@@ -134,7 +134,7 @@ event: message_chunk
data: {"thread_id": "5CG_qm7snTVKbpVCrWTon", "agent": "planner", "id": "run-3006007c-5c06-4500-ba23-3fab94c70ae7", "role": "assistant", "content": " {\n \""}
event: message_chunk
data: {"thread_id": "5CG_qm7snTVKbpVCrWTon", "agent": "planner", "id": "run-3006007c-5c06-4500-ba23-3fab94c70ae7", "role": "assistant", "content": "need_web_search\":"}
data: {"thread_id": "5CG_qm7snTVKbpVCrWTon", "agent": "planner", "id": "run-3006007c-5c06-4500-ba23-3fab94c70ae7", "role": "assistant", "content": "need_search\":"}
event: message_chunk
data: {"thread_id": "5CG_qm7snTVKbpVCrWTon", "agent": "planner", "id": "run-3006007c-5c06-4500-ba23-3fab94c70ae7", "role": "assistant", "content": " true,\n \"title"}
@@ -194,7 +194,7 @@ event: message_chunk
data: {"thread_id": "5CG_qm7snTVKbpVCrWTon", "agent": "planner", "id": "run-3006007c-5c06-4500-ba23-3fab94c70ae7", "role": "assistant", "content": "\"\n },\n {\n \""}
event: message_chunk
data: {"thread_id": "5CG_qm7snTVKbpVCrWTon", "agent": "planner", "id": "run-3006007c-5c06-4500-ba23-3fab94c70ae7", "role": "assistant", "content": "need_web_search\":"}
data: {"thread_id": "5CG_qm7snTVKbpVCrWTon", "agent": "planner", "id": "run-3006007c-5c06-4500-ba23-3fab94c70ae7", "role": "assistant", "content": "need_search\":"}
event: message_chunk
data: {"thread_id": "5CG_qm7snTVKbpVCrWTon", "agent": "planner", "id": "run-3006007c-5c06-4500-ba23-3fab94c70ae7", "role": "assistant", "content": " true,\n \"title"}
@@ -140,7 +140,7 @@ event: message_chunk
data: {"thread_id": "01uPkjxNhUsYZHQ1DrkhK", "agent": "planner", "id": "run-77b32288-ec82-4b8e-b815-d403687915bd", "role": "assistant", "content": "research\"\n },\n {\n"}
event: message_chunk
data: {"thread_id": "01uPkjxNhUsYZHQ1DrkhK", "agent": "planner", "id": "run-77b32288-ec82-4b8e-b815-d403687915bd", "role": "assistant", "content": " \"need_web_search\":"}
data: {"thread_id": "01uPkjxNhUsYZHQ1DrkhK", "agent": "planner", "id": "run-77b32288-ec82-4b8e-b815-d403687915bd", "role": "assistant", "content": " \"need_search\":"}
event: message_chunk
data: {"thread_id": "01uPkjxNhUsYZHQ1DrkhK", "agent": "planner", "id": "run-77b32288-ec82-4b8e-b815-d403687915bd", "role": "assistant", "content": " true,\n \"title"}
@@ -200,7 +200,7 @@ event: message_chunk
data: {"thread_id": "01uPkjxNhUsYZHQ1DrkhK", "agent": "planner", "id": "run-77b32288-ec82-4b8e-b815-d403687915bd", "role": "assistant", "content": "research\"\n },\n {\n"}
event: message_chunk
data: {"thread_id": "01uPkjxNhUsYZHQ1DrkhK", "agent": "planner", "id": "run-77b32288-ec82-4b8e-b815-d403687915bd", "role": "assistant", "content": " \"need_web_search\":"}
data: {"thread_id": "01uPkjxNhUsYZHQ1DrkhK", "agent": "planner", "id": "run-77b32288-ec82-4b8e-b815-d403687915bd", "role": "assistant", "content": " \"need_search\":"}
event: message_chunk
data: {"thread_id": "01uPkjxNhUsYZHQ1DrkhK", "agent": "planner", "id": "run-77b32288-ec82-4b8e-b815-d403687915bd", "role": "assistant", "content": " true,\n \"title"}
+176 -83
@@ -1,20 +1,21 @@
// Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
// SPDX-License-Identifier: MIT
import { MagicWandIcon } from "@radix-ui/react-icons";
import { AnimatePresence, motion } from "framer-motion";
import { ArrowUp, X } from "lucide-react";
import {
type KeyboardEvent,
useCallback,
useEffect,
useRef,
useState,
} from "react";
import { useCallback, useRef, useState } from "react";
import { Detective } from "~/components/deer-flow/icons/detective";
import MessageInput, {
type MessageInputRef,
} from "~/components/deer-flow/message-input";
import { ReportStyleDialog } from "~/components/deer-flow/report-style-dialog";
import { Tooltip } from "~/components/deer-flow/tooltip";
import { BorderBeam } from "~/components/magicui/border-beam";
import { Button } from "~/components/ui/button";
import type { Option } from "~/core/messages";
import { enhancePrompt } from "~/core/api";
import type { Option, Resource } from "~/core/messages";
import {
setEnableBackgroundInvestigation,
useSettingsStore,
@@ -23,7 +24,6 @@ import { cn } from "~/lib/utils";
export function InputBox({
className,
size,
responding,
feedback,
onSend,
@@ -34,78 +34,101 @@ export function InputBox({
size?: "large" | "normal";
responding?: boolean;
feedback?: { option: Option } | null;
onSend?: (message: string, options?: { interruptFeedback?: string }) => void;
onSend?: (
message: string,
options?: {
interruptFeedback?: string;
resources?: Array<Resource>;
},
) => void;
onCancel?: () => void;
onRemoveFeedback?: () => void;
}) {
const [message, setMessage] = useState("");
const [imeStatus, setImeStatus] = useState<"active" | "inactive">("inactive");
const [indent, setIndent] = useState(0);
const backgroundInvestigation = useSettingsStore(
(state) => state.general.enableBackgroundInvestigation,
);
const textareaRef = useRef<HTMLTextAreaElement>(null);
const reportStyle = useSettingsStore((state) => state.general.reportStyle);
const containerRef = useRef<HTMLDivElement>(null);
const inputRef = useRef<MessageInputRef>(null);
const feedbackRef = useRef<HTMLDivElement>(null);
useEffect(() => {
if (feedback) {
setMessage("");
// Enhancement state
const [isEnhancing, setIsEnhancing] = useState(false);
const [isEnhanceAnimating, setIsEnhanceAnimating] = useState(false);
const [currentPrompt, setCurrentPrompt] = useState("");
setTimeout(() => {
if (feedbackRef.current) {
setIndent(feedbackRef.current.offsetWidth);
}
}, 200);
}
setTimeout(() => {
textareaRef.current?.focus();
}, 0);
}, [feedback]);
const handleSendMessage = useCallback(() => {
if (responding) {
onCancel?.();
} else {
if (message.trim() === "") {
return;
}
if (onSend) {
onSend(message, {
interruptFeedback: feedback?.option.value,
});
setMessage("");
onRemoveFeedback?.();
}
}
}, [responding, onCancel, message, onSend, feedback, onRemoveFeedback]);
const handleKeyDown = useCallback(
(event: KeyboardEvent<HTMLTextAreaElement>) => {
const handleSendMessage = useCallback(
(message: string, resources: Array<Resource>) => {
if (responding) {
return;
}
if (
event.key === "Enter" &&
!event.shiftKey &&
!event.metaKey &&
!event.ctrlKey &&
imeStatus === "inactive"
) {
event.preventDefault();
handleSendMessage();
onCancel?.();
} else {
if (message.trim() === "") {
return;
}
if (onSend) {
onSend(message, {
interruptFeedback: feedback?.option.value,
resources,
});
onRemoveFeedback?.();
// Clear enhancement animation after sending
setIsEnhanceAnimating(false);
}
}
},
[responding, imeStatus, handleSendMessage],
[responding, onCancel, onSend, feedback, onRemoveFeedback],
);
const handleEnhancePrompt = useCallback(async () => {
if (currentPrompt.trim() === "" || isEnhancing) {
return;
}
setIsEnhancing(true);
setIsEnhanceAnimating(true);
try {
const enhancedPrompt = await enhancePrompt({
prompt: currentPrompt,
report_style: reportStyle.toUpperCase(),
});
// Add a small delay for better UX
await new Promise((resolve) => setTimeout(resolve, 500));
// Update the input with the enhanced prompt with animation
if (inputRef.current) {
inputRef.current.setContent(enhancedPrompt);
setCurrentPrompt(enhancedPrompt);
}
// Keep animation for a bit longer to show the effect
setTimeout(() => {
setIsEnhanceAnimating(false);
}, 1000);
} catch (error) {
console.error("Failed to enhance prompt:", error);
setIsEnhanceAnimating(false);
// Could add toast notification here
} finally {
setIsEnhancing(false);
}
}, [currentPrompt, isEnhancing, reportStyle]);
return (
<div className={cn("bg-card relative rounded-[24px] border", className)}>
<div
className={cn(
"bg-card relative flex h-full w-full flex-col rounded-[24px] border",
className,
)}
ref={containerRef}
>
<div className="w-full">
<AnimatePresence>
{feedback && (
<motion.div
ref={feedbackRef}
className="bg-background border-brand absolute top-0 left-0 mt-3 ml-2 flex items-center justify-center gap-1 rounded-2xl border px-2 py-0.5"
className="bg-background border-brand absolute top-0 left-0 mt-2 ml-4 flex items-center justify-center gap-1 rounded-2xl border px-2 py-0.5"
initial={{ opacity: 0, scale: 0 }}
animate={{ opacity: 1, scale: 1 }}
exit={{ opacity: 0, scale: 0 }}
@@ -121,30 +144,65 @@ export function InputBox({
/>
</motion.div>
)}
</AnimatePresence>
<textarea
ref={textareaRef}
className={cn(
"m-0 w-full resize-none border-none px-4 py-3 text-lg",
size === "large" ? "min-h-32" : "min-h-4",
{isEnhanceAnimating && (
<motion.div
className="pointer-events-none absolute inset-0 z-20"
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
exit={{ opacity: 0 }}
transition={{ duration: 0.3 }}
>
<div className="relative h-full w-full">
{/* Sparkle effect overlay */}
<motion.div
className="absolute inset-0 rounded-[24px] bg-gradient-to-r from-blue-500/10 via-purple-500/10 to-blue-500/10"
animate={{
background: [
"linear-gradient(45deg, rgba(59, 130, 246, 0.1), rgba(147, 51, 234, 0.1), rgba(59, 130, 246, 0.1))",
"linear-gradient(225deg, rgba(147, 51, 234, 0.1), rgba(59, 130, 246, 0.1), rgba(147, 51, 234, 0.1))",
"linear-gradient(45deg, rgba(59, 130, 246, 0.1), rgba(147, 51, 234, 0.1), rgba(59, 130, 246, 0.1))",
],
}}
transition={{ duration: 2, repeat: Infinity }}
/>
{/* Floating sparkles */}
{[...Array(6)].map((_, i) => (
<motion.div
key={i}
className="absolute h-2 w-2 rounded-full bg-blue-400"
style={{
left: `${20 + i * 12}%`,
top: `${30 + (i % 2) * 40}%`,
}}
animate={{
y: [-10, -20, -10],
opacity: [0, 1, 0],
scale: [0.5, 1, 0.5],
}}
transition={{
duration: 1.5,
repeat: Infinity,
delay: i * 0.2,
}}
/>
))}
</div>
</motion.div>
)}
style={{ textIndent: feedback ? `${indent}px` : 0 }}
placeholder={
feedback
? `Describe how you ${feedback.option.text.toLocaleLowerCase()}?`
: "What can I do for you?"
}
value={message}
onCompositionStart={() => setImeStatus("active")}
onCompositionEnd={() => setImeStatus("inactive")}
onKeyDown={handleKeyDown}
onChange={(event) => {
setMessage(event.target.value);
}}
</AnimatePresence>
<MessageInput
className={cn(
"h-24 px-4 pt-5",
feedback && "pt-9",
isEnhanceAnimating && "transition-all duration-500",
)}
ref={inputRef}
onEnter={handleSendMessage}
onChange={setCurrentPrompt}
/>
</div>
<div className="flex items-center px-4 py-2">
<div className="flex grow">
<div className="flex grow gap-2">
<Tooltip
className="max-w-60"
title={
@@ -166,7 +224,6 @@ export function InputBox({
backgroundInvestigation && "!border-brand !text-brand",
)}
variant="outline"
size="lg"
onClick={() =>
setEnableBackgroundInvestigation(!backgroundInvestigation)
}
@@ -174,14 +231,35 @@ export function InputBox({
<Detective /> Investigation
</Button>
</Tooltip>
<ReportStyleDialog />
</div>
<div className="flex shrink-0 items-center gap-2">
<Tooltip title="Enhance prompt with AI">
<Button
variant="ghost"
size="icon"
className={cn(
"hover:bg-accent h-10 w-10",
isEnhancing && "animate-pulse",
)}
onClick={handleEnhancePrompt}
disabled={isEnhancing || currentPrompt.trim() === ""}
>
{isEnhancing ? (
<div className="flex h-10 w-10 items-center justify-center">
<div className="bg-foreground h-3 w-3 animate-bounce rounded-full opacity-70" />
</div>
) : (
<MagicWandIcon className="text-brand" />
)}
</Button>
</Tooltip>
<Tooltip title={responding ? "Stop" : "Send"}>
<Button
variant="outline"
size="icon"
className={cn("h-10 w-10 rounded-full")}
onClick={handleSendMessage}
onClick={() => inputRef.current?.submit()}
>
{responding ? (
<div className="flex h-10 w-10 items-center justify-center">
@@ -194,6 +272,21 @@ export function InputBox({
</Tooltip>
</div>
</div>
{isEnhancing && (
<>
<BorderBeam
duration={5}
size={250}
className="from-transparent via-red-500 to-transparent"
/>
<BorderBeam
duration={5}
delay={3}
size={250}
className="from-transparent via-blue-500 to-transparent"
/>
</>
)}
</div>
);
}
@@ -173,8 +173,15 @@ function MessageListItem({
)}
>
<MessageBubble message={message}>
<div className="flex w-full flex-col">
<Markdown>{message?.content}</Markdown>
<div className="flex w-full flex-col text-wrap break-words">
<Markdown
className={cn(
message.role === "user" &&
"prose-invert not-dark:text-secondary dark:text-inherit",
)}
>
{message?.content}
</Markdown>
</div>
</MessageBubble>
</div>
@@ -214,9 +221,8 @@ function MessageBubble({
return (
<div
className={cn(
`flex w-fit max-w-[85%] flex-col rounded-2xl px-4 py-3 shadow`,
message.role === "user" &&
"text-primary-foreground bg-brand rounded-ee-none",
`group flex w-fit max-w-[85%] flex-col rounded-2xl px-4 py-3 text-nowrap shadow`,
message.role === "user" && "bg-brand rounded-ee-none",
message.role === "assistant" && "bg-card rounded-es-none",
className,
)}
+29 -4
@@ -15,7 +15,7 @@ import {
} from "~/components/ui/card";
import { fastForwardReplay } from "~/core/api";
import { useReplayMetadata } from "~/core/api/hooks";
import type { Option } from "~/core/messages";
import type { Option, Resource } from "~/core/messages";
import { useReplay } from "~/core/replay";
import { sendMessage, useMessageIds, useStore } from "~/core/store";
import { env } from "~/env";
@@ -36,7 +36,13 @@ export function MessagesBlock({ className }: { className?: string }) {
const abortControllerRef = useRef<AbortController | null>(null);
const [feedback, setFeedback] = useState<{ option: Option } | null>(null);
const handleSend = useCallback(
async (message: string, options?: { interruptFeedback?: string }) => {
async (
message: string,
options?: {
interruptFeedback?: string;
resources?: Array<Resource>;
},
) => {
const abortController = new AbortController();
abortControllerRef.current = abortController;
try {
@@ -45,6 +51,7 @@ export function MessagesBlock({ className }: { className?: string }) {
{
interruptFeedback:
options?.interruptFeedback ?? feedback?.option.value,
resources: options?.resources,
},
{
abortSignal: abortController.signal,
@@ -123,8 +130,26 @@ export function MessagesBlock({ className }: { className?: string }) {
)}
>
<div className="flex items-center justify-between">
<div className="flex-grow">
<CardHeader>
<div className="flex flex-grow items-center">
{responding && (
<motion.div
className="ml-3"
initial={{ opacity: 0, scale: 0.8 }}
animate={{ opacity: 1, scale: 1 }}
exit={{ opacity: 0, scale: 0.8 }}
transition={{ duration: 0.3 }}
>
<video
// Walking deer animation, designed by @liangzhaojun. Thank you for creating it!
src="/images/walking_deer.webm"
autoPlay
loop
muted
className="h-[42px] w-[42px] object-contain"
/>
</motion.div>
)}
<CardHeader className={cn("flex-grow", responding && "pl-3")}>
<CardTitle>
<RainbowText animated={responding}>
{responding ? "Replaying" : `${replayTitle}`}
@@ -4,7 +4,7 @@
import { PythonOutlined } from "@ant-design/icons";
import { motion } from "framer-motion";
import { LRUCache } from "lru-cache";
import { BookOpenText, PencilRuler, Search } from "lucide-react";
import { BookOpenText, FileText, PencilRuler, Search } from "lucide-react";
import { useTheme } from "next-themes";
import { useMemo } from "react";
import SyntaxHighlighter from "react-syntax-highlighter";
@@ -75,7 +75,9 @@ function ActivityMessage({ messageId }: { messageId: string }) {
if (message.agent !== "reporter" && message.agent !== "planner") {
return (
<div className="px-4 py-2">
<Markdown animated>{message.content}</Markdown>
<Markdown animated checkLinkCredibility>
{message.content}
</Markdown>
</div>
);
}
@@ -94,6 +96,8 @@ function ActivityListItem({ messageId }: { messageId: string }) {
return <CrawlToolCall key={toolCall.id} toolCall={toolCall} />;
} else if (toolCall.name === "python_repl_tool") {
return <PythonToolCall key={toolCall.id} toolCall={toolCall} />;
} else if (toolCall.name === "local_search_tool") {
return <RetrieverToolCall key={toolCall.id} toolCall={toolCall} />;
} else {
return <MCPToolCall key={toolCall.id} toolCall={toolCall} />;
}
@@ -116,6 +120,7 @@ type SearchResult =
image_url: string;
image_description: string;
};
function WebSearchToolCall({ toolCall }: { toolCall: ToolCallRuntime }) {
const searching = useMemo(() => {
return toolCall.result === undefined;
@@ -273,9 +278,67 @@ function CrawlToolCall({ toolCall }: { toolCall: ToolCallRuntime }) {
);
}
function RetrieverToolCall({ toolCall }: { toolCall: ToolCallRuntime }) {
const searching = useMemo(() => {
return toolCall.result === undefined;
}, [toolCall.result]);
const documents = useMemo<
Array<{ id: string; title: string; content: string }>
>(() => {
return toolCall.result ? parseJSON(toolCall.result, []) : [];
}, [toolCall.result]);
return (
<section className="mt-4 pl-4">
<div className="font-medium italic">
<RainbowText className="flex items-center" animated={searching}>
<Search size={16} className={"mr-2"} />
<span>Retrieving documents from RAG&nbsp;</span>
<span className="max-w-[500px] overflow-hidden text-ellipsis whitespace-nowrap">
{(toolCall.args as { keywords: string }).keywords}
</span>
</RainbowText>
</div>
<div className="pr-4">
{documents && (
<ul className="mt-2 flex flex-wrap gap-4">
{searching &&
[...Array(2)].map((_, i) => (
<li
key={`search-result-${i}`}
className="flex h-40 w-40 gap-2 rounded-md text-sm"
>
<Skeleton
className="to-accent h-full w-full rounded-md bg-gradient-to-tl from-slate-400"
style={{ animationDelay: `${i * 0.2}s` }}
/>
</li>
))}
{documents.map((doc, i) => (
<motion.li
key={`search-result-${i}`}
className="text-muted-foreground bg-accent flex max-w-40 gap-2 rounded-md px-2 py-1 text-sm"
initial={{ opacity: 0, y: 10, scale: 0.66 }}
animate={{ opacity: 1, y: 0, scale: 1 }}
transition={{
duration: 0.2,
delay: i * 0.1,
ease: "easeOut",
}}
>
<FileText size={32} />
{doc.title}
</motion.li>
))}
</ul>
)}
</div>
</section>
);
}
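RetrieverToolCall above parses the local_search_tool result as a JSON array of documents. A minimal sketch of a result payload that would render as two document chips (the ids, titles, and contents are hypothetical; only the field names mirror the parse in the component, and actual values depend on the configured RAG provider):
const exampleRetrieverResult: Array<{ id: string; title: string; content: string }> = [
  { id: "doc-1", title: "Quarterly report.pdf", content: "Revenue grew 12% quarter over quarter..." },
  { id: "doc-2", title: "Product design spec.md", content: "The retriever returns the top-k matching chunks..." },
];
// JSON.stringify(exampleRetrieverResult) is the kind of string expected in toolCall.result.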
function PythonToolCall({ toolCall }: { toolCall: ToolCallRuntime }) {
const code = useMemo<string>(() => {
return (toolCall.args as { code: string }).code;
const code = useMemo<string | undefined>(() => {
return (toolCall.args as { code?: string }).code;
}, [toolCall.args]);
const { resolvedTheme } = useTheme();
return (
@@ -300,14 +363,62 @@ function PythonToolCall({ toolCall }: { toolCall: ToolCallRuntime }) {
boxShadow: "none",
}}
>
{code.trim()}
{code?.trim() ?? ""}
</SyntaxHighlighter>
</div>
</div>
{toolCall.result && <PythonToolCallResult result={toolCall.result} />}
</section>
);
}
function PythonToolCallResult({ result }: { result: string }) {
const { resolvedTheme } = useTheme();
const hasError = useMemo(
() => result.includes("Error executing code:\n"),
[result],
);
const error = useMemo(() => {
if (hasError) {
const parts = result.split("```\nError: ");
if (parts.length > 1) {
return parts[1]!.trim();
}
}
return null;
}, [result, hasError]);
const stdout = useMemo(() => {
if (!hasError) {
const parts = result.split("```\nStdout: ");
if (parts.length > 1) {
return parts[1]!.trim();
}
}
return null;
}, [result, hasError]);
return (
<>
<div className="mt-4 font-medium italic">
{hasError ? "Error when executing the above code" : "Execution output"}
</div>
<div className="bg-accent mt-2 max-h-[400px] max-w-[calc(100%-120px)] overflow-y-auto rounded-md p-2 text-sm">
<SyntaxHighlighter
language="plaintext"
style={resolvedTheme === "dark" ? dark : docco}
customStyle={{
color: hasError ? "red" : "inherit",
background: "transparent",
border: "none",
boxShadow: "none",
}}
>
{error ?? stdout ?? "(empty)"}
</SyntaxHighlighter>
</div>
</>
);
}
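PythonToolCallResult above splits the tool result on literal markers. A minimal sketch of result strings that would satisfy that parsing, assuming the backend wraps the executed code in a fenced block followed by either a Stdout: or an Error: section (the exact backend wording is an assumption here):
// Success case: hasError is false and the stdout extraction yields "2".
const okResult =
  "Successfully executed:\n```python\nprint(1 + 1)\n```\nStdout: 2";
// Failure case: hasError is true and the error extraction yields the traceback text.
const errResult =
  "Error executing code:\n```python\n1 / 0\n```\nError: ZeroDivisionError: division by zero";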
function MCPToolCall({ toolCall }: { toolCall: ToolCallRuntime }) {
const tool = useMemo(() => findMCPTool(toolCall.name), [toolCall.name]);
const { resolvedTheme } = useTheme();
+55 -1
@@ -1,7 +1,7 @@
// Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
// SPDX-License-Identifier: MIT
import { Check, Copy, Headphones, X } from "lucide-react";
import { Check, Copy, Headphones, Pencil, Undo2, X, Download } from "lucide-react";
import { useCallback, useEffect, useState } from "react";
import { ScrollContainer } from "~/components/deer-flow/scroll-container";
@@ -47,6 +47,7 @@ export function ResearchBlock({
await listenToPodcast(researchId);
}, [researchId]);
const [editing, setEditing] = useState(false);
const [copied, setCopied] = useState(false);
const handleCopy = useCallback(() => {
if (!reportId) {
@@ -63,6 +64,37 @@ export function ResearchBlock({
}, 1000);
}, [reportId]);
// Download report as markdown
const handleDownload = useCallback(() => {
if (!reportId) {
return;
}
const report = useStore.getState().messages.get(reportId);
if (!report) {
return;
}
const now = new Date();
const pad = (n: number) => n.toString().padStart(2, '0');
const timestamp = `${now.getFullYear()}-${pad(now.getMonth() + 1)}-${pad(now.getDate())}_${pad(now.getHours())}-${pad(now.getMinutes())}-${pad(now.getSeconds())}`;
const filename = `research-report-${timestamp}.md`;
const blob = new Blob([report.content], { type: 'text/markdown' });
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = filename;
document.body.appendChild(a);
a.click();
setTimeout(() => {
document.body.removeChild(a);
URL.revokeObjectURL(url);
}, 0);
}, [reportId]);
const handleEdit = useCallback(() => {
setEditing((editing) => !editing);
}, []);
// When the research id changes, set the active tab to activities
useEffect(() => {
if (!hasReport) {
@@ -87,6 +119,17 @@ export function ResearchBlock({
<Headphones />
</Button>
</Tooltip>
<Tooltip title="Edit">
<Button
className="text-gray-400"
size="icon"
variant="ghost"
disabled={isReplay}
onClick={handleEdit}
>
{editing ? <Undo2 /> : <Pencil />}
</Button>
</Tooltip>
<Tooltip title="Copy">
<Button
className="text-gray-400"
@@ -97,6 +140,16 @@ export function ResearchBlock({
{copied ? <Check /> : <Copy />}
</Button>
</Tooltip>
<Tooltip title="Download report as markdown">
<Button
className="text-gray-400"
size="icon"
variant="ghost"
onClick={handleDownload}
>
<Download />
</Button>
</Tooltip>
</>
)}
<Tooltip title="Close">
@@ -147,6 +200,7 @@ export function ResearchBlock({
className="mt-4"
researchId={researchId}
messageId={reportId}
editing={editing}
/>
)}
</ScrollContainer>
@@ -13,10 +13,12 @@ import { cn } from "~/lib/utils";
export function ResearchReportBlock({
className,
messageId,
editing,
}: {
className?: string;
researchId: string;
messageId: string;
editing: boolean;
}) {
const message = useMessage(messageId);
const { isReplay } = useReplay();
@@ -51,18 +53,17 @@ export function ResearchReportBlock({
// }, [isCompleted]);
return (
<div
ref={contentRef}
className={cn("relative flex flex-col pt-4 pb-8", className)}
>
{!isReplay && isCompleted ? (
<div ref={contentRef} className={cn("w-full pt-4 pb-8", className)}>
{!isReplay && isCompleted && editing ? (
<ReportEditor
content={message?.content}
onMarkdownChange={handleMarkdownChange}
/>
) : (
<>
<Markdown animated>{message?.content}</Markdown>
<Markdown animated checkLinkCredibility>
{message?.content}
</Markdown>
{message?.isStreaming && <LoadingAnimation className="my-12" />}
</>
)}
+3 -3
@@ -20,7 +20,7 @@ export default function Main() {
return (
<div
className={cn(
"flex h-full w-full justify-center px-4 pt-12 pb-4",
"flex h-full w-full justify-center-safe px-4 pt-12 pb-4",
doubleColumnMode && "gap-8",
)}
>
@@ -28,13 +28,13 @@ export default function Main() {
className={cn(
"shrink-0 transition-all duration-300 ease-out",
!doubleColumnMode &&
`w-[768px] translate-x-[min(calc((100vw-538px)*0.75/2),960px/2)]`,
`w-[768px] translate-x-[min(max(calc((100vw-538px)*0.75),575px)/2,960px/2)]`,
doubleColumnMode && `w-[538px]`,
)}
/>
<ResearchBlock
className={cn(
"w-[min(calc((100vw-538px)*0.75),960px)] pb-4 transition-all duration-300 ease-out",
"w-[min(max(calc((100vw-538px)*0.75),575px),960px)] pb-4 transition-all duration-300 ease-out",
!doubleColumnMode && "scale-0",
doubleColumnMode && "",
)}
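Plugging a sample viewport into the new clamp makes the layout change concrete: at 100vw = 1200px, calc((100vw - 538px) * 0.75) is 496.5px, so max(..., 575px) floors the research column at 575px and the single-column view shifts by 575px / 2 = 287.5px; at 1440px the same expression is 676.5px, still under the 960px cap, so the column width (676.5px) and the translate (338.25px) track the viewport again. In short, the added max(..., 575px) keeps the research panel from collapsing below roughly 575px on narrower windows.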
@@ -147,7 +147,7 @@ export function MultiAgentVisualization({ className }: { className?: string }) {
</Tooltip>
<div className="text-muted-foreground ml-2 flex items-center justify-center">
<Slider
className="w-120"
className="w-40 sm:w-80 md:w-100 lg:w-120"
max={playbook.steps.length - 1}
min={0}
step={1}
+30 -1
@@ -25,13 +25,18 @@ import type { Tab } from "./types";
const generalFormSchema = z.object({
autoAcceptedPlan: z.boolean(),
enableBackgroundInvestigation: z.boolean(),
maxPlanIterations: z.number().min(1, {
message: "Max plan iterations must be at least 1.",
}),
maxStepNum: z.number().min(1, {
message: "Max step number must be at least 1.",
}),
maxSearchResults: z.number().min(1, {
message: "Max search results must be at least 1.",
}),
// Others
enableBackgroundInvestigation: z.boolean(),
reportStyle: z.enum(["academic", "popular_science", "news", "social_media"]),
});
export const GeneralTab: Tab = ({
@@ -143,6 +148,30 @@ export const GeneralTab: Tab = ({
</FormItem>
)}
/>
<FormField
control={form.control}
name="maxSearchResults"
render={({ field }) => (
<FormItem>
<FormLabel>Max search results</FormLabel>
<FormControl>
<Input
className="w-60"
type="number"
defaultValue={field.value}
min={1}
onChange={(event) =>
field.onChange(parseInt(event.target.value || "0"))
}
/>
</FormControl>
<FormDescription>
By default, each search step has 3 results.
</FormDescription>
<FormMessage />
</FormItem>
)}
/>
</form>
</Form>
</main>
@@ -0,0 +1,42 @@
// Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
// SPDX-License-Identifier: MIT
import type { SVGProps } from "react";
export function Enhance(props: SVGProps<SVGSVGElement>) {
return (
<svg
width="16"
height="16"
viewBox="0 0 24 24"
fill="none"
xmlns="http://www.w3.org/2000/svg"
{...props}
>
<path
d="M12 2L13.09 8.26L20 9L13.09 9.74L12 16L10.91 9.74L4 9L10.91 8.26L12 2Z"
stroke="currentColor"
strokeWidth="2"
strokeLinecap="round"
strokeLinejoin="round"
fill="none"
/>
<path
d="M19 14L19.5 16.5L22 17L19.5 17.5L19 20L18.5 17.5L16 17L18.5 16.5L19 14Z"
stroke="currentColor"
strokeWidth="1.5"
strokeLinecap="round"
strokeLinejoin="round"
fill="none"
/>
<path
d="M5 6L5.5 7.5L7 8L5.5 8.5L5 10L4.5 8.5L3 8L4.5 7.5L5 6Z"
stroke="currentColor"
strokeWidth="1.5"
strokeLinecap="round"
strokeLinejoin="round"
fill="none"
/>
</svg>
);
}
@@ -0,0 +1,45 @@
export function ReportStyle({ className }: { className?: string }) {
return (
<svg
className={className}
version="1.1"
width="800px"
height="800px"
viewBox="0 0 24 24"
fill="none"
>
<g fill="currentcolor">
<path
d="M4 4C4 3.44772 4.44772 3 5 3H19C19.5523 3 20 3.44772 20 4V20C20 20.5523 19.5523 21 19 21H5C4.44772 21 4 20.5523 4 20V4Z"
stroke="currentColor"
strokeWidth="2"
fill="none"
/>
<path
d="M8 7H16"
stroke="currentColor"
strokeWidth="2"
strokeLinecap="round"
/>
<path
d="M8 11H16"
stroke="currentColor"
strokeWidth="2"
strokeLinecap="round"
/>
<path
d="M8 15H12"
stroke="currentColor"
strokeWidth="2"
strokeLinecap="round"
/>
<circle
cx="16"
cy="15"
r="2"
fill="currentColor"
/>
</g>
</svg>
);
}
+54
@@ -0,0 +1,54 @@
import { useMemo } from "react";
import { useStore, useToolCalls } from "~/core/store";
import { Tooltip } from "./tooltip";
import { WarningFilled } from "@ant-design/icons";
export const Link = ({
href,
children,
checkLinkCredibility = false,
}: {
href: string | undefined;
children: React.ReactNode;
checkLinkCredibility: boolean;
}) => {
const toolCalls = useToolCalls();
const responding = useStore((state) => state.responding);
const credibleLinks = useMemo(() => {
const links = new Set<string>();
if (!checkLinkCredibility) return links;
(toolCalls || []).forEach((call) => {
if (call && call.name === "web_search" && call.result) {
const result = JSON.parse(call.result) as Array<{ url: string }>;
result.forEach((r) => {
links.add(r.url);
});
}
});
return links;
}, [toolCalls]);
const isCredible = useMemo(() => {
return checkLinkCredibility && href && !responding
? credibleLinks.has(href)
: true;
}, [credibleLinks, href, responding, checkLinkCredibility]);
return (
<span className="inline-flex items-center gap-1.5">
<a href={href} target="_blank" rel="noopener noreferrer">
{children}
</a>
{!isCredible && (
<Tooltip
title="This link might be a hallucination from AI model and may not be reliable."
delayDuration={300}
>
<WarningFilled className="text-sx transition-colors hover:!text-yellow-500" />
</Tooltip>
)}
</span>
);
};
+20 -19
@@ -18,6 +18,7 @@ import { cn } from "~/lib/utils";
import Image from "./image";
import { Tooltip } from "./tooltip";
import { Link } from "./link";
export function Markdown({
className,
@@ -25,13 +26,30 @@ export function Markdown({
style,
enableCopy,
animated = false,
checkLinkCredibility = false,
...props
}: ReactMarkdownOptions & {
className?: string;
enableCopy?: boolean;
style?: React.CSSProperties;
animated?: boolean;
checkLinkCredibility?: boolean;
}) {
const components: ReactMarkdownOptions["components"] = useMemo(() => {
return {
a: ({ href, children }) => (
<Link href={href} checkLinkCredibility={checkLinkCredibility}>
{children}
</Link>
),
img: ({ src, alt }) => (
<a href={src as string} target="_blank" rel="noopener noreferrer">
<Image className="rounded" src={src as string} alt={alt ?? ""} />
</a>
),
};
}, [checkLinkCredibility]);
const rehypePlugins = useMemo(() => {
if (animated) {
return [rehypeKatex, rehypeSplitWordsIntoSpans];
@@ -39,28 +57,11 @@ export function Markdown({
return [rehypeKatex];
}, [animated]);
return (
<div
className={cn(
className,
"prose dark:prose-invert prose-p:my-0 prose-img:mt-0 flex flex-col gap-4",
)}
style={style}
>
<div className={cn(className, "prose dark:prose-invert")} style={style}>
<ReactMarkdown
remarkPlugins={[remarkGfm, remarkMath]}
rehypePlugins={rehypePlugins}
components={{
a: ({ href, children }) => (
<a href={href} target="_blank" rel="noopener noreferrer">
{children}
</a>
),
img: ({ src, alt }) => (
<a href={src as string} target="_blank" rel="noopener noreferrer">
<Image className="rounded" src={src as string} alt={alt ?? ""} />
</a>
),
}}
components={components}
{...props}
>
{autoFixMarkdown(
@@ -0,0 +1,219 @@
// Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
// SPDX-License-Identifier: MIT
"use client";
import Mention from "@tiptap/extension-mention";
import { Editor, Extension, type Content } from "@tiptap/react";
import {
EditorContent,
type EditorInstance,
EditorRoot,
type JSONContent,
StarterKit,
Placeholder,
} from "novel";
import { Markdown } from "tiptap-markdown";
import { useDebouncedCallback } from "use-debounce";
import "~/styles/prosemirror.css";
import { resourceSuggestion } from "./resource-suggestion";
import React, { forwardRef, useEffect, useMemo, useRef } from "react";
import type { Resource } from "~/core/messages";
import { useRAGProvider } from "~/core/api/hooks";
import { LoadingOutlined } from "@ant-design/icons";
export interface MessageInputRef {
focus: () => void;
submit: () => void;
setContent: (content: string) => void;
}
export interface MessageInputProps {
className?: string;
placeholder?: string;
onChange?: (markdown: string) => void;
onEnter?: (message: string, resources: Array<Resource>) => void;
}
function formatMessage(content: JSONContent) {
if (content.content) {
const output: {
text: string;
resources: Array<Resource>;
} = {
text: "",
resources: [],
};
for (const node of content.content) {
const { text, resources } = formatMessage(node);
output.text += text;
output.resources.push(...resources);
}
return output;
} else {
return formatItem(content);
}
}
function formatItem(item: JSONContent): {
text: string;
resources: Array<Resource>;
} {
if (item.type === "text") {
return { text: item.text ?? "", resources: [] };
}
if (item.type === "mention") {
return {
text: `[${item.attrs?.label}](${item.attrs?.id})`,
resources: [
{ uri: item.attrs?.id ?? "", title: item.attrs?.label ?? "" },
],
};
}
return { text: "", resources: [] };
}
const MessageInput = forwardRef<MessageInputRef, MessageInputProps>(
({ className, onChange, onEnter }: MessageInputProps, ref) => {
const editorRef = useRef<Editor>(null);
const handleEnterRef = useRef<
((message: string, resources: Array<Resource>) => void) | undefined
>(onEnter);
const debouncedUpdates = useDebouncedCallback(
async (editor: EditorInstance) => {
if (onChange) {
// Get the plain text content for prompt enhancement
const { text } = formatMessage(editor.getJSON() ?? []);
onChange(text);
}
},
200,
);
React.useImperativeHandle(ref, () => ({
focus: () => {
editorRef.current?.view.focus();
},
submit: () => {
if (onEnter) {
const { text, resources } = formatMessage(
editorRef.current?.getJSON() ?? [],
);
onEnter(text, resources);
}
editorRef.current?.commands.clearContent();
},
setContent: (content: string) => {
if (editorRef.current) {
editorRef.current.commands.setContent(content);
}
},
}));
useEffect(() => {
handleEnterRef.current = onEnter;
}, [onEnter]);
const { provider, loading } = useRAGProvider();
const extensions = useMemo(() => {
const extensions = [
StarterKit,
Markdown.configure({
html: true,
tightLists: true,
tightListClass: "tight",
bulletListMarker: "-",
linkify: false,
breaks: false,
transformPastedText: false,
transformCopiedText: false,
}),
Placeholder.configure({
showOnlyCurrent: false,
placeholder: provider
? "What can I do for you? \nYou may refer to RAG resources by using @."
: "What can I do for you?",
emptyEditorClass: "placeholder",
}),
Extension.create({
name: "keyboardHandler",
addKeyboardShortcuts() {
return {
Enter: () => {
if (handleEnterRef.current) {
const { text, resources } = formatMessage(
this.editor.getJSON() ?? [],
);
handleEnterRef.current(text, resources);
}
return this.editor.commands.clearContent();
},
};
},
}),
];
if (provider) {
extensions.push(
Mention.configure({
HTMLAttributes: {
class: "mention",
},
suggestion: resourceSuggestion,
}) as Extension,
);
}
return extensions;
}, [provider]);
if (loading) {
return (
<div className={className}>
<LoadingOutlined />
</div>
);
}
return (
<div className={className}>
<EditorRoot>
<EditorContent
immediatelyRender={false}
extensions={extensions}
className="border-muted h-full w-full overflow-auto"
editorProps={{
attributes: {
class:
"prose prose-base dark:prose-invert inline-editor font-default focus:outline-none max-w-full",
},
transformPastedHTML: transformPastedHTML,
}}
onCreate={({ editor }) => {
editorRef.current = editor;
}}
onUpdate={({ editor }) => {
debouncedUpdates(editor);
}}
></EditorContent>
</EditorRoot>
</div>
);
},
);
function transformPastedHTML(html: string) {
try {
// Strip HTML from user-pasted content
const tempEl = document.createElement("div");
tempEl.innerHTML = html;
return tempEl.textContent || tempEl.innerText || "";
} catch (error) {
console.error("Error transforming pasted HTML", error);
return "";
}
}
export default MessageInput;
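formatMessage above flattens the editor's JSON tree into plain text while collecting @-mentions as resources. A minimal sketch of what it yields for a message containing one mention (the uri and title values are hypothetical):
const exampleDoc: JSONContent = {
  type: "doc",
  content: [
    {
      type: "paragraph",
      content: [
        { type: "text", text: "Summarize " },
        { type: "mention", attrs: { id: "rag://dataset/1", label: "Q3 report" } },
      ],
    },
  ],
};
// formatMessage(exampleDoc) returns:
// { text: "Summarize [Q3 report](rag://dataset/1)",
//   resources: [{ uri: "rag://dataset/1", title: "Q3 report" }] }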
@@ -0,0 +1,128 @@
// Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
// SPDX-License-Identifier: MIT
import { useState } from "react";
import { Check, FileText, Newspaper, Users, GraduationCap } from "lucide-react";
import { Button } from "~/components/ui/button";
import {
Dialog,
DialogContent,
DialogDescription,
DialogHeader,
DialogTitle,
DialogTrigger,
} from "~/components/ui/dialog";
import { setReportStyle, useSettingsStore } from "~/core/store";
import { cn } from "~/lib/utils";
import { Tooltip } from "./tooltip";
const REPORT_STYLES = [
{
value: "academic" as const,
label: "Academic",
description: "Formal, objective, and analytical with precise terminology",
icon: GraduationCap,
},
{
value: "popular_science" as const,
label: "Popular Science",
description: "Engaging and accessible for general audience",
icon: FileText,
},
{
value: "news" as const,
label: "News",
description: "Factual, concise, and impartial journalistic style",
icon: Newspaper,
},
{
value: "social_media" as const,
label: "Social Media",
description: "Concise, attention-grabbing, and shareable",
icon: Users,
},
];
export function ReportStyleDialog() {
const [open, setOpen] = useState(false);
const currentStyle = useSettingsStore((state) => state.general.reportStyle);
const handleStyleChange = (
style: "academic" | "popular_science" | "news" | "social_media",
) => {
setReportStyle(style);
setOpen(false);
};
const currentStyleConfig =
REPORT_STYLES.find((style) => style.value === currentStyle) ||
REPORT_STYLES[0]!;
const CurrentIcon = currentStyleConfig.icon;
return (
<Dialog open={open} onOpenChange={setOpen}>
<Tooltip
className="max-w-60"
title={
<div>
<h3 className="mb-2 font-bold">
Writing Style: {currentStyleConfig.label}
</h3>
<p>
Choose the writing style for your research reports. Different
styles are optimized for different audiences and purposes.
</p>
</div>
}
>
<DialogTrigger asChild>
<Button
className="!border-brand !text-brand rounded-2xl"
variant="outline"
>
<CurrentIcon className="h-4 w-4" /> {currentStyleConfig.label}
</Button>
</DialogTrigger>
</Tooltip>
<DialogContent className="sm:max-w-[500px]">
<DialogHeader>
<DialogTitle>Choose Writing Style</DialogTitle>
<DialogDescription>
Select the writing style for your research reports. Each style is
optimized for different audiences and purposes.
</DialogDescription>
</DialogHeader>
<div className="grid gap-3 py-4">
{REPORT_STYLES.map((style) => {
const Icon = style.icon;
const isSelected = currentStyle === style.value;
return (
<button
key={style.value}
className={cn(
"hover:bg-accent flex items-start gap-3 rounded-lg border p-4 text-left transition-colors",
isSelected && "border-primary bg-accent",
)}
onClick={() => handleStyleChange(style.value)}
>
<Icon className="mt-0.5 h-5 w-5 shrink-0" />
<div className="flex-1 space-y-1">
<div className="flex items-center gap-2">
<h4 className="font-medium">{style.label}</h4>
{isSelected && <Check className="text-primary h-4 w-4" />}
</div>
<p className="text-muted-foreground text-sm">
{style.description}
</p>
</div>
</button>
);
})}
</div>
</DialogContent>
</Dialog>
);
}
@@ -0,0 +1,87 @@
// Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
// SPDX-License-Identifier: MIT
import { forwardRef, useEffect, useImperativeHandle, useState } from "react";
import type { Resource } from "~/core/messages";
import { cn } from "~/lib/utils";
export interface ResourceMentionsProps {
items: Array<Resource>;
command: (item: { id: string; label: string }) => void;
}
export const ResourceMentions = forwardRef<
{ onKeyDown: (args: { event: KeyboardEvent }) => boolean },
ResourceMentionsProps
>((props, ref) => {
const [selectedIndex, setSelectedIndex] = useState(0);
const selectItem = (index: number) => {
const item = props.items[index];
if (item) {
props.command({ id: item.uri, label: item.title });
}
};
const upHandler = () => {
setSelectedIndex(
(selectedIndex + props.items.length - 1) % props.items.length,
);
};
const downHandler = () => {
setSelectedIndex((selectedIndex + 1) % props.items.length);
};
const enterHandler = () => {
selectItem(selectedIndex);
};
useEffect(() => setSelectedIndex(0), [props.items]);
useImperativeHandle(ref, () => ({
onKeyDown: ({ event }) => {
if (event.key === "ArrowUp") {
upHandler();
return true;
}
if (event.key === "ArrowDown") {
downHandler();
return true;
}
if (event.key === "Enter") {
enterHandler();
return true;
}
return false;
},
}));
return (
<div className="bg-card border-var(--border) relative flex flex-col gap-1 overflow-auto rounded-md border p-2 shadow">
{props.items.length ? (
props.items.map((item, index) => (
<button
className={cn(
"focus-visible:ring-ring hover:bg-accent hover:text-accent-foreground inline-flex h-9 w-full items-center justify-start gap-2 rounded-md px-4 py-2 text-sm whitespace-nowrap transition-colors focus-visible:ring-1 focus-visible:outline-none disabled:pointer-events-none disabled:opacity-50 [&_svg]:pointer-events-none [&_svg]:size-4 [&_svg]:shrink-0",
selectedIndex === index &&
"bg-secondary text-secondary-foreground",
)}
key={index}
onClick={() => selectItem(index)}
>
{item.title}
</button>
))
) : (
<div className="items-center justify-center text-gray-500">
No result
</div>
)}
</div>
);
});
@@ -0,0 +1,86 @@
// Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
// SPDX-License-Identifier: MIT
import type { MentionOptions } from "@tiptap/extension-mention";
import { ReactRenderer } from "@tiptap/react";
import {
ResourceMentions,
type ResourceMentionsProps,
} from "./resource-mentions";
import type { Instance, Props } from "tippy.js";
import tippy from "tippy.js";
import { resolveServiceURL } from "~/core/api/resolve-service-url";
import type { Resource } from "~/core/messages";
export const resourceSuggestion: MentionOptions["suggestion"] = {
items: ({ query }) => {
return fetch(resolveServiceURL(`rag/resources?query=${query}`), {
method: "GET",
})
.then((res) => res.json())
.then((res) => {
return res.resources as Array<Resource>;
})
.catch((err) => {
return [];
});
},
render: () => {
let reactRenderer: ReactRenderer<
{ onKeyDown: (args: { event: KeyboardEvent }) => boolean },
ResourceMentionsProps
>;
let popup: Instance<Props>[] | null = null;
return {
onStart: (props) => {
if (!props.clientRect) {
return;
}
reactRenderer = new ReactRenderer(ResourceMentions, {
props,
editor: props.editor,
});
popup = tippy("body", {
getReferenceClientRect: props.clientRect as any,
appendTo: () => document.body,
content: reactRenderer.element,
showOnCreate: true,
interactive: true,
trigger: "manual",
placement: "top-start",
});
},
onUpdate(props) {
reactRenderer.updateProps(props);
if (!props.clientRect) {
return;
}
popup?.[0]?.setProps({
getReferenceClientRect: props.clientRect as any,
});
},
onKeyDown(props) {
if (props.event.key === "Escape") {
popup?.[0]?.hide();
return true;
}
return reactRenderer.ref?.onKeyDown(props) ?? false;
},
onExit() {
popup?.[0]?.destroy();
reactRenderer.destroy();
},
};
},
};
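The items() handler above queries the rag/resources endpoint (resolved through resolveServiceURL) and reads a resources array from the JSON response. A minimal sketch of a response body it would turn into two mention candidates (the uris and titles are hypothetical; only the { resources: Resource[] } shape is taken from the code above):
const exampleResponse: { resources: Array<Resource> } = {
  resources: [
    { uri: "rag://dataset/123", title: "2024 annual report" },
    { uri: "rag://dataset/456", title: "Quarterly sales summary" },
  ],
};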
@@ -1,7 +1,14 @@
// Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
// SPDX-License-Identifier: MIT
import { useEffect, useImperativeHandle, useRef, type ReactNode, type RefObject } from "react";
import {
useEffect,
useImperativeHandle,
useLayoutEffect,
useRef,
type ReactNode,
type RefObject,
} from "react";
import { useStickToBottom } from "use-stick-to-bottom";
import { ScrollArea } from "~/components/ui/scroll-area";
@@ -26,15 +33,16 @@ export function ScrollContainer({
scrollShadow = true,
scrollShadowColor = "var(--background)",
autoScrollToBottom = false,
ref
ref,
}: ScrollContainerProps) {
const { scrollRef, contentRef, scrollToBottom, isAtBottom } = useStickToBottom({ initial: "instant" });
const { scrollRef, contentRef, scrollToBottom, isAtBottom } =
useStickToBottom({ initial: "instant" });
useImperativeHandle(ref, () => ({
scrollToBottom() {
if (isAtBottom) {
scrollToBottom();
}
}
},
}));
const tempScrollRef = useRef<HTMLElement>(null);
+3 -1
@@ -19,6 +19,7 @@ export function Tooltip({
open,
side,
sideOffset,
delayDuration = 750,
}: {
className?: string;
style?: CSSProperties;
@@ -27,10 +28,11 @@ export function Tooltip({
open?: boolean;
side?: "left" | "right" | "top" | "bottom";
sideOffset?: number;
delayDuration?: number;
}) {
return (
<TooltipProvider>
<ShadcnTooltip delayDuration={750} open={open}>
<ShadcnTooltip delayDuration={delayDuration} open={open}>
<TooltipTrigger asChild>{children}</TooltipTrigger>
<TooltipContent
className={cn(className)}
+3 -10
@@ -78,16 +78,12 @@ const taskItem = TaskItem.configure({
});
const horizontalRule = HorizontalRule.configure({
HTMLAttributes: {
class: cx("mt-4 mb-6 border-t border-muted-foreground"),
},
HTMLAttributes: {},
});
const starterKit = StarterKit.configure({
bulletList: {
HTMLAttributes: {
class: cx("list-disc list-outside leading-3 -mt-2"),
},
HTMLAttributes: {},
},
orderedList: {
HTMLAttributes: {
@@ -95,9 +91,7 @@ const starterKit = StarterKit.configure({
},
},
listItem: {
HTMLAttributes: {
class: cx("leading-normal -mb-2"),
},
HTMLAttributes: {},
},
blockquote: {
HTMLAttributes: {
@@ -107,7 +101,6 @@ const starterKit = StarterKit.configure({
codeBlock: false,
code: {
HTMLAttributes: {
class: cx("rounded-md bg-muted px-1.5 py-1 font-mono font-medium"),
spellcheck: "false",
},
},
-17
@@ -66,17 +66,6 @@ const ReportEditor = ({ content, onMarkdownChange }: ReportEditorProps) => {
const debouncedUpdates = useDebouncedCallback(
async (editor: EditorInstance) => {
// const json = editor.getJSON();
// // setCharsCount(editor.storage.characterCount.words());
// window.localStorage.setItem(
// "html-content",
// highlightCodeblocks(editor.getHTML()),
// );
// window.localStorage.setItem("novel-content", JSON.stringify(json));
// window.localStorage.setItem(
// "markdown",
// editor.storage.markdown.getMarkdown(),
// );
if (onMarkdownChange) {
const markdown = editor.storage.markdown.getMarkdown();
onMarkdownChange(markdown);
@@ -86,12 +75,6 @@ const ReportEditor = ({ content, onMarkdownChange }: ReportEditorProps) => {
500,
);
// useEffect(() => {
// const content = window.localStorage.getItem("novel-content");
// if (content) setInitialContent(JSON.parse(content));
// else setInitialContent(defaultEditorContent);
// }, []);
if (!initialContent) return null;
return (

Some files were not shown because too many files have changed in this diff.