feat(models): add vLLM provider support (#1860)

Adds support for vLLM 0.19.0 OpenAI-compatible chat endpoints and fixes the Qwen reasoning toggle so that flash mode can actually disable thinking.
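A minimal sketch of what the feature enables, assuming the provider talks to a vLLM server through the standard OpenAI Python client. The base URL, model name, and API key below are placeholders, not values from this commit; the chat_template_kwargs/enable_thinking pattern is how Qwen models served by vLLM expose the reasoning toggle.

    # Sketch only: call a vLLM OpenAI-compatible /v1/chat/completions endpoint
    # and disable Qwen's reasoning ("thinking") for flash-mode-style requests.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # assumed vLLM server address
        api_key="your-vllm-api-key",          # mirrors VLLM_API_KEY in the env example
    )

    resp = client.chat.completions.create(
        model="Qwen/Qwen3-8B",  # any Qwen model served by vLLM
        messages=[{"role": "user", "content": "Summarize vLLM in one sentence."}],
        # vLLM forwards chat_template_kwargs to the model's chat template;
        # Qwen3 templates read enable_thinking to switch reasoning on or off.
        extra_body={"chat_template_kwargs": {"enable_thinking": False}},
    )
    print(resp.choices[0].message.content)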

Co-authored-by: NmanQAQ <normangyao@qq.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Authored by NmanQAQ on 2026-04-06 15:18:34 +08:00, committed by GitHub
parent 5fd2c581f6
commit dd30e609f7
8 changed files with 534 additions and 5 deletions
@@ -17,6 +17,7 @@ INFOQUEST_API_KEY=your-infoquest-api-key
# DEEPSEEK_API_KEY=your-deepseek-api-key
# NOVITA_API_KEY=your-novita-api-key # OpenAI-compatible, see https://novita.ai
# MINIMAX_API_KEY=your-minimax-api-key # OpenAI-compatible, see https://platform.minimax.io
# VLLM_API_KEY=your-vllm-api-key # OpenAI-compatible
# FEISHU_APP_ID=your-feishu-app-id
# FEISHU_APP_SECRET=your-feishu-app-secret
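The new key is commented out like the other OpenAI-compatible providers, so it only takes effect once uncommented. A minimal sketch of how a provider could read it; VLLM_API_BASE is a hypothetical variable used here for illustration and is not part of this diff.

    # Sketch only: build an OpenAI-compatible client for a vLLM server from env vars.
    import os
    from openai import OpenAI

    def make_vllm_client() -> OpenAI:
        return OpenAI(
            base_url=os.environ.get("VLLM_API_BASE", "http://localhost:8000/v1"),
            # vLLM does not enforce authentication unless started with --api-key,
            # but the OpenAI SDK still requires a non-empty key string.
            api_key=os.environ.get("VLLM_API_KEY", "EMPTY"),
        )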