feat(models): add vLLM provider support (#1860)
Adds support for vLLM 0.19.0 OpenAI-compatible chat endpoints and fixes the Qwen reasoning toggle so that flash mode can actually disable thinking.

Co-authored-by: NmanQAQ <normangyao@qq.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
@@ -17,6 +17,7 @@ INFOQUEST_API_KEY=your-infoquest-api-key
 # DEEPSEEK_API_KEY=your-deepseek-api-key
 # NOVITA_API_KEY=your-novita-api-key # OpenAI-compatible, see https://novita.ai
 # MINIMAX_API_KEY=your-minimax-api-key # OpenAI-compatible, see https://platform.minimax.io
+# VLLM_API_KEY=your-vllm-api-key # OpenAI-compatible
 # FEISHU_APP_ID=your-feishu-app-id
 # FEISHU_APP_SECRET=your-feishu-app-secret
 
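Below is a minimal sketch (not this repository's actual provider code) of how the new VLLM_API_KEY setting could be exercised against a vLLM OpenAI-compatible server. Only VLLM_API_KEY comes from the diff above; VLLM_BASE_URL, the default localhost endpoint, and the model name are illustrative assumptions. The enable_thinking chat-template flag is the mechanism Qwen3 models served by vLLM expose for turning reasoning output off, which mirrors the "flash mode disables thinking" behavior described in the commit message.

import os

from openai import OpenAI

# Point the standard OpenAI client at a vLLM server.
# VLLM_BASE_URL is an assumed variable; http://localhost:8000/v1 is vLLM's default serve address.
client = OpenAI(
    base_url=os.environ.get("VLLM_BASE_URL", "http://localhost:8000/v1"),
    api_key=os.environ.get("VLLM_API_KEY", "EMPTY"),  # key introduced in this commit's env example
)

resp = client.chat.completions.create(
    model="Qwen/Qwen3-8B",  # assumed model; any model served by the vLLM instance works
    messages=[{"role": "user", "content": "Hello"}],
    # Qwen3 chat templates accept enable_thinking; passing False suppresses the
    # reasoning/thinking block in the response.
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print(resp.choices[0].message.content)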