Merge branch 'main' into staging
@@ -0,0 +1,8 @@
{
  "permissions": {
    "allow": [
      "Bash(npm install:*)",
      "Bash(npm test:*)"
    ]
  }
}
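The allow-list above uses prefix patterns: a rule like `Bash(npm install:*)` permits any shell command beginning with `npm install`. A minimal sketch of how such a prefix rule could be evaluated — illustrative only; `matches_rule` is a hypothetical helper, not the actual permission engine:

```bash
# Hypothetical sketch of prefix-rule matching for entries such as
# "Bash(npm install:*)". matches_rule is illustrative, not real tooling.
matches_rule() {
  rule_prefix="$1"; cmd="$2"
  case "$cmd" in
    "$rule_prefix"*) echo allow ;;   # command starts with the rule prefix
    *)               echo ask   ;;   # anything else needs confirmation
  esac
}

matches_rule "npm install" "npm install --save-dev vitest"   # allow
matches_rule "npm install" "npm run build"                   # ask
```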
+243
@@ -0,0 +1,243 @@
<p align="center">
  <img width="100%" alt="Hive Banner" src="https://storage.googleapis.com/aden-prod-assets/website/aden-title-card.png" />
</p>

<p align="center">
  <a href="README.md">English</a> |
  <a href="README.zh-CN.md">简体中文</a> |
  <a href="README.es.md">Español</a> |
  <a href="README.pt.md">Português</a> |
  <a href="README.ja.md">日本語</a> |
  <a href="README.ru.md">Русский</a>
</p>

[License](https://github.com/adenhq/hive/blob/main/LICENSE)
[Y Combinator](https://www.ycombinator.com/companies/aden)
[Docker Hub](https://hub.docker.com/u/adenhq)
[Discord](https://discord.com/invite/MXE49hrKDk)
[X](https://x.com/aden_hq)
[LinkedIn](https://www.linkedin.com/company/teamaden/)

<p align="center">
  <img src="https://img.shields.io/badge/AI_Agents-Self--Improving-brightgreen?style=flat-square" alt="AI Agents" />
  <img src="https://img.shields.io/badge/Multi--Agent-Systems-blue?style=flat-square" alt="Multi-Agent" />
  <img src="https://img.shields.io/badge/Goal--Driven-Development-purple?style=flat-square" alt="Goal-Driven" />
  <img src="https://img.shields.io/badge/Human--in--the--Loop-orange?style=flat-square" alt="HITL" />
  <img src="https://img.shields.io/badge/Production--Ready-red?style=flat-square" alt="Production" />
</p>
<p align="center">
  <img src="https://img.shields.io/badge/OpenAI-supported-412991?style=flat-square&logo=openai" alt="OpenAI" />
  <img src="https://img.shields.io/badge/Anthropic-supported-d4a574?style=flat-square" alt="Anthropic" />
  <img src="https://img.shields.io/badge/Google_Gemini-supported-4285F4?style=flat-square&logo=google" alt="Gemini" />
  <img src="https://img.shields.io/badge/MCP-19_Tools-00ADD8?style=flat-square" alt="MCP" />
</p>

## Descripción General

Construye agentes de IA confiables y auto-mejorables sin codificar flujos de trabajo. Define tu objetivo a través de una conversación con un agente de codificación, y el framework genera un grafo de nodos con código de conexión creado dinámicamente. Cuando algo falla, el framework captura los datos del error, evoluciona el agente a través del agente de codificación y lo vuelve a desplegar. Los nodos de intervención humana integrados, la gestión de credenciales y el monitoreo en tiempo real te dan control sin sacrificar la adaptabilidad.

Visita [adenhq.com](https://adenhq.com) para documentación completa, ejemplos y guías.

## ¿Qué es Aden?

<p align="center">
  <img width="100%" alt="Aden Architecture" src="docs/assets/aden-architecture-diagram.jpg" />
</p>

Aden es una plataforma para construir, desplegar, operar y adaptar agentes de IA:

- **Construir** - Un Agente de Codificación genera Agentes de Trabajo especializados (Ventas, Marketing, Operaciones) a partir de objetivos en lenguaje natural
- **Desplegar** - Despliegue headless con integración CI/CD y gestión completa del ciclo de vida de API
- **Operar** - Monitoreo en tiempo real, observabilidad y guardarraíles de ejecución mantienen los agentes confiables
- **Adaptar** - Evaluación continua, supervisión y adaptación aseguran que los agentes mejoren con el tiempo
- **Infraestructura** - Memoria compartida, integraciones LLM, herramientas y habilidades impulsan cada agente

## Enlaces Rápidos

- **[Documentación](https://docs.adenhq.com/)** - Guías completas y referencia de API
- **[Guía de Auto-Hospedaje](https://docs.adenhq.com/getting-started/quickstart)** - Despliega Hive en tu infraestructura
- **[Registro de Cambios](https://github.com/adenhq/hive/releases)** - Últimas actualizaciones y versiones
- **[Reportar Problemas](https://github.com/adenhq/hive/issues)** - Reportes de bugs y solicitudes de funciones

## Inicio Rápido

### Prerrequisitos

- [Docker](https://docs.docker.com/get-docker/) (v20.10+)
- [Docker Compose](https://docs.docker.com/compose/install/) (v2.0+)

### Instalación

```bash
# Clonar el repositorio
git clone https://github.com/adenhq/hive.git
cd hive

# Copiar y configurar
cp config.yaml.example config.yaml

# Ejecutar configuración e iniciar servicios
npm run setup
docker compose up
```

**Acceder a la aplicación:**

- Panel de Control: http://localhost:3000
- API: http://localhost:4000
- Salud: http://localhost:4000/health
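Once the stack is up, a quick reachability check against the endpoints listed above can save a round of debugging. The ports are the README defaults; adjust the loop if your `config.yaml` overrides them:

```bash
# Reachability sketch for the default endpoints listed above.
# Ports are assumed from the README defaults; a "down" line just means
# that service is not reachable yet (or curl is unavailable).
for url in http://localhost:3000 http://localhost:4000/health; do
  if curl -fsS -o /dev/null --max-time 2 "$url" 2>/dev/null; then
    echo "up: $url"
  else
    echo "down: $url"
  fi
done
```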

## Características

- **Desarrollo Orientado a Objetivos** - Define objetivos en lenguaje natural; el agente de codificación genera el grafo de agentes y el código de conexión para lograrlos
- **Agentes Auto-Adaptables** - El framework captura los fallos y actualiza tanto los objetivos como el grafo de agentes
- **Conexiones de Nodos Dinámicas** - Sin aristas predefinidas; el código de conexión es generado por cualquier LLM capaz basándose en tus objetivos
- **Nodos Envueltos en SDK** - Cada nodo obtiene memoria compartida, memoria RLM local, monitoreo, herramientas y acceso LLM de serie
- **Humano en el Bucle** - Nodos de intervención que pausan la ejecución para entrada humana, con tiempos de espera y escalación configurables
- **Observabilidad en Tiempo Real** - Streaming WebSocket para monitoreo en vivo de la ejecución de agentes, decisiones y comunicación entre nodos
- **Control de Costos y Presupuesto** - Establece límites de gasto, limitadores y políticas de degradación automática de modelos
- **Listo para Producción** - Auto-hospedable, construido para escala y confiabilidad

## Por Qué Aden

Los frameworks de agentes tradicionales requieren que diseñes manualmente flujos de trabajo, definas interacciones de agentes y manejes fallos de forma reactiva. Aden invierte este paradigma: **describes resultados, y el sistema se construye solo**.

### La Ventaja de Aden

| Frameworks Tradicionales | Aden |
|--------------------------|------|
| Codificar flujos de trabajo de agentes | Describir objetivos en lenguaje natural |
| Definición manual de grafos | Grafos de agentes auto-generados |
| Manejo reactivo de errores | Auto-evolución proactiva |
| Configuraciones de herramientas estáticas | Nodos dinámicos envueltos en SDK |
| Configuración de monitoreo separada | Observabilidad en tiempo real integrada |
| Gestión de presupuesto DIY | Controles de costos y degradación integrados |

### Cómo Funciona

1. **Define Tu Objetivo** → Describe lo que quieres lograr en español simple
2. **El Agente de Codificación Genera** → Crea el grafo de agentes, el código de conexión y los casos de prueba
3. **Los Trabajadores Ejecutan** → Los nodos envueltos en SDK se ejecutan con observabilidad completa y acceso a herramientas
4. **El Plano de Control Monitorea** → Métricas en tiempo real, aplicación de presupuesto, gestión de políticas
5. **Auto-Mejora** → En caso de fallo, el sistema evoluciona el grafo y lo vuelve a desplegar automáticamente
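The five steps above form a loop: run, observe failures, evolve the graph, redeploy. A stubbed sketch of that cycle — every function here is a fake stand-in, not a real Aden API:

```bash
# Stub of the goal→generate→execute→monitor→self-improve loop described
# above. run_agent is a fake stand-in that only "succeeds" from the
# second evolved generation onward.
run_agent() { [ "$1" -ge 2 ] && echo ok || echo fail; }

generation=0
until [ "$(run_agent "$generation")" = ok ]; do
  echo "generation $generation failed - evolving graph"
  generation=$((generation + 1))
done
echo "deployed generation $generation"
```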

## Estructura del Proyecto

```
hive/
├── honeycomb/            # Frontend (React + TypeScript + Vite)
├── hive/                 # Backend (Node.js + TypeScript + Express)
├── docs/                 # Documentación
├── scripts/              # Scripts de construcción y utilidades
├── config.yaml.example   # Plantilla de configuración
└── docker-compose.yml    # Orquestación de contenedores
```

## Desarrollo

### Desarrollo Local con Recarga en Caliente

```bash
# Copiar sobrescrituras de desarrollo
cp docker-compose.override.yml.example docker-compose.override.yml

# Iniciar con recarga en caliente habilitada
docker compose up
```

### Ejecutar Sin Docker

```bash
# Instalar dependencias
npm install

# Generar archivos de entorno
npm run generate:env

# Iniciar frontend (en honeycomb/, en una terminal)
cd honeycomb && npm run dev

# Iniciar backend (en hive/, en otra terminal desde la raíz del repositorio)
cd hive && npm run dev
```

## Documentación

- **[Guía del Desarrollador](DEVELOPER.md)** - Guía completa para desarrolladores
- [Primeros Pasos](docs/getting-started.md) - Instrucciones de configuración rápida
- [Guía de Configuración](docs/configuration.md) - Todas las opciones de configuración
- [Visión General de Arquitectura](docs/architecture.md) - Diseño y estructura del sistema

## Hoja de Ruta

El Framework de Agentes Aden tiene como objetivo ayudar a los desarrolladores a construir agentes auto-adaptativos orientados a resultados. Encuentra nuestra hoja de ruta aquí:

[ROADMAP.md](ROADMAP.md)

## Comunidad y Soporte

Usamos [Discord](https://discord.com/invite/MXE49hrKDk) para soporte, solicitudes de funciones y discusiones de la comunidad.

- Discord - [Únete a nuestra comunidad](https://discord.com/invite/MXE49hrKDk)
- Twitter/X - [@adenhq](https://x.com/aden_hq)
- LinkedIn - [Página de la Empresa](https://www.linkedin.com/company/teamaden/)

## Contribuir

¡Damos la bienvenida a las contribuciones! Por favor consulta [CONTRIBUTING.md](CONTRIBUTING.md) para las directrices.

1. Haz fork del repositorio
2. Crea tu rama de funcionalidad (`git checkout -b feature/amazing-feature`)
3. Haz commit de tus cambios (`git commit -m 'Add amazing feature'`)
4. Haz push a la rama (`git push origin feature/amazing-feature`)
5. Abre un Pull Request

## Únete a Nuestro Equipo

**¡Estamos contratando!** Únete a nosotros en roles de ingeniería, investigación y comercialización.

[Ver Posiciones Abiertas](https://jobs.adenhq.com/a8cec478-cdbc-473c-bbd4-f4b7027ec193/applicant)

## Seguridad

Para preocupaciones de seguridad, por favor consulta [SECURITY.md](SECURITY.md).

## Licencia

Este proyecto está licenciado bajo la Licencia Apache 2.0 - consulta el archivo [LICENSE](LICENSE) para más detalles.

## Preguntas Frecuentes (FAQ)

**P: ¿Aden depende de LangChain u otros frameworks de agentes?**

No. Aden está construido desde cero sin dependencias de LangChain, CrewAI u otros frameworks de agentes. El framework está diseñado para ser ligero y flexible, generando grafos de agentes dinámicamente en lugar de depender de componentes predefinidos.

**P: ¿Qué proveedores de LLM soporta Aden?**

Aden soporta OpenAI (GPT-4, GPT-4o), Anthropic (modelos Claude) y Google Gemini de serie. La arquitectura es agnóstica al proveedor a través de la abstracción del SDK, con integración de LiteLLM en la hoja de ruta para soporte expandido de modelos.

**P: ¿Aden es de código abierto?**

Sí, Aden es completamente de código abierto bajo la Licencia Apache 2.0. Fomentamos activamente las contribuciones y la colaboración de la comunidad.

**P: ¿Qué opciones de despliegue soporta Aden?**

Aden soporta despliegue con Docker Compose de serie, con configuraciones tanto de producción como de desarrollo. Los despliegues auto-hospedados funcionan en cualquier infraestructura que soporte Docker. Las opciones de despliegue en la nube y las configuraciones listas para Kubernetes están en la hoja de ruta.

**P: ¿Puede Aden manejar casos de uso complejos a escala de producción?**

Sí. Aden está explícitamente diseñado para entornos de producción, con características como recuperación automática de fallos, observabilidad en tiempo real, controles de costos y soporte de escalado horizontal. El framework maneja tanto automatizaciones simples como flujos de trabajo complejos multi-agente.

**P: ¿Aden soporta flujos de trabajo con humano en el bucle?**

Sí, Aden soporta completamente flujos de trabajo con humano en el bucle a través de nodos de intervención que pausan la ejecución para entrada humana. Estos incluyen tiempos de espera configurables y políticas de escalación, permitiendo una colaboración fluida entre expertos humanos y agentes de IA.

**P: ¿Cómo puedo contribuir a Aden?**

¡Las contribuciones son bienvenidas! Haz fork del repositorio, crea tu rama de funcionalidad, implementa tus cambios y envía un pull request. Consulta [CONTRIBUTING.md](CONTRIBUTING.md) para directrices detalladas.

---

<p align="center">
  Hecho con 🔥 Pasión en San Francisco
</p>
+243
@@ -0,0 +1,243 @@

<p align="center">
  <img width="100%" alt="Hive Banner" src="https://storage.googleapis.com/aden-prod-assets/website/aden-title-card.png" />
</p>

<p align="center">
  <a href="README.md">English</a> |
  <a href="README.zh-CN.md">简体中文</a> |
  <a href="README.es.md">Español</a> |
  <a href="README.pt.md">Português</a> |
  <a href="README.ja.md">日本語</a> |
  <a href="README.ru.md">Русский</a>
</p>

[License](https://github.com/adenhq/hive/blob/main/LICENSE)
[Y Combinator](https://www.ycombinator.com/companies/aden)
[Docker Hub](https://hub.docker.com/u/adenhq)
[Discord](https://discord.com/invite/MXE49hrKDk)
[X](https://x.com/aden_hq)
[LinkedIn](https://www.linkedin.com/company/teamaden/)

<p align="center">
  <img src="https://img.shields.io/badge/AI_Agents-Self--Improving-brightgreen?style=flat-square" alt="AI Agents" />
  <img src="https://img.shields.io/badge/Multi--Agent-Systems-blue?style=flat-square" alt="Multi-Agent" />
  <img src="https://img.shields.io/badge/Goal--Driven-Development-purple?style=flat-square" alt="Goal-Driven" />
  <img src="https://img.shields.io/badge/Human--in--the--Loop-orange?style=flat-square" alt="HITL" />
  <img src="https://img.shields.io/badge/Production--Ready-red?style=flat-square" alt="Production" />
</p>
<p align="center">
  <img src="https://img.shields.io/badge/OpenAI-supported-412991?style=flat-square&logo=openai" alt="OpenAI" />
  <img src="https://img.shields.io/badge/Anthropic-supported-d4a574?style=flat-square" alt="Anthropic" />
  <img src="https://img.shields.io/badge/Google_Gemini-supported-4285F4?style=flat-square&logo=google" alt="Gemini" />
  <img src="https://img.shields.io/badge/MCP-19_Tools-00ADD8?style=flat-square" alt="MCP" />
</p>

## 概要

ワークフローをハードコーディングせずに、信頼性の高い自己改善型AIエージェントを構築できます。コーディングエージェントとの会話を通じて目標を定義すると、フレームワークが動的に作成された接続コードを持つノードグラフを生成します。問題が発生すると、フレームワークは障害データをキャプチャし、コーディングエージェントを通じてエージェントを進化させ、再デプロイします。組み込みのヒューマンインザループノード、認証情報管理、リアルタイムモニタリングにより、適応性を損なうことなく制御を維持できます。

完全なドキュメント、例、ガイドについては [adenhq.com](https://adenhq.com) をご覧ください。

## Adenとは

<p align="center">
  <img width="100%" alt="Aden Architecture" src="docs/assets/aden-architecture-diagram.jpg" />
</p>

Adenは、AIエージェントの構築、デプロイ、運用、適応のためのプラットフォームです:

- **構築** - コーディングエージェントが自然言語の目標から専門的なワーカーエージェント(セールス、マーケティング、オペレーション)を生成
- **デプロイ** - CI/CD統合と完全なAPIライフサイクル管理を備えたヘッドレスデプロイメント
- **運用** - リアルタイムモニタリング、可観測性、ランタイムガードレールがエージェントの信頼性を維持
- **適応** - 継続的な評価、監督、適応により、エージェントは時間とともに改善
- **インフラ** - 共有メモリ、LLM統合、ツール、スキルがすべてのエージェントを支援

## クイックリンク

- **[ドキュメント](https://docs.adenhq.com/)** - 完全なガイドとAPIリファレンス
- **[セルフホスティングガイド](https://docs.adenhq.com/getting-started/quickstart)** - インフラストラクチャへのHiveデプロイ
- **[変更履歴](https://github.com/adenhq/hive/releases)** - 最新の更新とリリース
- **[問題を報告](https://github.com/adenhq/hive/issues)** - バグレポートと機能リクエスト

## クイックスタート

### 前提条件

- [Docker](https://docs.docker.com/get-docker/) (v20.10+)
- [Docker Compose](https://docs.docker.com/compose/install/) (v2.0+)

### インストール

```bash
# リポジトリをクローン
git clone https://github.com/adenhq/hive.git
cd hive

# コピーして設定
cp config.yaml.example config.yaml

# セットアップを実行してサービスを開始
npm run setup
docker compose up
```

**アプリケーションにアクセス:**

- ダッシュボード:http://localhost:3000
- API:http://localhost:4000
- ヘルスチェック:http://localhost:4000/health

## 機能

- **目標駆動開発** - 自然言語で目標を定義すると、コーディングエージェントがそれを達成するためのエージェントグラフと接続コードを生成
- **自己適応エージェント** - フレームワークが障害をキャプチャし、目標とエージェントグラフを更新
- **動的ノード接続** - 事前定義されたエッジなし。接続コードは目標に基づいて任意の対応LLMによって生成
- **SDKラップノード** - すべてのノードが共有メモリ、ローカルRLMメモリ、モニタリング、ツール、LLMアクセスを標準装備
- **ヒューマンインザループ** - 設定可能なタイムアウトとエスカレーションを備えた、人間の入力のために実行を一時停止する介入ノード
- **リアルタイム可観測性** - エージェント実行、決定、ノード間通信のライブモニタリングのためのWebSocketストリーミング
- **コストと予算管理** - 支出制限、スロットル、自動モデル劣化ポリシーを設定
- **本番環境対応** - セルフホスト可能、スケールと信頼性のために構築

## なぜAdenか

従来のエージェントフレームワークでは、ワークフローを手動で設計し、エージェントの相互作用を定義し、障害を事後的に処理する必要があります。Adenはこのパラダイムを逆転させます。**結果を記述すれば、システムが自ら構築します**。

### Adenの優位性

| 従来のフレームワーク | Aden |
|----------------------|------|
| エージェントワークフローをハードコード | 自然言語で目標を記述 |
| 手動でグラフを定義 | 自動生成されるエージェントグラフ |
| 事後的なエラー処理 | プロアクティブな自己進化 |
| 静的なツール設定 | 動的なSDKラップノード |
| 別途モニタリング設定 | 組み込みのリアルタイム可観測性 |
| DIY予算管理 | 統合されたコスト制御と劣化 |

### 仕組み

1. **目標を定義** → 達成したいことを平易な言葉で記述
2. **コーディングエージェントが生成** → エージェントグラフ、接続コード、テストケースを作成
3. **ワーカーが実行** → SDKラップノードが完全な可観測性とツールアクセスで実行
4. **コントロールプレーンが監視** → リアルタイムメトリクス、予算執行、ポリシー管理
5. **自己改善** → 障害時、システムがグラフを進化させ自動的に再デプロイ

## プロジェクト構造

```
hive/
├── honeycomb/            # フロントエンド (React + TypeScript + Vite)
├── hive/                 # バックエンド (Node.js + TypeScript + Express)
├── docs/                 # ドキュメント
├── scripts/              # ビルドとユーティリティスクリプト
├── config.yaml.example   # 設定テンプレート
└── docker-compose.yml    # コンテナオーケストレーション
```

## 開発

### ホットリロードでのローカル開発

```bash
# 開発用オーバーライドをコピー
cp docker-compose.override.yml.example docker-compose.override.yml

# ホットリロードを有効にして開始
docker compose up
```

### Dockerなしで実行

```bash
# 依存関係をインストール
npm install

# 環境ファイルを生成
npm run generate:env

# フロントエンドを開始(honeycomb/内、別のターミナルで)
cd honeycomb && npm run dev

# バックエンドを開始(hive/内、リポジトリルートから別のターミナルで)
cd hive && npm run dev
```

## ドキュメント

- **[開発者ガイド](DEVELOPER.md)** - 開発者向け総合ガイド
- [はじめに](docs/getting-started.md) - クイックセットアップ手順
- [設定ガイド](docs/configuration.md) - すべての設定オプション
- [アーキテクチャ概要](docs/architecture.md) - システム設計と構造

## ロードマップ

Adenエージェントフレームワークは、開発者が結果志向で自己適応するエージェントを構築できるよう支援することを目指しています。ロードマップはこちらをご覧ください:

[ROADMAP.md](ROADMAP.md)

## コミュニティとサポート

サポート、機能リクエスト、コミュニティディスカッションには[Discord](https://discord.com/invite/MXE49hrKDk)を使用しています。

- Discord - [コミュニティに参加](https://discord.com/invite/MXE49hrKDk)
- Twitter/X - [@adenhq](https://x.com/aden_hq)
- LinkedIn - [会社ページ](https://www.linkedin.com/company/teamaden/)

## 貢献

貢献を歓迎します!ガイドラインについては[CONTRIBUTING.md](CONTRIBUTING.md)をご覧ください。

1. リポジトリをフォーク
2. 機能ブランチを作成 (`git checkout -b feature/amazing-feature`)
3. 変更をコミット (`git commit -m 'Add amazing feature'`)
4. ブランチにプッシュ (`git push origin feature/amazing-feature`)
5. プルリクエストを開く

## チームに参加

**採用中です!** エンジニアリング、リサーチ、マーケティングの役職で私たちに参加してください。

[オープンポジションを見る](https://jobs.adenhq.com/a8cec478-cdbc-473c-bbd4-f4b7027ec193/applicant)

## セキュリティ

セキュリティに関する懸念については、[SECURITY.md](SECURITY.md)をご覧ください。

## ライセンス

このプロジェクトはApache License 2.0の下でライセンスされています。詳細は[LICENSE](LICENSE)ファイルをご覧ください。

## よくある質問 (FAQ)

**Q: AdenはLangChainや他のエージェントフレームワークに依存していますか?**

いいえ。AdenはLangChain、CrewAI、その他のエージェントフレームワークに依存せずにゼロから構築されています。フレームワークは軽量で柔軟に設計されており、事前定義されたコンポーネントに依存するのではなく、エージェントグラフを動的に生成します。

**Q: AdenはどのLLMプロバイダーをサポートしていますか?**

AdenはOpenAI(GPT-4、GPT-4o)、Anthropic(Claudeモデル)、Google Geminiを標準でサポートしています。アーキテクチャはSDK抽象化によりプロバイダー非依存であり、拡張モデルサポートのためのLiteLLM統合がロードマップにあります。

**Q: Adenはオープンソースですか?**

はい、AdenはApache License 2.0の下で完全にオープンソースです。コミュニティの貢献とコラボレーションを積極的に奨励しています。

**Q: Adenはどのデプロイオプションをサポートしていますか?**

Adenは本番環境と開発環境の両方の設定でDocker Composeデプロイを標準でサポートしています。セルフホストデプロイはDockerをサポートする任意のインフラストラクチャで動作します。クラウドデプロイオプションとKubernetes対応設定はロードマップにあります。

**Q: Adenは複雑な本番規模のユースケースを処理できますか?**

はい。Adenは自動障害回復、リアルタイム可観測性、コスト制御、水平スケーリングサポートなどの機能を備え、本番環境向けに明示的に設計されています。フレームワークは単純な自動化から複雑なマルチエージェントワークフローまで処理できます。

**Q: Adenはヒューマンインザループワークフローをサポートしていますか?**

はい、Adenは人間の入力のために実行を一時停止する介入ノードを通じて、ヒューマンインザループワークフローを完全にサポートしています。設定可能なタイムアウトとエスカレーションポリシーが含まれており、人間の専門家とAIエージェントのシームレスなコラボレーションを可能にします。

**Q: Adenに貢献するにはどうすればよいですか?**

貢献を歓迎します!リポジトリをフォークし、機能ブランチを作成し、変更を実装して、プルリクエストを送信してください。詳細なガイドラインについては[CONTRIBUTING.md](CONTRIBUTING.md)をご覧ください。

---

<p align="center">
  サンフランシスコで 🔥 情熱を込めて作成
</p>
@@ -2,6 +2,15 @@
  <img width="100%" alt="Hive Banner" src="https://storage.googleapis.com/aden-prod-assets/website/aden-title-card.png" />
</p>

<p align="center">
  <a href="README.md">English</a> |
  <a href="README.zh-CN.md">简体中文</a> |
  <a href="README.es.md">Español</a> |
  <a href="README.pt.md">Português</a> |
  <a href="README.ja.md">日本語</a> |
  <a href="README.ru.md">Русский</a>
</p>

[License](https://github.com/adenhq/hive/blob/main/LICENSE)
[Y Combinator](https://www.ycombinator.com/companies/aden)
[Docker Hub](https://hub.docker.com/u/adenhq)

@@ -29,6 +38,21 @@ Build reliable, self-improving AI agents without hardcoding workflows. Define yo

Visit [adenhq.com](https://adenhq.com) for complete documentation, examples, and guides.

## What is Aden

<p align="center">
  <img width="100%" alt="Aden Architecture" src="docs/assets/aden-architecture-diagram.jpg" />
</p>

Aden is a platform for building, deploying, operating, and adapting AI agents:

- **Build** - A Coding Agent generates specialized Worker Agents (Sales, Marketing, Ops) from natural language goals
- **Deploy** - Headless deployment with CI/CD integration and full API lifecycle management
- **Operate** - Real-time monitoring, observability, and runtime guardrails keep agents reliable
- **Adapt** - Continuous evaluation, supervision, and adaptation ensure agents improve over time
- **Infra** - Shared memory, LLM integrations, tools, and skills power every agent

## Quick Links

- **[Documentation](https://docs.adenhq.com/)** - Complete guides and API reference
@@ -290,11 +314,11 @@ No. Aden is built from the ground up with no dependencies on LangChain, CrewAI,

**Q: What LLM providers does Aden support?**

-Aden supports OpenAI (GPT-4, GPT-4o), Anthropic (Claude models), and Google Gemini out of the box. The architecture is provider-agnostic through SDK abstraction, with LiteLLM integration on the roadmap for expanded model support.
+Aden supports 100+ LLM providers through LiteLLM integration, including OpenAI (GPT-4, GPT-4o), Anthropic (Claude models), Google Gemini, Mistral, Groq, and many more. Simply set the appropriate API key environment variable and specify the model name.

**Q: Can I use Aden with local AI models like Ollama?**

-Local model support through LiteLLM integration is on our roadmap. The SDK's provider-agnostic design means adding local model support will be straightforward once implemented.
+Yes! Aden supports local models through LiteLLM. Simply use the model name format `ollama/model-name` (e.g., `ollama/llama3`, `ollama/mistral`) and ensure Ollama is running locally.
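The `provider/model-name` convention above splits on the first slash. A throwaway sketch of that parsing — `provider_of` is a hypothetical helper for illustration, not part of any Aden or LiteLLM API:

```bash
# Illustrative parsing of LiteLLM-style model ids such as "ollama/llama3".
# provider_of is a hypothetical helper, not real Aden/LiteLLM tooling.
provider_of() {
  case "$1" in
    */*) echo "${1%%/*}" ;;   # prefix before the first slash is the provider
    *)   echo "default"  ;;   # bare model name: no explicit provider
  esac
}

provider_of "ollama/llama3"   # ollama
provider_of "gpt-4o"          # default
```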

**Q: What makes Aden different from other agent frameworks?**

+243
@@ -0,0 +1,243 @@
|
||||
<p align="center">
|
||||
<img width="100%" alt="Hive Banner" src="https://storage.googleapis.com/aden-prod-assets/website/aden-title-card.png" />
|
||||
</p>
|
||||
|
||||
<p align="center">
|
||||
<a href="README.md">English</a> |
|
||||
<a href="README.zh-CN.md">简体中文</a> |
|
||||
<a href="README.es.md">Español</a> |
|
||||
<a href="README.pt.md">Português</a> |
|
||||
<a href="README.ja.md">日本語</a> |
|
||||
<a href="README.ru.md">Русский</a>
|
||||
</p>
|
||||
|
||||
[](https://github.com/adenhq/hive/blob/main/LICENSE)
|
||||
[](https://www.ycombinator.com/companies/aden)
|
||||
[](https://hub.docker.com/u/adenhq)
|
||||
[](https://discord.com/invite/MXE49hrKDk)
|
||||
[](https://x.com/aden_hq)
|
||||
[](https://www.linkedin.com/company/teamaden/)
|
||||
|
||||
<p align="center">
|
||||
<img src="https://img.shields.io/badge/AI_Agents-Self--Improving-brightgreen?style=flat-square" alt="AI Agents" />
|
||||
<img src="https://img.shields.io/badge/Multi--Agent-Systems-blue?style=flat-square" alt="Multi-Agent" />
|
||||
<img src="https://img.shields.io/badge/Goal--Driven-Development-purple?style=flat-square" alt="Goal-Driven" />
|
||||
<img src="https://img.shields.io/badge/Human--in--the--Loop-orange?style=flat-square" alt="HITL" />
|
||||
<img src="https://img.shields.io/badge/Production--Ready-red?style=flat-square" alt="Production" />
|
||||
</p>
|
||||
<p align="center">
|
||||
<img src="https://img.shields.io/badge/OpenAI-supported-412991?style=flat-square&logo=openai" alt="OpenAI" />
|
||||
<img src="https://img.shields.io/badge/Anthropic-supported-d4a574?style=flat-square" alt="Anthropic" />
|
||||
<img src="https://img.shields.io/badge/Google_Gemini-supported-4285F4?style=flat-square&logo=google" alt="Gemini" />
|
||||
<img src="https://img.shields.io/badge/MCP-19_Tools-00ADD8?style=flat-square" alt="MCP" />
|
||||
</p>
|
||||
|
||||
## Visão Geral
|
||||
|
||||
Construa agentes de IA confiáveis e auto-aperfeiçoáveis sem codificar fluxos de trabalho. Defina seu objetivo através de uma conversa com um agente de codificação, e o framework gera um grafo de nós com código de conexão criado dinamicamente. Quando algo quebra, o framework captura dados de falha, evolui o agente através do agente de codificação e reimplanta. Nós de intervenção humana integrados, gerenciamento de credenciais e monitoramento em tempo real dão a você controle sem sacrificar a adaptabilidade.
|
||||
|
||||
Visite [adenhq.com](https://adenhq.com) para documentação completa, exemplos e guias.
|
||||
|
||||
## O que é Aden
|
||||
|
||||
<p align="center">
|
||||
<img width="100%" alt="Aden Architecture" src="docs/assets/aden-architecture-diagram.jpg" />
|
||||
</p>
|
||||
|
||||
Aden é uma plataforma para construir, implantar, operar e adaptar agentes de IA:
|
||||
|
||||
- **Construir** - Um Agente de Codificação gera Agentes de Trabalho especializados (Vendas, Marketing, Operações) a partir de objetivos em linguagem natural
|
||||
- **Implantar** - Implantação headless com integração CI/CD e gerenciamento completo do ciclo de vida de API
|
||||
- **Operar** - Monitoramento em tempo real, observabilidade e guardrails de runtime mantêm os agentes confiáveis
|
||||
- **Adaptar** - Avaliação contínua, supervisão e adaptação garantem que os agentes melhorem ao longo do tempo
|
||||
- **Infraestrutura** - Memória compartilhada, integrações LLM, ferramentas e habilidades alimentam cada agente
|
||||
|
||||
## Links Rápidos
|
||||
|
||||
- **[Documentação](https://docs.adenhq.com/)** - Guias completos e referência de API
|
||||
- **[Guia de Auto-Hospedagem](https://docs.adenhq.com/getting-started/quickstart)** - Implante o Hive em sua infraestrutura
|
||||
- **[Changelog](https://github.com/adenhq/hive/releases)** - Últimas atualizações e versões
|
||||
- **[Reportar Problemas](https://github.com/adenhq/hive/issues)** - Relatórios de bugs e solicitações de funcionalidades
|
||||
|
||||
## Início Rápido
|
||||
|
||||
### Pré-requisitos
|
||||
|
||||
- [Docker](https://docs.docker.com/get-docker/) (v20.10+)
|
||||
- [Docker Compose](https://docs.docker.com/compose/install/) (v2.0+)
|
||||
|
||||
### Instalação
|
||||
|
||||
```bash
|
||||
# Clonar o repositório
|
||||
git clone https://github.com/adenhq/hive.git
|
||||
cd hive
|
||||
|
||||
# Copiar e configurar
|
||||
cp config.yaml.example config.yaml
|
||||
|
||||
# Executar configuração e iniciar serviços
|
||||
npm run setup
|
||||
docker compose up
|
||||
```
|
||||
|
||||
**Acessar a aplicação:**
|
||||
|
||||
- Dashboard: http://localhost:3000
|
||||
- API: http://localhost:4000
|
||||
- Health: http://localhost:4000/health
|
||||
|
||||
## Funcionalidades
|
||||
|
||||
- **Desenvolvimento Orientado a Objetivos** - Defina objetivos em linguagem natural; o agente de codificação gera o grafo de agentes e o código de conexão para alcançá-los
- **Agentes Auto-Adaptáveis** - O framework captura falhas, atualiza os objetivos e evolui o grafo de agentes
- **Conexões de Nós Dinâmicas** - Sem arestas predefinidas; o código de conexão é gerado por qualquer LLM capaz, com base em seus objetivos
- **Nós Envolvidos em SDK** - Cada nó recebe memória compartilhada, memória RLM local, monitoramento, ferramentas e acesso a LLM prontos para uso
- **Humano no Loop** - Nós de intervenção que pausam a execução para entrada humana, com timeouts e escalonamento configuráveis
- **Observabilidade em Tempo Real** - Streaming via WebSocket para monitoramento ao vivo da execução dos agentes, das decisões e da comunicação entre nós
- **Controle de Custo e Orçamento** - Defina limites de gastos, throttles e políticas de degradação automática de modelo
- **Pronto para Produção** - Auto-hospedável, construído para escala e confiabilidade

## Por que Aden

Frameworks de agentes tradicionais exigem que você projete fluxos de trabalho manualmente, defina as interações entre agentes e lide com falhas de forma reativa. O Aden inverte esse paradigma: **você descreve resultados, e o sistema se constrói sozinho**.

### A Vantagem Aden

| Frameworks Tradicionais | Aden |
|-------------------------|------|
| Codificar fluxos de trabalho de agentes | Descrever objetivos em linguagem natural |
| Definição manual de grafos | Grafos de agentes auto-gerados |
| Tratamento reativo de erros | Auto-evolução proativa |
| Configurações de ferramentas estáticas | Nós dinâmicos envolvidos em SDK |
| Configuração de monitoramento separada | Observabilidade em tempo real integrada |
| Gerenciamento de orçamento por conta própria | Controles de custo e degradação integrados |

### Como Funciona

1. **Defina Seu Objetivo** → Descreva o que você quer alcançar em português simples
2. **O Agente de Codificação Gera** → Cria o grafo de agentes, o código de conexão e os casos de teste
3. **Workers Executam** → Nós envolvidos em SDK executam com observabilidade completa e acesso a ferramentas
4. **O Plano de Controle Monitora** → Métricas em tempo real, aplicação de orçamento e gerenciamento de políticas
5. **Auto-Aperfeiçoamento** → Em caso de falha, o sistema evolui o grafo e o reimplanta automaticamente
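Os passos acima podem ser esboçados em poucas linhas. O exemplo abaixo é um esboço mínimo e hipotético (os nomes e a estrutura são suposições nossas, não a API real do Aden) de um grafo de nós cujo "código de conexão" escolhe a próxima aresta em tempo de execução:

```python
# Esboço hipotético: nós como funções e "código de conexão" avaliado
# em tempo de execução. Não é a API real do Aden.

def extrair(dados):
    # Nó 1: normaliza a entrada
    return {"texto": dados.strip().lower()}

def classificar(estado):
    # Nó 2: decide a rota com base no conteúdo
    estado["rota"] = "urgente" if "erro" in estado["texto"] else "normal"
    return estado

def responder(estado):
    # Nó 3: produz o resultado final
    estado["resposta"] = f"tratado como {estado['rota']}"
    return estado

# "Código de conexão" gerado dinamicamente: decide a próxima aresta
conexoes = {
    extrair: lambda estado: classificar,
    classificar: lambda estado: responder,
    responder: lambda estado: None,
}

def executar_grafo(entrada):
    no, estado = extrair, entrada
    while no is not None:
        estado = no(estado)
        no = conexoes[no](estado)
    return estado

resultado = executar_grafo("  ERRO no pagamento  ")
```

Num sistema real, o dicionário `conexoes` seria gerado pelo agente de codificação a partir do objetivo, e não escrito à mão.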

## Estrutura do Projeto

```
hive/
├── honeycomb/            # Frontend (React + TypeScript + Vite)
├── hive/                 # Backend (Node.js + TypeScript + Express)
├── docs/                 # Documentação
├── scripts/              # Scripts de build e utilitários
├── config.yaml.example   # Template de configuração
└── docker-compose.yml    # Orquestração de containers
```

## Desenvolvimento

### Desenvolvimento Local com Hot Reload

```bash
# Copiar overrides de desenvolvimento
cp docker-compose.override.yml.example docker-compose.override.yml

# Iniciar com hot reload habilitado
docker compose up
```

### Executar Sem Docker

```bash
# Instalar dependências
npm install

# Gerar arquivos de ambiente
npm run generate:env

# Iniciar frontend (em honeycomb/)
cd honeycomb && npm run dev

# Iniciar backend (em hive/)
cd hive && npm run dev
```

## Documentação

- **[Guia do Desenvolvedor](DEVELOPER.md)** - Guia abrangente para desenvolvedores
- [Começando](docs/getting-started.md) - Instruções de configuração rápida
- [Guia de Configuração](docs/configuration.md) - Todas as opções de configuração
- [Visão Geral da Arquitetura](docs/architecture.md) - Design e estrutura do sistema

## Roadmap

O Aden Agent Framework visa ajudar desenvolvedores a construir agentes auto-adaptáveis e orientados a resultados. Encontre nosso roadmap aqui:

[ROADMAP.md](ROADMAP.md)

## Comunidade e Suporte

Usamos o [Discord](https://discord.com/invite/MXE49hrKDk) para suporte, solicitações de funcionalidades e discussões da comunidade.

- Discord - [Junte-se à nossa comunidade](https://discord.com/invite/MXE49hrKDk)
- Twitter/X - [@adenhq](https://x.com/aden_hq)
- LinkedIn - [Página da Empresa](https://www.linkedin.com/company/teamaden/)

## Contribuindo

Aceitamos contribuições! Por favor, consulte o [CONTRIBUTING.md](CONTRIBUTING.md) para as diretrizes.

1. Faça fork do repositório
2. Crie sua branch de funcionalidade (`git checkout -b feature/amazing-feature`)
3. Faça commit das suas alterações (`git commit -m 'Add amazing feature'`)
4. Faça push para a branch (`git push origin feature/amazing-feature`)
5. Abra um Pull Request

## Junte-se ao Nosso Time

**Estamos contratando!** Junte-se a nós em funções de engenharia, pesquisa e go-to-market.

[Ver Posições Abertas](https://jobs.adenhq.com/a8cec478-cdbc-473c-bbd4-f4b7027ec193/applicant)

## Segurança

Para questões de segurança, por favor consulte o [SECURITY.md](SECURITY.md).

## Licença

Este projeto está licenciado sob a Licença Apache 2.0 - veja o arquivo [LICENSE](LICENSE) para detalhes.

## Perguntas Frequentes (FAQ)

**P: O Aden depende do LangChain ou de outros frameworks de agentes?**

Não. O Aden é construído do zero, sem dependências do LangChain, do CrewAI ou de outros frameworks de agentes. O framework foi projetado para ser leve e flexível, gerando grafos de agentes dinamicamente em vez de depender de componentes predefinidos.

**P: Quais provedores de LLM o Aden suporta?**

O Aden suporta OpenAI (GPT-4, GPT-4o), Anthropic (modelos Claude) e Google Gemini prontos para uso. A arquitetura é agnóstica de provedor por meio da abstração do SDK, com integração com o LiteLLM no roadmap para suporte ampliado de modelos.

**P: O Aden é open-source?**

Sim, o Aden é totalmente open-source sob a Licença Apache 2.0. Incentivamos ativamente contribuições e a colaboração da comunidade.

**P: Quais opções de implantação o Aden suporta?**

O Aden suporta implantação via Docker Compose pronta para uso, com configurações de produção e de desenvolvimento. Implantações auto-hospedadas funcionam em qualquer infraestrutura que suporte Docker. Opções de implantação em nuvem e configurações prontas para Kubernetes estão no roadmap.

**P: O Aden pode lidar com casos de uso complexos em escala de produção?**

Sim. O Aden é explicitamente projetado para ambientes de produção, com recursos como recuperação automática de falhas, observabilidade em tempo real, controles de custo e suporte a escalonamento horizontal. O framework lida tanto com automações simples quanto com fluxos de trabalho complexos multiagente.

**P: O Aden suporta fluxos de trabalho com humano no loop?**

Sim, o Aden suporta totalmente fluxos de trabalho com humano no loop por meio de nós de intervenção que pausam a execução para entrada humana. Esses nós incluem timeouts e políticas de escalonamento configuráveis, permitindo uma colaboração fluida entre especialistas humanos e agentes de IA.

**P: Como posso contribuir para o Aden?**

Contribuições são bem-vindas! Faça fork do repositório, crie sua branch de funcionalidade, implemente suas alterações e envie um pull request. Consulte o [CONTRIBUTING.md](CONTRIBUTING.md) para diretrizes detalhadas.

---

<p align="center">
  Feito com 🔥 paixão em San Francisco
</p>
@@ -0,0 +1,243 @@
<p align="center">
  <img width="100%" alt="Hive Banner" src="https://storage.googleapis.com/aden-prod-assets/website/aden-title-card.png" />
</p>

<p align="center">
  <a href="README.md">English</a> |
  <a href="README.zh-CN.md">简体中文</a> |
  <a href="README.es.md">Español</a> |
  <a href="README.pt.md">Português</a> |
  <a href="README.ja.md">日本語</a> |
  <a href="README.ru.md">Русский</a>
</p>

[](https://github.com/adenhq/hive/blob/main/LICENSE)
[](https://www.ycombinator.com/companies/aden)
[](https://hub.docker.com/u/adenhq)
[](https://discord.com/invite/MXE49hrKDk)
[](https://x.com/aden_hq)
[](https://www.linkedin.com/company/teamaden/)

<p align="center">
  <img src="https://img.shields.io/badge/AI_Agents-Self--Improving-brightgreen?style=flat-square" alt="AI Agents" />
  <img src="https://img.shields.io/badge/Multi--Agent-Systems-blue?style=flat-square" alt="Multi-Agent" />
  <img src="https://img.shields.io/badge/Goal--Driven-Development-purple?style=flat-square" alt="Goal-Driven" />
  <img src="https://img.shields.io/badge/Human--in--the--Loop-orange?style=flat-square" alt="HITL" />
  <img src="https://img.shields.io/badge/Production--Ready-red?style=flat-square" alt="Production" />
</p>
<p align="center">
  <img src="https://img.shields.io/badge/OpenAI-supported-412991?style=flat-square&logo=openai" alt="OpenAI" />
  <img src="https://img.shields.io/badge/Anthropic-supported-d4a574?style=flat-square" alt="Anthropic" />
  <img src="https://img.shields.io/badge/Google_Gemini-supported-4285F4?style=flat-square&logo=google" alt="Gemini" />
  <img src="https://img.shields.io/badge/MCP-19_Tools-00ADD8?style=flat-square" alt="MCP" />
</p>

## Обзор

Создавайте надёжных, самосовершенствующихся ИИ-агентов без жёсткого кодирования рабочих процессов. Определите цель в диалоге с кодирующим агентом, и фреймворк сгенерирует граф узлов с динамически созданным кодом соединений. Когда что-то ломается, фреймворк собирает данные о сбое, эволюционирует агента с помощью кодирующего агента и переразвёртывает его. Встроенные узлы человеческого вмешательства, управление учётными данными и мониторинг в реальном времени дают вам контроль без потери адаптивности.

Посетите [adenhq.com](https://adenhq.com) для полной документации, примеров и руководств.

## Что такое Aden

<p align="center">
  <img width="100%" alt="Aden Architecture" src="docs/assets/aden-architecture-diagram.jpg" />
</p>

Aden — это платформа для создания, развёртывания, эксплуатации и адаптации ИИ-агентов:

- **Создание** - Кодирующий агент генерирует специализированных рабочих агентов (продажи, маркетинг, операции) из целей на естественном языке
- **Развёртывание** - Headless-развёртывание с интеграцией CI/CD и полным управлением жизненным циклом через API
- **Эксплуатация** - Мониторинг в реальном времени, наблюдаемость и защитные барьеры времени выполнения обеспечивают надёжность агентов
- **Адаптация** - Непрерывная оценка, контроль и адаптация гарантируют, что агенты улучшаются со временем
- **Инфраструктура** - Общая память, интеграции LLM, инструменты и навыки обеспечивают работу каждого агента

## Быстрые ссылки

- **[Документация](https://docs.adenhq.com/)** - Полные руководства и справочник API
- **[Руководство по самостоятельному хостингу](https://docs.adenhq.com/getting-started/quickstart)** - Разверните Hive в своей инфраструктуре
- **[История изменений](https://github.com/adenhq/hive/releases)** - Последние обновления и релизы
- **[Сообщить о проблеме](https://github.com/adenhq/hive/issues)** - Отчёты об ошибках и запросы функций

## Быстрый старт

### Предварительные требования

- [Docker](https://docs.docker.com/get-docker/) (v20.10+)
- [Docker Compose](https://docs.docker.com/compose/install/) (v2.0+)

### Установка

```bash
# Клонировать репозиторий
git clone https://github.com/adenhq/hive.git
cd hive

# Скопировать и настроить конфигурацию
cp config.yaml.example config.yaml

# Выполнить настройку и запустить сервисы
npm run setup
docker compose up
```

**Доступ к приложению:**

- Панель управления: http://localhost:3000
- API: http://localhost:4000
- Проверка состояния: http://localhost:4000/health

## Функции

- **Целеориентированная разработка** - Определяйте цели на естественном языке; кодирующий агент генерирует граф агентов и код соединений для их достижения
- **Самоадаптирующиеся агенты** - Фреймворк фиксирует сбои, обновляет цели и эволюционирует граф агентов
- **Динамические соединения узлов** - Без предопределённых рёбер; код соединений генерируется любым достаточно способным LLM на основе ваших целей
- **Узлы, обёрнутые SDK** - Каждый узел получает общую память, локальную RLM-память, мониторинг, инструменты и доступ к LLM из коробки
- **Человек в контуре** - Узлы вмешательства приостанавливают выполнение для ввода человека, с настраиваемыми таймаутами и эскалацией
- **Наблюдаемость в реальном времени** - Стриминг по WebSocket для живого мониторинга выполнения агентов, их решений и межузловой коммуникации
- **Контроль затрат и бюджета** - Устанавливайте лимиты расходов, троттлинг и политики автоматической деградации модели
- **Готовность к продакшену** - Самостоятельный хостинг, создан для масштабирования и надёжности
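Идею узла вмешательства с таймаутом можно набросать в несколько строк. Ниже приведён гипотетический эскиз (имена и структура являются нашими допущениями, это не реальный API Aden):

```python
# Гипотетический набросок узла вмешательства: ждём ввод человека
# с таймаутом; по истечении применяем политику эскалации.
# Это не реальный API Aden.
import queue

def intervention_node(inbox, timeout_s=0.2, on_timeout="escalate"):
    """Блокируется до ответа человека; по таймауту возвращает политику."""
    try:
        answer = inbox.get(timeout=timeout_s)
        return {"status": "approved", "answer": answer}
    except queue.Empty:
        return {"status": on_timeout, "answer": None}

# Человек успел ответить:
inbox = queue.Queue()
inbox.put("да, продолжай")
ok = intervention_node(inbox)

# Ответа нет, срабатывает эскалация:
timed_out = intervention_node(queue.Queue(), timeout_s=0.05)
```

В реальной системе таймаут и политика эскалации задавались бы в конфигурации узла, а не в коде.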

## Почему Aden

Традиционные фреймворки агентов требуют ручного проектирования рабочих процессов, определения взаимодействий агентов и реактивной обработки сбоев. Aden переворачивает эту парадигму: **вы описываете результаты, и система строит себя сама**.

### Преимущество Aden

| Традиционные фреймворки | Aden |
|-------------------------|------|
| Жёсткое кодирование рабочих процессов | Описание целей на естественном языке |
| Ручное определение графов | Автоматически генерируемые графы агентов |
| Реактивная обработка ошибок | Проактивная самоэволюция |
| Статические конфигурации инструментов | Динамические узлы, обёрнутые SDK |
| Отдельная настройка мониторинга | Встроенная наблюдаемость в реальном времени |
| Самодельное управление бюджетом | Интегрированный контроль затрат и деградация моделей |

### Как это работает

1. **Определите цель** → Опишите, чего хотите достичь, простым языком
2. **Кодирующий агент генерирует** → Создаёт граф агентов, код соединений и тестовые случаи
3. **Рабочие узлы выполняют** → Узлы, обёрнутые SDK, работают с полной наблюдаемостью и доступом к инструментам
4. **Плоскость управления мониторит** → Метрики в реальном времени, контроль бюджета, управление политиками
5. **Самосовершенствование** → При сбое система эволюционирует граф и автоматически переразвёртывает его

## Структура проекта

```
hive/
├── honeycomb/            # Фронтенд (React + TypeScript + Vite)
├── hive/                 # Бэкенд (Node.js + TypeScript + Express)
├── docs/                 # Документация
├── scripts/              # Скрипты сборки и утилиты
├── config.yaml.example   # Шаблон конфигурации
└── docker-compose.yml    # Оркестрация контейнеров
```

## Разработка

### Локальная разработка с горячей перезагрузкой

```bash
# Скопировать переопределения для разработки
cp docker-compose.override.yml.example docker-compose.override.yml

# Запустить с включённой горячей перезагрузкой
docker compose up
```

### Запуск без Docker

```bash
# Установить зависимости
npm install

# Сгенерировать файлы окружения
npm run generate:env

# Запустить фронтенд (в honeycomb/)
cd honeycomb && npm run dev

# Запустить бэкенд (в hive/)
cd hive && npm run dev
```

## Документация

- **[Руководство разработчика](DEVELOPER.md)** - Полное руководство для разработчиков
- [Начало работы](docs/getting-started.md) - Инструкции по быстрой настройке
- [Руководство по конфигурации](docs/configuration.md) - Все опции конфигурации
- [Обзор архитектуры](docs/architecture.md) - Дизайн и структура системы

## Дорожная карта

Aden Agent Framework призван помочь разработчикам создавать самоадаптирующихся агентов, ориентированных на результат. Наша дорожная карта находится здесь:

[ROADMAP.md](ROADMAP.md)

## Сообщество и поддержка

Мы используем [Discord](https://discord.com/invite/MXE49hrKDk) для поддержки, запросов функций и обсуждений сообщества.

- Discord - [Присоединяйтесь к нашему сообществу](https://discord.com/invite/MXE49hrKDk)
- Twitter/X - [@adenhq](https://x.com/aden_hq)
- LinkedIn - [Страница компании](https://www.linkedin.com/company/teamaden/)

## Участие в разработке

Мы приветствуем вклад! Пожалуйста, ознакомьтесь с рекомендациями в [CONTRIBUTING.md](CONTRIBUTING.md).

1. Сделайте форк репозитория
2. Создайте ветку функциональности (`git checkout -b feature/amazing-feature`)
3. Зафиксируйте изменения (`git commit -m 'Add amazing feature'`)
4. Отправьте ветку (`git push origin feature/amazing-feature`)
5. Откройте Pull Request

## Присоединяйтесь к команде

**Мы нанимаем!** Присоединяйтесь к нам на позициях в инженерии, исследованиях и go-to-market.

[Посмотреть открытые позиции](https://jobs.adenhq.com/a8cec478-cdbc-473c-bbd4-f4b7027ec193/applicant)

## Безопасность

По вопросам безопасности, пожалуйста, обратитесь к [SECURITY.md](SECURITY.md).

## Лицензия

Этот проект распространяется под лицензией Apache 2.0; подробности см. в файле [LICENSE](LICENSE).

## Часто задаваемые вопросы (FAQ)

**В: Зависит ли Aden от LangChain или других фреймворков агентов?**

Нет. Aden построен с нуля, без зависимостей от LangChain, CrewAI или других фреймворков агентов. Фреймворк спроектирован лёгким и гибким: он динамически генерирует графы агентов, а не полагается на предопределённые компоненты.

**В: Каких провайдеров LLM поддерживает Aden?**

Aden из коробки поддерживает OpenAI (GPT-4, GPT-4o), Anthropic (модели Claude) и Google Gemini. Благодаря абстракции SDK архитектура не зависит от провайдера; интеграция LiteLLM для расширенной поддержки моделей находится в дорожной карте.

**В: Aden с открытым исходным кодом?**

Да, Aden полностью открыт под лицензией Apache 2.0. Мы активно поощряем вклад сообщества и сотрудничество.

**В: Какие варианты развёртывания поддерживает Aden?**

Aden из коробки поддерживает развёртывание через Docker Compose, с конфигурациями для продакшена и разработки. Самостоятельное развёртывание работает на любой инфраструктуре с поддержкой Docker. Варианты облачного развёртывания и конфигурации для Kubernetes находятся в дорожной карте.

**В: Справится ли Aden со сложными сценариями продакшен-масштаба?**

Да. Aden специально спроектирован для продакшен-сред: автоматическое восстановление после сбоев, наблюдаемость в реальном времени, контроль затрат и поддержка горизонтального масштабирования. Фреймворк справляется как с простой автоматизацией, так и со сложными многоагентными рабочими процессами.

**В: Поддерживает ли Aden рабочие процессы с человеком в контуре?**

Да, Aden полностью поддерживает такие процессы через узлы вмешательства, которые приостанавливают выполнение для ввода человека. Они включают настраиваемые таймауты и политики эскалации, обеспечивая слаженное взаимодействие экспертов-людей и ИИ-агентов.

**В: Как я могу внести вклад в Aden?**

Вклад приветствуется! Сделайте форк репозитория, создайте ветку функциональности, внесите изменения и отправьте pull request. Подробные рекомендации см. в [CONTRIBUTING.md](CONTRIBUTING.md).

---

<p align="center">
  Сделано с 🔥 страстью в Сан-Франциско
</p>
@@ -0,0 +1,243 @@
<p align="center">
  <img width="100%" alt="Hive Banner" src="https://storage.googleapis.com/aden-prod-assets/website/aden-title-card.png" />
</p>

<p align="center">
  <a href="README.md">English</a> |
  <a href="README.zh-CN.md">简体中文</a> |
  <a href="README.es.md">Español</a> |
  <a href="README.pt.md">Português</a> |
  <a href="README.ja.md">日本語</a> |
  <a href="README.ru.md">Русский</a>
</p>

[](https://github.com/adenhq/hive/blob/main/LICENSE)
[](https://www.ycombinator.com/companies/aden)
[](https://hub.docker.com/u/adenhq)
[](https://discord.com/invite/MXE49hrKDk)
[](https://x.com/aden_hq)
[](https://www.linkedin.com/company/teamaden/)

<p align="center">
  <img src="https://img.shields.io/badge/AI_Agents-Self--Improving-brightgreen?style=flat-square" alt="AI Agents" />
  <img src="https://img.shields.io/badge/Multi--Agent-Systems-blue?style=flat-square" alt="Multi-Agent" />
  <img src="https://img.shields.io/badge/Goal--Driven-Development-purple?style=flat-square" alt="Goal-Driven" />
  <img src="https://img.shields.io/badge/Human--in--the--Loop-orange?style=flat-square" alt="HITL" />
  <img src="https://img.shields.io/badge/Production--Ready-red?style=flat-square" alt="Production" />
</p>
<p align="center">
  <img src="https://img.shields.io/badge/OpenAI-supported-412991?style=flat-square&logo=openai" alt="OpenAI" />
  <img src="https://img.shields.io/badge/Anthropic-supported-d4a574?style=flat-square" alt="Anthropic" />
  <img src="https://img.shields.io/badge/Google_Gemini-supported-4285F4?style=flat-square&logo=google" alt="Gemini" />
  <img src="https://img.shields.io/badge/MCP-19_Tools-00ADD8?style=flat-square" alt="MCP" />
</p>

## 概述

构建可靠的、自我改进的 AI 智能体,无需硬编码工作流。通过与编码智能体对话来定义目标,框架会生成带有动态创建的连接代码的节点图。当出现问题时,框架会捕获故障数据,借助编码智能体进化该智能体,并重新部署。内置的人机协作节点、凭证管理和实时监控让您在保持适应性的同时拥有完全的控制权。

访问 [adenhq.com](https://adenhq.com) 获取完整文档、示例和指南。

## 什么是 Aden

<p align="center">
  <img width="100%" alt="Aden Architecture" src="docs/assets/aden-architecture-diagram.jpg" />
</p>

Aden 是一个用于构建、部署、运营和调适 AI 智能体的平台:

- **构建** - 编码智能体根据自然语言目标生成专门的工作智能体(销售、营销、运营)
- **部署** - 无头部署,支持 CI/CD 集成和完整的 API 生命周期管理
- **运营** - 实时监控、可观测性和运行时护栏确保智能体可靠运行
- **适应** - 持续评估、监督和调整确保智能体随时间不断改进
- **基础设施** - 共享内存、LLM 集成、工具和技能为每个智能体提供支持

## 快速链接

- **[文档](https://docs.adenhq.com/)** - 完整指南和 API 参考
- **[自托管指南](https://docs.adenhq.com/getting-started/quickstart)** - 在您的基础设施上部署 Hive
- **[更新日志](https://github.com/adenhq/hive/releases)** - 最新更新和版本
- **[报告问题](https://github.com/adenhq/hive/issues)** - Bug 报告和功能请求

## 快速开始

### 前置要求

- [Docker](https://docs.docker.com/get-docker/) (v20.10+)
- [Docker Compose](https://docs.docker.com/compose/install/) (v2.0+)

### 安装

```bash
# 克隆仓库
git clone https://github.com/adenhq/hive.git
cd hive

# 复制并配置
cp config.yaml.example config.yaml

# 运行设置并启动服务
npm run setup
docker compose up
```

**访问应用:**

- 仪表板:http://localhost:3000
- API:http://localhost:4000
- 健康检查:http://localhost:4000/health

## 功能特性

- **目标驱动开发** - 用自然语言定义目标;编码智能体生成智能体图和连接代码来实现它们
- **自适应智能体** - 框架捕获故障,更新目标并进化智能体图
- **动态节点连接** - 没有预定义的边;连接代码由任何有能力的 LLM 根据您的目标生成
- **SDK 封装节点** - 每个节点开箱即用地获得共享内存、本地 RLM 内存、监控、工具和 LLM 访问
- **人机协作** - 干预节点暂停执行以等待人工输入,支持可配置的超时和升级
- **实时可观测性** - 通过 WebSocket 流式传输,实时监控智能体执行、决策和节点间通信
- **成本与预算控制** - 设置支出限制、节流和自动模型降级策略
- **生产就绪** - 可自托管,为规模和可靠性而构建
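上面的"成本与预算控制"可以用几行代码勾勒出来。以下是一个假设性示意(类名和阈值均为我们的假设,并非 Aden 的真实 API):当支出接近预算上限时,自动降级到更便宜的模型。

```python
# 假设性示意:简单的预算控制器,接近预算上限时自动降级模型。
# 并非 Aden 的真实 API。

class BudgetController:
    def __init__(self, limit_usd, degrade_at=0.8):
        self.limit = limit_usd
        self.degrade_at = degrade_at  # 达到预算的 80% 时开始降级
        self.spent = 0.0

    def record(self, cost_usd):
        # 记录一次调用的花费
        self.spent += cost_usd

    def pick_model(self):
        if self.spent >= self.limit:
            raise RuntimeError("budget exhausted")
        if self.spent >= self.limit * self.degrade_at:
            return "small-model"  # 降级:更便宜的模型
        return "large-model"      # 正常:默认模型

ctrl = BudgetController(limit_usd=10.0)
first = ctrl.pick_model()   # 预算充足,使用默认模型
ctrl.record(9.0)
second = ctrl.pick_model()  # 已花费 90%,自动降级
```

真实系统中,限额与降级策略应来自配置,而不是硬编码在调用方。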

## 为什么选择 Aden

传统智能体框架要求您手动设计工作流、定义智能体交互并被动处理故障。Aden 颠覆了这一范式:**您描述结果,系统自动构建自己**。

### Aden 的优势

| 传统框架 | Aden |
|----------|------|
| 硬编码智能体工作流 | 用自然语言描述目标 |
| 手动定义图 | 自动生成智能体图 |
| 被动错误处理 | 主动自我进化 |
| 静态工具配置 | 动态 SDK 封装节点 |
| 单独设置监控 | 内置实时可观测性 |
| 自行搭建预算管理 | 集成的成本控制和降级 |

### 工作原理

1. **定义目标** → 用自然语言描述您想要实现的目标
2. **编码智能体生成** → 创建智能体图、连接代码和测试用例
3. **工作节点执行** → SDK 封装节点在完全可观测、可访问工具的情况下运行
4. **控制平面监控** → 实时指标、预算执行、策略管理
5. **自我改进** → 失败时,系统进化图并自动重新部署
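第 5 步的"自我改进"循环可以这样勾勒。以下是一个假设性示意(函数名和重试逻辑均为我们的假设,并非 Aden 的真实实现):失败时捕获异常,把失败数据交给"进化"函数生成新图,然后重试。

```python
# 假设性示意:失败 -> 进化图 -> 重新部署(重试)的循环。
# 并非 Aden 的真实实现。

def run_with_self_improvement(graph, evolve, max_rounds=3):
    """执行图;失败时用 evolve 根据错误生成新图并重试。"""
    for attempt in range(1, max_rounds + 1):
        try:
            return {"result": graph(), "attempts": attempt}
        except Exception as exc:
            graph = evolve(graph, exc)  # 编码智能体根据失败数据更新图
    raise RuntimeError("all rounds failed")

calls = {"n": 0}

def flaky_graph():
    # 模拟第一次运行失败、之后成功的图
    calls["n"] += 1
    if calls["n"] == 1:
        raise ValueError("missing connection")
    return "ok"

def evolve(old_graph, error):
    # 这里简单地返回原图;真实系统会重新生成连接代码
    return old_graph

outcome = run_with_self_improvement(flaky_graph, evolve)
```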

## 项目结构

```
hive/
├── honeycomb/            # 前端 (React + TypeScript + Vite)
├── hive/                 # 后端 (Node.js + TypeScript + Express)
├── docs/                 # 文档
├── scripts/              # 构建和实用脚本
├── config.yaml.example   # 配置模板
└── docker-compose.yml    # 容器编排
```

## 开发

### 带热重载的本地开发

```bash
# 复制开发覆盖配置
cp docker-compose.override.yml.example docker-compose.override.yml

# 启用热重载并启动
docker compose up
```

### 不使用 Docker 运行

```bash
# 安装依赖
npm install

# 生成环境文件
npm run generate:env

# 启动前端(在 honeycomb/ 目录)
cd honeycomb && npm run dev

# 启动后端(在 hive/ 目录)
cd hive && npm run dev
```

## 文档

- **[开发者指南](DEVELOPER.md)** - 面向开发者的综合指南
- [入门指南](docs/getting-started.md) - 快速设置说明
- [配置指南](docs/configuration.md) - 所有配置选项
- [架构概述](docs/architecture.md) - 系统设计和结构

## 路线图

Aden 智能体框架旨在帮助开发者构建面向结果的自适应智能体。请在此查看我们的路线图:

[ROADMAP.md](ROADMAP.md)

## 社区与支持

我们使用 [Discord](https://discord.com/invite/MXE49hrKDk) 进行支持、功能请求和社区讨论。

- Discord - [加入我们的社区](https://discord.com/invite/MXE49hrKDk)
- Twitter/X - [@adenhq](https://x.com/aden_hq)
- LinkedIn - [公司主页](https://www.linkedin.com/company/teamaden/)

## 贡献

我们欢迎贡献!请参阅 [CONTRIBUTING.md](CONTRIBUTING.md) 了解指南。

1. Fork 仓库
2. 创建功能分支 (`git checkout -b feature/amazing-feature`)
3. 提交更改 (`git commit -m 'Add amazing feature'`)
4. 推送到分支 (`git push origin feature/amazing-feature`)
5. 创建 Pull Request

## 加入我们的团队

**我们正在招聘!** 加入我们的工程、研究和市场推广团队。

[查看开放职位](https://jobs.adenhq.com/a8cec478-cdbc-473c-bbd4-f4b7027ec193/applicant)

## 安全

有关安全问题,请参阅 [SECURITY.md](SECURITY.md)。

## 许可证

本项目采用 Apache License 2.0 许可证,详情请参阅 [LICENSE](LICENSE) 文件。

## 常见问题 (FAQ)

**问:Aden 是否依赖 LangChain 或其他智能体框架?**

不。Aden 从零开始构建,不依赖 LangChain、CrewAI 或其他智能体框架。该框架设计精简灵活,动态生成智能体图而非依赖预定义组件。

**问:Aden 支持哪些 LLM 提供商?**

Aden 开箱即用支持 OpenAI(GPT-4、GPT-4o)、Anthropic(Claude 模型)和 Google Gemini。架构通过 SDK 抽象实现与提供商无关,LiteLLM 集成已列入路线图,用于扩展模型支持。

**问:Aden 是开源的吗?**

是的,Aden 在 Apache License 2.0 下完全开源。我们积极鼓励社区贡献和协作。

**问:Aden 支持哪些部署选项?**

Aden 开箱即用支持 Docker Compose 部署,包括生产和开发配置。自托管部署可在任何支持 Docker 的基础设施上运行。云部署选项和面向 Kubernetes 的配置已列入路线图。

**问:Aden 能处理复杂的生产级用例吗?**

可以。Aden 专为生产环境设计,具有自动故障恢复、实时可观测性、成本控制和水平扩展支持等功能。该框架既能处理简单自动化,也能处理复杂的多智能体工作流。

**问:Aden 支持人机协作工作流吗?**

是的,Aden 通过干预节点完全支持人机协作工作流,这些节点会暂停执行以等待人工输入。它们包括可配置的超时和升级策略,实现人类专家与 AI 智能体的无缝协作。

**问:如何为 Aden 做贡献?**

欢迎贡献!Fork 仓库,创建功能分支,实现更改,然后提交 pull request。详细指南请参阅 [CONTRIBUTING.md](CONTRIBUTING.md)。

---

<p align="center">
  用 🔥 热情打造于旧金山
</p>
@@ -19,8 +19,6 @@ if TYPE_CHECKING:

# Import register_tools from each tool module
from .example_tool import register_tools as register_example
from .file_read_tool import register_tools as register_file_read
from .file_write_tool import register_tools as register_file_write
from .web_search_tool import register_tools as register_web_search
from .web_scrape_tool import register_tools as register_web_scrape
from .pdf_read_tool import register_tools as register_pdf_read
@@ -53,8 +51,7 @@ def register_all_tools(

    """
    # Tools that don't need credentials
    register_example(mcp)
    register_file_read(mcp)
    register_file_write(mcp)
    register_web_search(mcp)
    register_web_scrape(mcp)
    register_pdf_read(mcp)

@@ -73,8 +70,6 @@ def register_all_tools(

    return [
        "example_tool",
        "file_read",
        "file_write",
        "web_search",
        "web_scrape",
        "pdf_read",

@@ -1,28 +0,0 @@

# File Read Tool

Read contents of local files with encoding support.

## Description

Use for reading configs, data files, source code, logs, or any text file. Returns file content along with path, name, size, and encoding metadata.

## Arguments

| Argument | Type | Required | Default | Description |
|----------|------|----------|---------|-------------|
| `file_path` | str | Yes | - | Path to the file to read (absolute or relative) |
| `encoding` | str | No | `utf-8` | File encoding (utf-8, latin-1, etc.) |
| `max_size` | int | No | `10000000` | Maximum file size to read in bytes (default 10MB) |

## Environment Variables

This tool does not require any environment variables.

## Error Handling

Returns error dicts for common issues:
- `File not found: <path>` - File does not exist
- `Not a file: <path>` - Path points to a directory
- `File too large: <size> bytes (max: <max_size>)` - File exceeds max_size limit
- `Failed to decode file with encoding '<encoding>'` - Wrong encoding specified
- `Permission denied: <path>` - No read access to file
@@ -1,4 +0,0 @@

"""File Read Tool - Read contents of local files."""
from .file_read_tool import register_tools

__all__ = ["register_tools"]
@@ -1,75 +0,0 @@

"""
File Read Tool - Read contents of local files.

Supports reading text files with various encodings.
Returns file content along with metadata.
"""
from __future__ import annotations

from pathlib import Path

from fastmcp import FastMCP


def register_tools(mcp: FastMCP) -> None:
    """Register file read tools with the MCP server."""

    @mcp.tool()
    def file_read(
        file_path: str,
        encoding: str = "utf-8",
        max_size: int = 10_000_000,
    ) -> dict:
        """
        Read the contents of a local file.

        Use for reading configs, data files, source code, logs, or any text file.
        Returns file content along with path, name, size, and encoding.

        Args:
            file_path: Path to the file to read (absolute or relative)
            encoding: File encoding (utf-8, latin-1, etc.)
            max_size: Maximum file size to read in bytes (default 10MB)

        Returns:
            Dict with file content and metadata, or error dict
        """
        try:
            path = Path(file_path).resolve()

            # Check if file exists
            if not path.exists():
                return {"error": f"File not found: {file_path}"}

            # Check if it's a file (not directory)
            if not path.is_file():
                return {"error": f"Not a file: {file_path}"}

            # Check file size
            file_size = path.stat().st_size
            if max_size > 0 and file_size > max_size:
                return {
                    "error": f"File too large: {file_size} bytes (max: {max_size})",
                    "file_size": file_size,
                }

            # Read the file
            content = path.read_text(encoding=encoding)

            return {
                "path": str(path),
                "name": path.name,
                "content": content,
                "size": len(content),
                "encoding": encoding,
            }

        except UnicodeDecodeError as e:
            return {
                "error": f"Failed to decode file with encoding '{encoding}': {str(e)}",
                "suggestion": "Try a different encoding like 'latin-1' or 'cp1252'",
            }
        except PermissionError:
            return {"error": f"Permission denied: {file_path}"}
        except Exception as e:
            return {"error": f"Failed to read file: {str(e)}"}
@@ -1,6 +1,7 @@
import os

WORKSPACES_DIR = os.path.abspath(os.path.join(os.getcwd(), "workdir/workspaces"))
# Use user home directory for workspaces
WORKSPACES_DIR = os.path.expanduser("~/.hive/workdir/workspaces")

def get_secure_path(path: str, workspace_id: str, agent_id: str, session_id: str) -> str:
    """Resolve and verify a path within a 3-layer sandbox (workspace/agent/session)."""
@@ -1,29 +0,0 @@

# File Write Tool

Write content to local files with encoding support.

## Description

Can create new files or overwrite/append to existing ones. Use for saving data, creating configs, writing reports, or exporting results. Optionally creates parent directories if they don't exist.

## Arguments

| Argument | Type | Required | Default | Description |
|----------|------|----------|---------|-------------|
| `file_path` | str | Yes | - | Path to the file to write (absolute or relative) |
| `content` | str | Yes | - | Content to write to the file |
| `encoding` | str | No | `utf-8` | File encoding (utf-8, latin-1, etc.) |
| `mode` | str | No | `write` | Write mode - 'write' (overwrite) or 'append' |
| `create_dirs` | bool | No | `True` | Create parent directories if they don't exist |

## Environment Variables

This tool does not require any environment variables.

## Error Handling

Returns error dicts for common issues:
- `Parent directory does not exist: <path>` - Parent dir missing and create_dirs=False
- `Invalid mode: <mode>. Use 'write' or 'append'.` - Invalid mode specified
- `Permission denied: <path>` - No write access to file/directory
- `OS error writing file: <error>` - Filesystem error
@@ -1,4 +0,0 @@
"""File Write Tool - Create or update local files."""
from .file_write_tool import register_tools

__all__ = ["register_tools"]
@@ -1,83 +0,0 @@
"""
File Write Tool - Create or update local files.

Supports writing text files with various encodings.
Can create directories if they don't exist.
"""
from __future__ import annotations

from pathlib import Path

from fastmcp import FastMCP


def register_tools(mcp: FastMCP) -> None:
    """Register file write tools with the MCP server."""

    @mcp.tool()
    def file_write(
        file_path: str,
        content: str,
        encoding: str = "utf-8",
        mode: str = "write",
        create_dirs: bool = True,
    ) -> dict:
        """
        Write content to a local file.

        Can create new files or overwrite/append to existing ones.
        Use for saving data, creating configs, writing reports, or exporting results.

        Args:
            file_path: Path to the file to write (absolute or relative)
            content: Content to write to the file
            encoding: File encoding (utf-8, latin-1, etc.)
            mode: Write mode - 'write' (overwrite) or 'append'
            create_dirs: Create parent directories if they don't exist

        Returns:
            Dict with write result or error dict
        """
        try:
            path = Path(file_path).resolve()

            # Create parent directories if requested
            if create_dirs:
                path.parent.mkdir(parents=True, exist_ok=True)
            elif not path.parent.exists():
                return {"error": f"Parent directory does not exist: {path.parent}"}

            # Determine write mode
            if mode == "append":
                write_mode = "a"
            elif mode == "write":
                write_mode = "w"
            else:
                return {"error": f"Invalid mode: {mode}. Use 'write' or 'append'."}

            # Check if we're overwriting
            existed = path.exists()
            previous_size = path.stat().st_size if existed else 0

            # Write the file
            with open(path, write_mode, encoding=encoding) as f:
                f.write(content)

            new_size = path.stat().st_size

            return {
                "path": str(path),
                "name": path.name,
                "bytes_written": len(content.encode(encoding)),
                "total_size": new_size,
                "mode": mode,
                "created": not existed,
                "previous_size": previous_size if existed else None,
            }

        except PermissionError:
            return {"error": f"Permission denied: {file_path}"}
        except OSError as e:
            return {"error": f"OS error writing file: {str(e)}"}
        except Exception as e:
            return {"error": f"Failed to write file: {str(e)}"}
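To make the write/append contract of the removed tool concrete, here is a small standalone driver exercising the same semantics (plain pathlib on a scratch file, not the registered MCP tool):

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    target = Path(tmp) / "reports" / "summary.txt"
    # create_dirs=True behaviour: make missing parents first
    target.parent.mkdir(parents=True, exist_ok=True)

    # mode="write" overwrites the file
    with open(target, "w", encoding="utf-8") as f:
        f.write("first draft\n")
    # mode="append" adds to the end
    with open(target, "a", encoding="utf-8") as f:
        f.write("second line\n")

    text = target.read_text(encoding="utf-8")
```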
@@ -1,96 +0,0 @@
"""Tests for file_read tool (FastMCP)."""
import pytest
from pathlib import Path

from fastmcp import FastMCP
from aden_tools.tools.file_read_tool import register_tools


@pytest.fixture
def file_read_fn(mcp: FastMCP):
    """Register and return the file_read tool function."""
    register_tools(mcp)
    # Access the registered tool's function directly
    return mcp._tool_manager._tools["file_read"].fn


class TestFileReadTool:
    """Tests for file_read tool."""

    def test_read_existing_file(self, file_read_fn, sample_text_file: Path):
        """Reading an existing file returns content and metadata."""
        result = file_read_fn(file_path=str(sample_text_file))

        assert "error" not in result
        assert result["content"] == "Hello, World!\nLine 2\nLine 3"
        assert result["name"] == "test.txt"
        assert result["encoding"] == "utf-8"
        assert "size" in result

    def test_read_file_not_found(self, file_read_fn, tmp_path: Path):
        """Reading a non-existent file returns an error dict."""
        missing_file = tmp_path / "does_not_exist.txt"

        result = file_read_fn(file_path=str(missing_file))

        assert "error" in result
        assert "not found" in result["error"].lower()

    def test_read_directory_returns_error(self, file_read_fn, tmp_path: Path):
        """Reading a directory (not a file) returns an error."""
        result = file_read_fn(file_path=str(tmp_path))

        assert "error" in result
        assert "not a file" in result["error"].lower()

    def test_read_file_too_large(self, file_read_fn, tmp_path: Path):
        """Reading a file exceeding max_size returns an error."""
        large_file = tmp_path / "large.txt"
        large_file.write_text("x" * 1000)

        result = file_read_fn(file_path=str(large_file), max_size=100)

        assert "error" in result
        assert "too large" in result["error"].lower()
        assert "file_size" in result

    def test_read_with_no_size_limit(self, file_read_fn, tmp_path: Path):
        """Reading with max_size=0 allows any file size."""
        large_file = tmp_path / "large.txt"
        content = "x" * 100_000
        large_file.write_text(content)

        # max_size=0 means no limit in the implementation
        result = file_read_fn(file_path=str(large_file), max_size=0)

        assert "error" not in result
        assert result["content"] == content

    def test_read_with_different_encoding(self, file_read_fn, tmp_path: Path):
        """Reading with a specific encoding works."""
        latin_file = tmp_path / "latin.txt"
        # Write bytes directly with latin-1 encoding
        latin_file.write_bytes("café".encode("latin-1"))

        result = file_read_fn(file_path=str(latin_file), encoding="latin-1")

        assert "error" not in result
        assert result["content"] == "café"
        assert result["encoding"] == "latin-1"

    def test_read_with_wrong_encoding_returns_error(self, file_read_fn, tmp_path: Path):
        """Reading with wrong encoding returns helpful error."""
        # Create a file with bytes that aren't valid UTF-8
        binary_file = tmp_path / "binary.txt"
        binary_file.write_bytes(b"\xff\xfe")

        result = file_read_fn(file_path=str(binary_file), encoding="utf-8")

        assert "error" in result
        assert "suggestion" in result

    def test_returns_absolute_path(self, file_read_fn, sample_text_file: Path):
        """Result includes the absolute path."""
        result = file_read_fn(file_path=str(sample_text_file))

        assert result["path"] == str(sample_text_file.resolve())
@@ -1,99 +0,0 @@
"""Tests for file_write tool (FastMCP)."""
import pytest
from pathlib import Path

from fastmcp import FastMCP
from aden_tools.tools.file_write_tool import register_tools


@pytest.fixture
def file_write_fn(mcp: FastMCP):
    """Register and return the file_write tool function."""
    register_tools(mcp)
    return mcp._tool_manager._tools["file_write"].fn


class TestFileWriteTool:
    """Tests for file_write tool."""

    def test_write_creates_new_file(self, file_write_fn, tmp_path: Path):
        """Writing to a new file creates it with content."""
        new_file = tmp_path / "new.txt"

        result = file_write_fn(file_path=str(new_file), content="Hello, World!")

        assert "error" not in result
        assert result["created"] is True
        assert result["name"] == "new.txt"
        assert new_file.read_text() == "Hello, World!"

    def test_write_overwrites_existing(self, file_write_fn, tmp_path: Path):
        """Writing to existing file overwrites by default."""
        existing = tmp_path / "existing.txt"
        existing.write_text("old content")

        result = file_write_fn(file_path=str(existing), content="new content")

        assert "error" not in result
        assert result["created"] is False
        assert result["previous_size"] is not None
        assert existing.read_text() == "new content"

    def test_write_appends_to_existing(self, file_write_fn, tmp_path: Path):
        """Writing with mode='append' adds to existing content."""
        existing = tmp_path / "existing.txt"
        existing.write_text("line1\n")

        result = file_write_fn(file_path=str(existing), content="line2\n", mode="append")

        assert "error" not in result
        assert result["mode"] == "append"
        assert existing.read_text() == "line1\nline2\n"

    def test_write_creates_parent_dirs(self, file_write_fn, tmp_path: Path):
        """Writing with create_dirs=True creates missing directories."""
        deep_path = tmp_path / "nested" / "dirs" / "file.txt"

        result = file_write_fn(file_path=str(deep_path), content="content", create_dirs=True)

        assert "error" not in result
        assert deep_path.exists()
        assert deep_path.read_text() == "content"

    def test_write_fails_without_parent_dir(self, file_write_fn, tmp_path: Path):
        """Writing with create_dirs=False fails if parent doesn't exist."""
        missing_dir = tmp_path / "missing" / "file.txt"

        result = file_write_fn(file_path=str(missing_dir), content="content", create_dirs=False)

        assert "error" in result
        assert "parent directory" in result["error"].lower()

    def test_write_invalid_mode(self, file_write_fn, tmp_path: Path):
        """Writing with invalid mode returns error."""
        result = file_write_fn(
            file_path=str(tmp_path / "test.txt"),
            content="content",
            mode="invalid",
        )

        assert "error" in result
        assert "invalid mode" in result["error"].lower()

    def test_write_returns_bytes_written(self, file_write_fn, tmp_path: Path):
        """Result includes accurate bytes_written count."""
        content = "Hello, World!"

        result = file_write_fn(file_path=str(tmp_path / "test.txt"), content=content)

        assert result["bytes_written"] == len(content.encode("utf-8"))

    def test_write_with_encoding(self, file_write_fn, tmp_path: Path):
        """Writing with specific encoding works."""
        file_path = tmp_path / "latin.txt"

        result = file_write_fn(file_path=str(file_path), content="café", encoding="latin-1")

        assert "error" not in result
        # Verify it was written with latin-1 encoding
        assert file_path.read_bytes() == "café".encode("latin-1")
@@ -2,5 +2,6 @@

from framework.llm.provider import LLMProvider, LLMResponse
from framework.llm.anthropic import AnthropicProvider
from framework.llm.litellm import LiteLLMProvider

__all__ = ["LLMProvider", "LLMResponse", "AnthropicProvider"]
__all__ = ["LLMProvider", "LLMResponse", "AnthropicProvider", "LiteLLMProvider"]

@@ -1,18 +1,18 @@
"""Anthropic Claude LLM provider."""
"""Anthropic Claude LLM provider - backward compatible wrapper around LiteLLM."""

import os
from typing import Any

import anthropic

from framework.llm.provider import LLMProvider, LLMResponse, Tool, ToolUse, ToolResult
from framework.llm.provider import LLMProvider, LLMResponse, Tool
from framework.llm.litellm import LiteLLMProvider


class AnthropicProvider(LLMProvider):
    """
    Anthropic Claude LLM provider.

    Uses the Anthropic API to interact with Claude models.
    This is a backward-compatible wrapper that internally uses LiteLLMProvider.
    Existing code using AnthropicProvider will continue to work unchanged,
    while benefiting from LiteLLM's unified interface and features.
    """

    def __init__(

@@ -27,14 +27,13 @@ class AnthropicProvider(LLMProvider):
            api_key: Anthropic API key. If not provided, uses ANTHROPIC_API_KEY env var.
            model: Model to use (default: claude-haiku-4-5-20251001)
        """
        self.api_key = api_key or os.environ.get("ANTHROPIC_API_KEY")
        if not self.api_key:
            raise ValueError(
                "Anthropic API key required. Set ANTHROPIC_API_KEY env var or pass api_key."
            )

        # Delegate to LiteLLMProvider internally.
        self._provider = LiteLLMProvider(
            model=model,
            api_key=api_key,
        )
        self.model = model
        self.client = anthropic.Anthropic(api_key=self.api_key)
        self.api_key = api_key

    def complete(
        self,
@@ -43,34 +42,12 @@ class AnthropicProvider(LLMProvider):
        tools: list[Tool] | None = None,
        max_tokens: int = 1024,
    ) -> LLMResponse:
        """Generate a completion from Claude."""
        kwargs: dict[str, Any] = {
            "model": self.model,
            "max_tokens": max_tokens,
            "messages": messages,
        }

        if system:
            kwargs["system"] = system

        if tools:
            kwargs["tools"] = [self._tool_to_dict(t) for t in tools]

        response = self.client.messages.create(**kwargs)

        # Extract text content
        content = ""
        for block in response.content:
            if block.type == "text":
                content += block.text

        return LLMResponse(
            content=content,
            model=response.model,
            input_tokens=response.usage.input_tokens,
            output_tokens=response.usage.output_tokens,
            stop_reason=response.stop_reason,
            raw_response=response,
        """Generate a completion from Claude (via LiteLLM)."""
        return self._provider.complete(
            messages=messages,
            system=system,
            tools=tools,
            max_tokens=max_tokens,
        )

    def complete_with_tools(
@@ -167,15 +144,3 @@ class AnthropicProvider(LLMProvider):
            stop_reason="max_iterations",
            raw_response=None,
        )

    def _tool_to_dict(self, tool: Tool) -> dict[str, Any]:
        """Convert Tool to Anthropic API format."""
        return {
            "name": tool.name,
            "description": tool.description,
            "input_schema": {
                "type": "object",
                "properties": tool.parameters.get("properties", {}),
                "required": tool.parameters.get("required", []),
            },
        }

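The diff above turns `AnthropicProvider` into a thin facade over the new engine. The pattern in isolation, with toy classes standing in for the framework's real types:

```python
class NewEngine:
    """Stands in for LiteLLMProvider in this sketch."""

    def complete(self, prompt: str) -> str:
        return f"completed: {prompt}"


class LegacyFacade:
    """Keeps the old public surface while delegating to the new engine."""

    def __init__(self) -> None:
        self._provider = NewEngine()

    # Old callers keep calling complete() and never see the swap.
    def complete(self, prompt: str) -> str:
        return self._provider.complete(prompt)
```

This is why the PR can migrate the backend without touching any call site: the facade's signature is unchanged, only its body forwards.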
@@ -0,0 +1,248 @@
"""LiteLLM provider for pluggable multi-provider LLM support.

LiteLLM provides a unified, OpenAI-compatible interface that supports
multiple LLM providers including OpenAI, Anthropic, Gemini, Mistral,
Groq, and local models.

See: https://docs.litellm.ai/docs/providers
"""

import json
from typing import Any

import litellm

from framework.llm.provider import LLMProvider, LLMResponse, Tool, ToolUse, ToolResult


class LiteLLMProvider(LLMProvider):
    """
    LiteLLM-based LLM provider for multi-provider support.

    Supports any model that LiteLLM supports, including:
    - OpenAI: gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-3.5-turbo
    - Anthropic: claude-3-opus, claude-3-sonnet, claude-3-haiku
    - Google: gemini-pro, gemini-1.5-pro, gemini-1.5-flash
    - Mistral: mistral-large, mistral-medium, mistral-small
    - Groq: llama3-70b, mixtral-8x7b
    - Local: ollama/llama3, ollama/mistral
    - And many more...

    Usage:
        # OpenAI
        provider = LiteLLMProvider(model="gpt-4o-mini")

        # Anthropic
        provider = LiteLLMProvider(model="claude-3-haiku-20240307")

        # Google Gemini
        provider = LiteLLMProvider(model="gemini/gemini-1.5-flash")

        # Local Ollama
        provider = LiteLLMProvider(model="ollama/llama3")

        # With custom API base
        provider = LiteLLMProvider(
            model="gpt-4o-mini",
            api_base="https://my-proxy.com/v1",
        )
    """

    def __init__(
        self,
        model: str = "gpt-4o-mini",
        api_key: str | None = None,
        api_base: str | None = None,
        **kwargs: Any,
    ):
        """
        Initialize the LiteLLM provider.

        Args:
            model: Model identifier (e.g., "gpt-4o-mini", "claude-3-haiku-20240307").
                LiteLLM auto-detects the provider from the model name.
            api_key: API key for the provider. If not provided, LiteLLM will
                look for the appropriate env var (OPENAI_API_KEY,
                ANTHROPIC_API_KEY, etc.)
            api_base: Custom API base URL (for proxies or local deployments)
            **kwargs: Additional arguments passed to litellm.completion()
        """
        self.model = model
        self.api_key = api_key
        self.api_base = api_base
        self.extra_kwargs = kwargs

    def complete(
        self,
        messages: list[dict[str, Any]],
        system: str = "",
        tools: list[Tool] | None = None,
        max_tokens: int = 1024,
    ) -> LLMResponse:
        """Generate a completion using LiteLLM."""
        # Prepare messages with system prompt
        full_messages = []
        if system:
            full_messages.append({"role": "system", "content": system})
        full_messages.extend(messages)

        # Build kwargs
        kwargs: dict[str, Any] = {
            "model": self.model,
            "messages": full_messages,
            "max_tokens": max_tokens,
            **self.extra_kwargs,
        }

        if self.api_key:
            kwargs["api_key"] = self.api_key
        if self.api_base:
            kwargs["api_base"] = self.api_base

        # Add tools if provided
        if tools:
            kwargs["tools"] = [self._tool_to_openai_format(t) for t in tools]

        # Make the call
        response = litellm.completion(**kwargs)

        # Extract content
        content = response.choices[0].message.content or ""

        # Get usage info
        usage = response.usage
        input_tokens = usage.prompt_tokens if usage else 0
        output_tokens = usage.completion_tokens if usage else 0

        return LLMResponse(
            content=content,
            model=response.model or self.model,
            input_tokens=input_tokens,
            output_tokens=output_tokens,
            stop_reason=response.choices[0].finish_reason or "",
            raw_response=response,
        )

    def complete_with_tools(
        self,
        messages: list[dict[str, Any]],
        system: str,
        tools: list[Tool],
        tool_executor: callable,
        max_iterations: int = 10,
    ) -> LLMResponse:
        """Run a tool-use loop until the LLM produces a final response."""
        # Prepare messages with system prompt
        current_messages = []
        if system:
            current_messages.append({"role": "system", "content": system})
        current_messages.extend(messages)

        total_input_tokens = 0
        total_output_tokens = 0

        # Convert tools to OpenAI format
        openai_tools = [self._tool_to_openai_format(t) for t in tools]

        for _ in range(max_iterations):
            # Build kwargs
            kwargs: dict[str, Any] = {
                "model": self.model,
                "messages": current_messages,
                "max_tokens": 1024,
                "tools": openai_tools,
                **self.extra_kwargs,
            }

            if self.api_key:
                kwargs["api_key"] = self.api_key
            if self.api_base:
                kwargs["api_base"] = self.api_base

            response = litellm.completion(**kwargs)

            # Track tokens
            usage = response.usage
            if usage:
                total_input_tokens += usage.prompt_tokens
                total_output_tokens += usage.completion_tokens

            choice = response.choices[0]
            message = choice.message

            # Check if we're done (no tool calls)
            if choice.finish_reason == "stop" or not message.tool_calls:
                return LLMResponse(
                    content=message.content or "",
                    model=response.model or self.model,
                    input_tokens=total_input_tokens,
                    output_tokens=total_output_tokens,
                    stop_reason=choice.finish_reason or "stop",
                    raw_response=response,
                )

            # Process tool calls.
            # Add assistant message with tool calls.
            current_messages.append({
                "role": "assistant",
                "content": message.content,
                "tool_calls": [
                    {
                        "id": tc.id,
                        "type": "function",
                        "function": {
                            "name": tc.function.name,
                            "arguments": tc.function.arguments,
                        },
                    }
                    for tc in message.tool_calls
                ],
            })

            # Execute tools and add results.
            for tool_call in message.tool_calls:
                # Parse arguments
                try:
                    args = json.loads(tool_call.function.arguments)
                except json.JSONDecodeError:
                    args = {}

                tool_use = ToolUse(
                    id=tool_call.id,
                    name=tool_call.function.name,
                    input=args,
                )

                result = tool_executor(tool_use)

                # Add tool result message
                current_messages.append({
                    "role": "tool",
                    "tool_call_id": result.tool_use_id,
                    "content": result.content,
                })

        # Max iterations reached
        return LLMResponse(
            content="Max tool iterations reached",
            model=self.model,
            input_tokens=total_input_tokens,
            output_tokens=total_output_tokens,
            stop_reason="max_iterations",
            raw_response=None,
        )

    def _tool_to_openai_format(self, tool: Tool) -> dict[str, Any]:
        """Convert Tool to OpenAI function calling format."""
        return {
            "type": "function",
            "function": {
                "name": tool.name,
                "description": tool.description,
                "parameters": {
                    "type": "object",
                    "properties": tool.parameters.get("properties", {}),
                    "required": tool.parameters.get("required", []),
                },
            },
        }
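The conversion at the end of the file is mechanical; run standalone (with a stand-in `Tool` dataclass, since `framework.llm.provider` is not shown in this PR) it produces the OpenAI function-calling envelope:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class Tool:  # stand-in for framework.llm.provider.Tool
    name: str
    description: str
    parameters: dict[str, Any] = field(default_factory=dict)


def tool_to_openai_format(tool: Tool) -> dict[str, Any]:
    # Same shape as LiteLLMProvider._tool_to_openai_format above
    return {
        "type": "function",
        "function": {
            "name": tool.name,
            "description": tool.description,
            "parameters": {
                "type": "object",
                "properties": tool.parameters.get("properties", {}),
                "required": tool.parameters.get("required", []),
            },
        },
    }


spec = tool_to_openai_format(
    Tool(
        name="get_weather",
        description="Get the weather",
        parameters={"properties": {"location": {"type": "string"}}, "required": ["location"]},
    )
)
```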
@@ -4,7 +4,6 @@ from __future__ import annotations

import asyncio
import json
import os
from dataclasses import dataclass, field
from datetime import datetime
from pathlib import Path
@@ -71,10 +70,10 @@ class AgentOrchestrator:
        self._model = model
        self._message_log: list[AgentMessage] = []

        # Auto-create LLM if API key available
        if self._llm is None and os.environ.get("ANTHROPIC_API_KEY"):
            from framework.llm.anthropic import AnthropicProvider
            self._llm = AnthropicProvider(model=model)
        # Auto-create LLM - LiteLLM auto-detects provider and API key from model name
        if self._llm is None:
            from framework.llm.litellm import LiteLLMProvider
            self._llm = LiteLLMProvider(model=self._model)

    def register(
        self,

@@ -12,6 +12,7 @@ from framework.graph.edge import GraphSpec, EdgeSpec, EdgeCondition
from framework.graph.node import NodeSpec
from framework.graph.executor import GraphExecutor, ExecutionResult
from framework.llm.provider import LLMProvider, Tool, ToolResult, ToolUse
from framework.llm.litellm import LiteLLMProvider
from framework.runner.tool_registry import ToolRegistry
from framework.runtime.core import Runtime

@@ -183,7 +184,8 @@ class AgentRunner:
            goal: Loaded Goal object
            mock_mode: If True, use mock LLM responses
            storage_path: Path for runtime storage (defaults to temp)
            model: Anthropic model to use
            model: Model to use - any LiteLLM-compatible model name
                (e.g., "claude-sonnet-4-20250514", "gpt-4o-mini", "gemini/gemini-pro")
        """
        self.agent_path = agent_path
        self.graph = graph

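The docstring change above leans on LiteLLM's model-naming convention, where the string itself selects the backend. A toy router showing the idea (the mapping below is illustrative only, not LiteLLM's actual resolution logic):

```python
def guess_backend(model: str) -> str:
    # Illustrative only: LiteLLM's real routing is more sophisticated
    if "/" in model:
        return model.split("/", 1)[0]  # e.g. "gemini/gemini-pro" -> "gemini"
    if model.startswith("claude"):
        return "anthropic"
    if model.startswith("gpt"):
        return "openai"
    return "unknown"
```

This is why `AgentRunner` no longer needs an Anthropic-specific `model` parameter: any of the example strings in the docstring routes itself.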
@@ -7,6 +7,7 @@ requires-python = ">=3.11"
dependencies = [
    "pydantic>=2.0",
    "anthropic>=0.40.0",
    "litellm>=1.81.0",
]

[project.optional-dependencies]

@@ -2,6 +2,7 @@
pydantic>=2.0
anthropic>=0.40.0
httpx>=0.27.0
litellm>=1.81.0

# MCP server dependencies
mcp

@@ -0,0 +1,332 @@
"""Tests for LiteLLM provider.

Run with:
    cd core
    pip install litellm pytest
    pytest tests/test_litellm_provider.py -v

For live tests (requires API keys):
    OPENAI_API_KEY=sk-... pytest tests/test_litellm_provider.py -v -m live
"""

import os
import pytest
from unittest.mock import Mock, patch, MagicMock

from framework.llm.litellm import LiteLLMProvider
from framework.llm.anthropic import AnthropicProvider
from framework.llm.provider import LLMProvider, Tool, ToolUse, ToolResult


class TestLiteLLMProviderInit:
    """Test LiteLLMProvider initialization."""

    def test_init_with_defaults(self):
        """Test initialization with default parameters."""
        with patch.dict(os.environ, {"OPENAI_API_KEY": "test-key"}):
            provider = LiteLLMProvider()
            assert provider.model == "gpt-4o-mini"
            assert provider.api_key is None
            assert provider.api_base is None

    def test_init_with_custom_model(self):
        """Test initialization with custom model."""
        with patch.dict(os.environ, {"ANTHROPIC_API_KEY": "test-key"}):
            provider = LiteLLMProvider(model="claude-3-haiku-20240307")
            assert provider.model == "claude-3-haiku-20240307"

    def test_init_with_api_key(self):
        """Test initialization with explicit API key."""
        provider = LiteLLMProvider(model="gpt-4o-mini", api_key="my-api-key")
        assert provider.api_key == "my-api-key"

    def test_init_with_api_base(self):
        """Test initialization with custom API base."""
        provider = LiteLLMProvider(
            model="gpt-4o-mini",
            api_key="my-key",
            api_base="https://my-proxy.com/v1",
        )
        assert provider.api_base == "https://my-proxy.com/v1"

    def test_init_ollama_no_key_needed(self):
        """Test that Ollama models don't require API key."""
        with patch.dict(os.environ, {}, clear=True):
            # Should not raise.
            provider = LiteLLMProvider(model="ollama/llama3")
            assert provider.model == "ollama/llama3"


class TestLiteLLMProviderComplete:
    """Test LiteLLMProvider.complete() method."""

    @patch("litellm.completion")
    def test_complete_basic(self, mock_completion):
        """Test basic completion call."""
        # Mock response
        mock_response = MagicMock()
        mock_response.choices = [MagicMock()]
        mock_response.choices[0].message.content = "Hello! I'm an AI assistant."
        mock_response.choices[0].finish_reason = "stop"
        mock_response.model = "gpt-4o-mini"
        mock_response.usage.prompt_tokens = 10
        mock_response.usage.completion_tokens = 20
        mock_completion.return_value = mock_response

        provider = LiteLLMProvider(model="gpt-4o-mini", api_key="test-key")
        result = provider.complete(
            messages=[{"role": "user", "content": "Hello"}]
        )

        assert result.content == "Hello! I'm an AI assistant."
        assert result.model == "gpt-4o-mini"
        assert result.input_tokens == 10
        assert result.output_tokens == 20
        assert result.stop_reason == "stop"

        # Verify litellm.completion was called correctly
        mock_completion.assert_called_once()
        call_kwargs = mock_completion.call_args[1]
        assert call_kwargs["model"] == "gpt-4o-mini"
        assert call_kwargs["api_key"] == "test-key"

    @patch("litellm.completion")
    def test_complete_with_system_prompt(self, mock_completion):
        """Test completion with system prompt."""
        mock_response = MagicMock()
        mock_response.choices = [MagicMock()]
        mock_response.choices[0].message.content = "Response"
        mock_response.choices[0].finish_reason = "stop"
        mock_response.model = "gpt-4o-mini"
        mock_response.usage.prompt_tokens = 15
        mock_response.usage.completion_tokens = 5
        mock_completion.return_value = mock_response

        provider = LiteLLMProvider(model="gpt-4o-mini", api_key="test-key")
        provider.complete(
            messages=[{"role": "user", "content": "Hello"}],
            system="You are a helpful assistant.",
        )

        call_kwargs = mock_completion.call_args[1]
        messages = call_kwargs["messages"]
        assert messages[0]["role"] == "system"
        assert messages[0]["content"] == "You are a helpful assistant."

    @patch("litellm.completion")
    def test_complete_with_tools(self, mock_completion):
        """Test completion with tools."""
        mock_response = MagicMock()
        mock_response.choices = [MagicMock()]
        mock_response.choices[0].message.content = "Response"
        mock_response.choices[0].finish_reason = "stop"
        mock_response.model = "gpt-4o-mini"
        mock_response.usage.prompt_tokens = 20
        mock_response.usage.completion_tokens = 10
        mock_completion.return_value = mock_response

        provider = LiteLLMProvider(model="gpt-4o-mini", api_key="test-key")

        tools = [
            Tool(
                name="get_weather",
                description="Get the weather for a location",
                parameters={
                    "properties": {
                        "location": {"type": "string", "description": "City name"}
                    },
                    "required": ["location"],
                },
            )
        ]

        provider.complete(
            messages=[{"role": "user", "content": "What's the weather?"}],
            tools=tools,
        )

        call_kwargs = mock_completion.call_args[1]
        assert "tools" in call_kwargs
        assert call_kwargs["tools"][0]["type"] == "function"
        assert call_kwargs["tools"][0]["function"]["name"] == "get_weather"


class TestLiteLLMProviderToolUse:
    """Test LiteLLMProvider.complete_with_tools() method."""

    @patch("litellm.completion")
    def test_complete_with_tools_single_iteration(self, mock_completion):
        """Test tool use with single iteration."""
        # First response: tool call
        tool_call_response = MagicMock()
        tool_call_response.choices = [MagicMock()]
        tool_call_response.choices[0].message.content = None
        tool_call_response.choices[0].message.tool_calls = [MagicMock()]
        tool_call_response.choices[0].message.tool_calls[0].id = "call_123"
        tool_call_response.choices[0].message.tool_calls[0].function.name = "get_weather"
        tool_call_response.choices[0].message.tool_calls[0].function.arguments = '{"location": "London"}'
        tool_call_response.choices[0].finish_reason = "tool_calls"
        tool_call_response.model = "gpt-4o-mini"
        tool_call_response.usage.prompt_tokens = 20
        tool_call_response.usage.completion_tokens = 15

        # Second response: final answer
        final_response = MagicMock()
        final_response.choices = [MagicMock()]
        final_response.choices[0].message.content = "The weather in London is sunny."
        final_response.choices[0].message.tool_calls = None
        final_response.choices[0].finish_reason = "stop"
        final_response.model = "gpt-4o-mini"
        final_response.usage.prompt_tokens = 30
        final_response.usage.completion_tokens = 10

        mock_completion.side_effect = [tool_call_response, final_response]

        provider = LiteLLMProvider(model="gpt-4o-mini", api_key="test-key")

        tools = [
            Tool(
                name="get_weather",
                description="Get the weather",
                parameters={"properties": {"location": {"type": "string"}}, "required": ["location"]},
            )
        ]

        def tool_executor(tool_use: ToolUse) -> ToolResult:
            return ToolResult(
                tool_use_id=tool_use.id,
                content="Sunny, 22C",
                is_error=False,
            )

        result = provider.complete_with_tools(
            messages=[{"role": "user", "content": "What's the weather in London?"}],
            system="You are a weather assistant.",
            tools=tools,
            tool_executor=tool_executor,
        )

        assert result.content == "The weather in London is sunny."
        assert result.input_tokens == 50  # 20 + 30
        assert result.output_tokens == 25  # 15 + 10
        assert mock_completion.call_count == 2


class TestToolConversion:
    """Test tool format conversion."""

    def test_tool_to_openai_format(self):
        """Test converting Tool to OpenAI format."""
        provider = LiteLLMProvider(model="gpt-4o-mini", api_key="test-key")

        tool = Tool(
            name="search",
            description="Search the web",
            parameters={
                "properties": {
"query": {"type": "string", "description": "Search query"}
|
||||
},
|
||||
"required": ["query"]
|
||||
}
|
||||
)
|
||||
|
||||
result = provider._tool_to_openai_format(tool)
|
||||
|
||||
assert result["type"] == "function"
|
||||
assert result["function"]["name"] == "search"
|
||||
assert result["function"]["description"] == "Search the web"
|
||||
assert result["function"]["parameters"]["properties"]["query"]["type"] == "string"
|
||||
assert result["function"]["parameters"]["required"] == ["query"]
|
||||
|
||||
|
||||
class TestAnthropicProviderBackwardCompatibility:
|
||||
"""Test AnthropicProvider backward compatibility with LiteLLM backend."""
|
||||
|
||||
def test_anthropic_provider_is_llm_provider(self):
|
||||
"""Test that AnthropicProvider implements LLMProvider interface."""
|
||||
provider = AnthropicProvider(api_key="test-key")
|
||||
assert isinstance(provider, LLMProvider)
|
||||
|
||||
def test_anthropic_provider_init_defaults(self):
|
||||
"""Test AnthropicProvider initialization with defaults."""
|
||||
provider = AnthropicProvider(api_key="test-key")
|
||||
assert provider.model == "claude-sonnet-4-20250514"
|
||||
assert provider.api_key == "test-key"
|
||||
|
||||
def test_anthropic_provider_init_custom_model(self):
|
||||
"""Test AnthropicProvider initialization with custom model."""
|
||||
provider = AnthropicProvider(api_key="test-key", model="claude-3-haiku-20240307")
|
||||
assert provider.model == "claude-3-haiku-20240307"
|
||||
|
||||
def test_anthropic_provider_uses_litellm_internally(self):
|
||||
"""Test that AnthropicProvider delegates to LiteLLMProvider."""
|
||||
provider = AnthropicProvider(api_key="test-key", model="claude-3-haiku-20240307")
|
||||
assert isinstance(provider._provider, LiteLLMProvider)
|
||||
assert provider._provider.model == "claude-3-haiku-20240307"
|
||||
assert provider._provider.api_key == "test-key"
|
||||
|
||||
@patch("litellm.completion")
|
||||
def test_anthropic_provider_complete(self, mock_completion):
|
||||
"""Test AnthropicProvider.complete() delegates to LiteLLM."""
|
||||
mock_response = MagicMock()
|
||||
mock_response.choices = [MagicMock()]
|
||||
mock_response.choices[0].message.content = "Hello from Claude!"
|
||||
mock_response.choices[0].finish_reason = "stop"
|
||||
mock_response.model = "claude-3-haiku-20240307"
|
||||
mock_response.usage.prompt_tokens = 10
|
||||
mock_response.usage.completion_tokens = 5
|
||||
mock_completion.return_value = mock_response
|
||||
|
||||
provider = AnthropicProvider(api_key="test-key", model="claude-3-haiku-20240307")
|
||||
result = provider.complete(
|
||||
messages=[{"role": "user", "content": "Hello"}],
|
||||
system="You are helpful.",
|
||||
max_tokens=100
|
||||
)
|
||||
|
||||
assert result.content == "Hello from Claude!"
|
||||
assert result.model == "claude-3-haiku-20240307"
|
||||
assert result.input_tokens == 10
|
||||
assert result.output_tokens == 5
|
||||
|
||||
mock_completion.assert_called_once()
|
||||
call_kwargs = mock_completion.call_args[1]
|
||||
assert call_kwargs["model"] == "claude-3-haiku-20240307"
|
||||
assert call_kwargs["api_key"] == "test-key"
|
||||
|
||||
@patch("litellm.completion")
|
||||
def test_anthropic_provider_complete_with_tools(self, mock_completion):
|
||||
"""Test AnthropicProvider.complete_with_tools() delegates to LiteLLM."""
|
||||
# Mock a simple response (no tool calls)
|
||||
mock_response = MagicMock()
|
||||
mock_response.choices = [MagicMock()]
|
||||
mock_response.choices[0].message.content = "The time is 3:00 PM."
|
||||
mock_response.choices[0].message.tool_calls = None
|
||||
mock_response.choices[0].finish_reason = "stop"
|
||||
mock_response.model = "claude-3-haiku-20240307"
|
||||
mock_response.usage.prompt_tokens = 20
|
||||
mock_response.usage.completion_tokens = 10
|
||||
mock_completion.return_value = mock_response
|
||||
|
||||
provider = AnthropicProvider(api_key="test-key", model="claude-3-haiku-20240307")
|
||||
|
||||
tools = [
|
||||
Tool(
|
||||
name="get_time",
|
||||
description="Get current time",
|
||||
parameters={"properties": {}, "required": []}
|
||||
)
|
||||
]
|
||||
|
||||
def tool_executor(tool_use: ToolUse) -> ToolResult:
|
||||
return ToolResult(tool_use_id=tool_use.id, content="3:00 PM", is_error=False)
|
||||
|
||||
result = provider.complete_with_tools(
|
||||
messages=[{"role": "user", "content": "What time is it?"}],
|
||||
system="You are a time assistant.",
|
||||
tools=tools,
|
||||
tool_executor=tool_executor
|
||||
)
|
||||
|
||||
assert result.content == "The time is 3:00 PM."
|
||||
mock_completion.assert_called_once()
|
||||
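The OpenAI function-calling shape that `TestToolConversion` checks can be sketched as a standalone helper. This is a hedged illustration only, not the framework's actual `_tool_to_openai_format` implementation; the `Tool` dataclass below is a local stand-in for the framework's `Tool` type.

```python
from dataclasses import dataclass, field


@dataclass
class Tool:
    """Stand-in for the framework's Tool type (assumed shape)."""
    name: str
    description: str
    parameters: dict = field(default_factory=dict)


def tool_to_openai_format(tool: Tool) -> dict:
    """Wrap a Tool in the OpenAI function-calling tool schema."""
    return {
        "type": "function",
        "function": {
            "name": tool.name,
            "description": tool.description,
            "parameters": {
                "type": "object",
                "properties": tool.parameters.get("properties", {}),
                "required": tool.parameters.get("required", []),
            },
        },
    }


search = Tool(
    name="search",
    description="Search the web",
    parameters={"properties": {"query": {"type": "string"}}, "required": ["query"]},
)
print(tool_to_openai_format(search)["function"]["name"])  # search
```

The assertions in the test above (`result["type"] == "function"`, nested `function.name`, etc.) all pass against this shape.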
@@ -0,0 +1,82 @@
"""Tests for AgentOrchestrator LiteLLM integration.

Run with:
    cd core
    pytest tests/test_orchestrator.py -v
"""

from unittest.mock import Mock, patch

from framework.llm.provider import LLMProvider
from framework.llm.litellm import LiteLLMProvider
from framework.runner.orchestrator import AgentOrchestrator


class TestOrchestratorLLMInitialization:
    """Test AgentOrchestrator LLM provider initialization."""

    def test_auto_creates_litellm_provider_when_no_llm_passed(self):
        """Test that LiteLLMProvider is auto-created when no llm is passed."""
        with patch.object(LiteLLMProvider, '__init__', return_value=None) as mock_init:
            orchestrator = AgentOrchestrator()

        mock_init.assert_called_once_with(model="claude-sonnet-4-20250514")
        assert orchestrator._llm is not None

    def test_uses_custom_model_parameter(self):
        """Test that custom model parameter is passed to LiteLLMProvider."""
        with patch.object(LiteLLMProvider, '__init__', return_value=None) as mock_init:
            orchestrator = AgentOrchestrator(model="gpt-4o")

        mock_init.assert_called_once_with(model="gpt-4o")

    def test_supports_openai_model_names(self):
        """Test that OpenAI model names are supported."""
        with patch.object(LiteLLMProvider, '__init__', return_value=None) as mock_init:
            orchestrator = AgentOrchestrator(model="gpt-4o-mini")

        mock_init.assert_called_once_with(model="gpt-4o-mini")
        assert orchestrator._model == "gpt-4o-mini"

    def test_supports_anthropic_model_names(self):
        """Test that Anthropic model names are supported."""
        with patch.object(LiteLLMProvider, '__init__', return_value=None) as mock_init:
            orchestrator = AgentOrchestrator(model="claude-3-haiku-20240307")

        mock_init.assert_called_once_with(model="claude-3-haiku-20240307")
        assert orchestrator._model == "claude-3-haiku-20240307"

    def test_skips_auto_creation_when_llm_passed(self):
        """Test that auto-creation is skipped when llm is explicitly passed."""
        mock_llm = Mock(spec=LLMProvider)

        with patch.object(LiteLLMProvider, '__init__', return_value=None) as mock_init:
            orchestrator = AgentOrchestrator(llm=mock_llm)

        mock_init.assert_not_called()
        assert orchestrator._llm is mock_llm

    def test_model_attribute_stored_correctly(self):
        """Test that _model attribute is stored correctly."""
        with patch.object(LiteLLMProvider, '__init__', return_value=None):
            orchestrator = AgentOrchestrator(model="gemini/gemini-1.5-flash")

        assert orchestrator._model == "gemini/gemini-1.5-flash"


class TestOrchestratorLLMProviderType:
    """Test that orchestrator uses correct LLM provider type."""

    def test_llm_is_litellm_provider_instance(self):
        """Test that auto-created _llm is a LiteLLMProvider instance."""
        orchestrator = AgentOrchestrator()

        assert isinstance(orchestrator._llm, LiteLLMProvider)

    def test_llm_implements_llm_provider_interface(self):
        """Test that _llm implements LLMProvider interface."""
        orchestrator = AgentOrchestrator()

        assert isinstance(orchestrator._llm, LLMProvider)
        assert hasattr(orchestrator._llm, 'complete')
        assert hasattr(orchestrator._llm, 'complete_with_tools')
Binary file not shown.
After Width: | Height: | Size: 253 KiB

Binary file not shown.
After Width: | Height: | Size: 253 KiB
@@ -0,0 +1,157 @@

# 🚀 Software Development Engineer

**Location:** San Francisco, CA (Hybrid) or Remote
**Type:** Full-time
**Team:** Engineering

---

## About Aden

We're building the future of AI agents. Aden is an open-source framework for creating self-improving, production-ready AI agents with built-in cost controls, human-in-the-loop capabilities, and comprehensive observability.

Our mission: Make AI agents reliable enough for real-world production use.

---

## The Role

We're looking for a Software Development Engineer to help build and scale our AI agent platform. You'll work across the full stack, from our React dashboard to our Node.js backend, contributing to core infrastructure that powers autonomous AI systems.

This is an opportunity to work on cutting-edge AI infrastructure alongside a small, experienced team passionate about shipping great software.

---

## What You'll Do

- Build and maintain features across our full-stack TypeScript codebase
- Design and implement APIs for agent management, monitoring, and control
- Work with real-time systems (WebSockets, event streaming)
- Optimize database performance (TimescaleDB, MongoDB, Redis)
- Contribute to our Model Context Protocol (MCP) server and tooling
- Collaborate on architecture decisions for scalability and reliability
- Write clean, tested, well-documented code
- Participate in code reviews and help maintain code quality

---

## Tech Stack

**Frontend (Honeycomb Dashboard)**
- React 18 + TypeScript
- Vite
- Tailwind CSS + Radix UI
- Zustand (state management)
- TanStack Query
- Recharts + Vega (data visualization)
- Socket.io (real-time updates)

**Backend (Hive)**
- Node.js + Express + TypeScript
- Socket.io (WebSocket)
- Model Context Protocol (MCP)
- Zod (validation)
- Passport + JWT (authentication)

**Data Layer**
- TimescaleDB (time-series metrics)
- MongoDB (policies, configuration)
- Redis (caching, pub/sub)

**Infrastructure**
- Docker + Docker Compose
- Kubernetes + Kustomize
- GitHub Actions (CI/CD)
- Nginx

---

## What We're Looking For

**Required:**
- 2+ years of professional software development experience
- Strong proficiency in TypeScript and Node.js
- Experience with React and modern frontend development
- Familiarity with SQL and NoSQL databases
- Understanding of RESTful APIs and WebSocket communication
- Comfortable with Git and collaborative development workflows
- Strong problem-solving skills and attention to detail

**Nice to Have:**
- Experience with AI/LLM applications or agent frameworks
- Knowledge of time-series databases (TimescaleDB, InfluxDB)
- Kubernetes and container orchestration experience
- Experience with real-time systems at scale
- Contributions to open-source projects
- Familiarity with Model Context Protocol (MCP)

---

## What We Offer

- Competitive salary + equity
- Health, dental, and vision insurance
- Flexible work arrangements (hybrid/remote)
- Learning & development budget
- Home office setup stipend
- Opportunity to work on open-source AI infrastructure
- Small team, big impact

---

## How to Apply

**Show us what you can do by contributing to our open-source project:**

1. **Solve an existing issue**
   - Browse our [GitHub Issues](https://github.com/adenhq/hive/issues)
   - Look for issues labeled `good first issue` or `help wanted`
   - Comment on the issue to claim it
   - Submit a Pull Request with your solution

2. **Create new issues**
   - Found a bug? Report it with clear reproduction steps
   - Have an idea? Open a feature request with your proposal
   - Spotted documentation gaps? Suggest improvements
   - Quality issues that show you understand the codebase stand out

3. **Submit Pull Requests**
   - Fix bugs, add features, or improve documentation
   - Follow our contribution guidelines
   - Write clear PR descriptions explaining your changes
   - Respond to code review feedback

4. **Submit your application:**
   - Email: `contact@adenhq.com`
   - Subject: `[SDE] Your Name`
   - Include:
     - Resume/CV
     - GitHub profile
     - Links to your Issues and/or PRs on our repo
     - Brief intro about yourself

5. **What happens next:**
   - We review your contributions (1-2 weeks)
   - Technical interview (60 min)
   - Team interview (45 min)
   - Offer 🎉

---

## Why Join Us?

- **Impact:** Your code will power AI agents used by developers worldwide
- **Open Source:** Everything we build is open source
- **Learning:** Work with cutting-edge AI and distributed systems
- **Culture:** Small team, low ego, high trust, ship fast
- **Growth:** Early-stage company with room to grow

---

*Aden is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.*

---

**Questions?** Email us at `contact@adenhq.com` or open an issue on [GitHub](https://github.com/adenhq/hive).

Made with 🔥 Passion in San Francisco
@@ -2,6 +2,14 @@

Welcome to the Aden Engineering Challenges! These quizzes are designed for students and applicants who want to join the Aden team or contribute to our open-source projects.

---

## 💼 We're Hiring!

**[Software Development Engineer](./00-job-post.md)** - Full-stack TypeScript, React, Node.js, AI agents

---

## How It Works

1. **Choose your track** based on your interests and skill level
@@ -0,0 +1,9 @@
node_modules
dist
.env
.env.*
*.log
.DS_Store
.git
.vscode
.idea
+2
-1
@@ -1,5 +1,6 @@
# Development Dockerfile with hot reload
FROM node:20-alpine
# The 'production' alias allows this to work with docker-compose.yml target
FROM node:20-alpine AS production

ARG NPM_TOKEN

@@ -0,0 +1,24 @@
|
||||
/** @type {import('ts-jest').JestConfigWithTsJest} */
|
||||
module.exports = {
|
||||
preset: 'ts-jest',
|
||||
testEnvironment: 'node',
|
||||
roots: ['<rootDir>/tests'],
|
||||
testMatch: ['**/*.test.ts'],
|
||||
moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx', 'json', 'node'],
|
||||
collectCoverageFrom: [
|
||||
'src/**/*.ts',
|
||||
'!src/**/*.d.ts',
|
||||
'!src/index.ts',
|
||||
],
|
||||
coverageDirectory: 'coverage',
|
||||
coverageReporters: ['text', 'lcov', 'html'],
|
||||
setupFilesAfterEnv: ['<rootDir>/tests/setup.ts'],
|
||||
transform: {
|
||||
'^.+\\.tsx?$': ['ts-jest', {
|
||||
tsconfig: 'tsconfig.test.json'
|
||||
}]
|
||||
},
|
||||
testTimeout: 10000,
|
||||
clearMocks: true,
|
||||
restoreMocks: true,
|
||||
};
|
||||
+7
-1
@@ -9,7 +9,9 @@
    "build": "tsc && npm run build:copy-sql",
    "build:copy-sql": "find src -name '*.sql' -exec sh -c 'mkdir -p dist/$(dirname ${1#src/}) && cp \"$1\" dist/${1#src/}' _ {} \\;",
    "start": "node dist/index.js",
    "test": "jest --passWithNoTests",
    "test": "jest",
    "test:watch": "jest --watch",
    "test:coverage": "jest --coverage",
    "test:mcp": "ts-node --transpile-only scripts/test-mcp.ts",
    "test:mcp:quick": "./scripts/test-mcp-curl.sh",
    "lint": "eslint src/",
@@ -41,16 +43,20 @@
    "@types/compression": "^1.7.5",
    "@types/cors": "^2.8.17",
    "@types/express": "^4.17.21",
    "@types/jest": "^30.0.0",
    "@types/jsonwebtoken": "^9.0.5",
    "@types/morgan": "^1.9.9",
    "@types/node": "^20.10.0",
    "@types/passport": "^1.0.16",
    "@types/passport-jwt": "^4.0.1",
    "@types/pg": "^8.10.9",
    "@types/supertest": "^6.0.3",
    "@typescript-eslint/eslint-plugin": "^6.14.0",
    "@typescript-eslint/parser": "^6.14.0",
    "eslint": "^8.56.0",
    "jest": "^29.7.0",
    "supertest": "^7.2.2",
    "ts-jest": "^29.4.6",
    "ts-node": "^10.9.2",
    "ts-node-dev": "^2.0.0",
    "typescript": "^5.3.0"

@@ -103,7 +103,7 @@ router.post(
      current_team_id: result.current_team_id,
      create_time: result.created_at,
    });
  } catch (err) {
  } catch (err: unknown) {
    const error = err as { message?: string; code?: string };
    console.error("[UserController] login-v2 error:", error.message);

@@ -219,11 +219,12 @@ router.post("/register", async (req: Request, res: Response) => {
      current_team_id: result.current_team_id,
      create_time: result.created_at,
    });
  } catch (err: any) {
    console.error("[UserController] register error:", err.message);
  } catch (err: unknown) {
    const error = err as { message?: string; code?: string };
    console.error("[UserController] register error:", error.message);

    // Handle specific error codes
    if (err.code === "EMAIL_EXISTS") {
    if (error.code === "EMAIL_EXISTS") {
      return res.status(409).json({
        success: false,
        msg: "Email already registered",
@@ -278,8 +279,9 @@ router.get("/profile", async (req: Request, res: Response) => {
        roles: user.roles || ["user"],
      },
    });
  } catch (err: any) {
    console.error("[UserController] /profile error:", err.message);
  } catch (err: unknown) {
    const error = err as { message?: string };
    console.error("[UserController] /profile error:", error.message);
    res.status(500).json({
      success: false,
      msg: "Failed to get user profile",
@@ -321,8 +323,9 @@ router.put("/profile", async (req: Request, res: Response) => {
    }

    res.json({ message: "Profile updated successfully" });
  } catch (err: any) {
    console.error("[UserController] PUT /profile error:", err.message);
  } catch (err: unknown) {
    const error = err as { message?: string };
    console.error("[UserController] PUT /profile error:", error.message);
    res.status(500).json({
      success: false,
      msg: "Failed to update profile",
@@ -368,8 +371,9 @@ router.get("/me", async (req: Request, res: Response) => {
        avatar_url: user.avatar_url,
      },
    });
  } catch (err: any) {
    console.error("[UserController] /me error:", err.message);
  } catch (err: unknown) {
    const error = err as { message?: string };
    console.error("[UserController] /me error:", error.message);
    res.status(500).json({
      success: false,
      msg: "Failed to get user info",
@@ -409,8 +413,9 @@ router.get("/get-dev-tokens", async (req: Request, res: Response) => {
      success: true,
      data: tokens,
    });
  } catch (err: any) {
    console.error("[UserController] /get-dev-tokens error:", err.message);
  } catch (err: unknown) {
    const error = err as { message?: string };
    console.error("[UserController] /get-dev-tokens error:", error.message);
    res.status(500).json({
      success: false,
      msg: "Failed to get API tokens",
@@ -460,8 +465,9 @@ router.post("/generate-dev-token", async (req: Request, res: Response) => {
      success: true,
      data: tokenResult,
    });
  } catch (err: any) {
    console.error("[UserController] /generate-dev-token error:", err.message);
  } catch (err: unknown) {
    const error = err as { message?: string };
    console.error("[UserController] /generate-dev-token error:", error.message);
    res.status(500).json({
      success: false,
      msg: "Failed to generate API token",
@@ -521,8 +527,9 @@ router.get("/settings", async (req: Request, res: Response) => {
      success: true,
      data: uiSettings,
    });
  } catch (err: any) {
    console.error("[UserController] GET /settings error:", err.message);
  } catch (err: unknown) {
    const error = err as { message?: string };
    console.error("[UserController] GET /settings error:", error.message);
    res.status(500).json({
      success: false,
      msg: "Failed to get settings",
@@ -606,8 +613,9 @@ router.put("/settings", async (req: Request, res: Response) => {
      success: true,
      data: uiSettings,
    });
  } catch (err: any) {
    console.error("[UserController] PUT /settings error:", err.message);
  } catch (err: unknown) {
    const error = err as { message?: string };
    console.error("[UserController] PUT /settings error:", error.message);
    res.status(500).json({
      success: false,
      msg: "Failed to update settings",

@@ -0,0 +1,49 @@
|
||||
/**
|
||||
* Health Endpoint Tests
|
||||
*
|
||||
* Example test file demonstrating how to test API endpoints with supertest.
|
||||
* Use this as a template for writing additional endpoint tests.
|
||||
*/
|
||||
|
||||
import request from 'supertest';
|
||||
import { createFullTestApp, TestAppResult } from '../utils/test-app';
|
||||
|
||||
describe('GET /health', () => {
|
||||
let testApp: TestAppResult;
|
||||
|
||||
beforeEach(async () => {
|
||||
testApp = await createFullTestApp();
|
||||
});
|
||||
|
||||
it('should return 200 OK with correct response schema', async () => {
|
||||
const response = await request(testApp.app)
|
||||
.get('/health')
|
||||
.expect(200)
|
||||
.expect('Content-Type', /application\/json/);
|
||||
|
||||
expect(response.body).toMatchObject({
|
||||
status: 'ok',
|
||||
service: 'aden-hive',
|
||||
timestamp: expect.any(String),
|
||||
userDbType: 'postgres',
|
||||
});
|
||||
});
|
||||
|
||||
it('should not require authentication', async () => {
|
||||
const response = await request(testApp.app)
|
||||
.get('/health')
|
||||
.expect(200);
|
||||
|
||||
expect(response.body.status).toBe('ok');
|
||||
});
|
||||
|
||||
it('should reflect database type configuration', async () => {
|
||||
const mysqlApp = await createFullTestApp({ dbType: 'mysql' });
|
||||
|
||||
const response = await request(mysqlApp.app)
|
||||
.get('/health')
|
||||
.expect(200);
|
||||
|
||||
expect(response.body.userDbType).toBe('mysql');
|
||||
});
|
||||
});
|
||||
@@ -0,0 +1,29 @@
|
||||
/**
|
||||
* Jest Global Setup
|
||||
*
|
||||
* Configures environment variables and global mocks before all tests.
|
||||
*/
|
||||
|
||||
import { clearGlobalMongoMocks } from './utils/db-mocks';
|
||||
import { cleanupPassportStrategies } from './utils/test-app';
|
||||
|
||||
// Set test environment variables before any imports
|
||||
process.env.NODE_ENV = 'test';
|
||||
process.env.PORT = '4001';
|
||||
process.env.JWT_SECRET = 'test-jwt-secret-for-testing-only';
|
||||
process.env.JWT_EXPIRES_IN = '1h';
|
||||
|
||||
// Cleanup after each test to prevent state leakage
|
||||
afterEach(() => {
|
||||
jest.clearAllMocks();
|
||||
clearGlobalMongoMocks();
|
||||
cleanupPassportStrategies();
|
||||
});
|
||||
|
||||
// Final cleanup after all tests complete
|
||||
afterAll(() => {
|
||||
clearGlobalMongoMocks();
|
||||
cleanupPassportStrategies();
|
||||
});
|
||||
|
||||
export {};
|
||||
@@ -0,0 +1,124 @@
|
||||
/**
|
||||
* Authentication Mock Utilities
|
||||
*
|
||||
* Provides utilities for mocking JWT authentication and user context in tests.
|
||||
*/
|
||||
|
||||
import jwt from 'jsonwebtoken';
|
||||
|
||||
const TEST_JWT_SECRET_FALLBACK = 'test-jwt-secret-for-testing-only';
|
||||
|
||||
function getTestJwtSecret(): string {
|
||||
return process.env.JWT_SECRET || TEST_JWT_SECRET_FALLBACK;
|
||||
}
|
||||
|
||||
// =============================================================================
|
||||
// User Factory
|
||||
// =============================================================================
|
||||
|
||||
export interface MockUser {
|
||||
id: number;
|
||||
email: string;
|
||||
current_team_id: number;
|
||||
firstname?: string;
|
||||
lastname?: string;
|
||||
name?: string;
|
||||
roles?: string[];
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a mock user with sensible defaults
|
||||
*/
|
||||
export function createMockUser(overrides: Partial<MockUser> = {}): MockUser {
|
||||
return {
|
||||
id: 1,
|
||||
email: 'test@example.com',
|
||||
current_team_id: 1,
|
||||
firstname: 'Test',
|
||||
lastname: 'User',
|
||||
name: 'Test User',
|
||||
roles: ['user'],
|
||||
...overrides,
|
||||
};
|
||||
}
|
||||
|
||||
// =============================================================================
|
||||
// JWT Token Generation
|
||||
// =============================================================================
|
||||
|
||||
export interface TokenPayload {
|
||||
id: number;
|
||||
email: string;
|
||||
current_team_id: number;
|
||||
[key: string]: unknown;
|
||||
}
|
||||
|
||||
/**
|
||||
* Generate a valid JWT token for testing
|
||||
*/
|
||||
export function generateTestToken(
|
||||
payload: Partial<TokenPayload> = {},
|
||||
options: { expiresIn?: number; secret?: string } = {}
|
||||
): string {
|
||||
const { expiresIn = 3600, secret = getTestJwtSecret() } = options;
|
||||
|
||||
const defaultPayload: TokenPayload = {
|
||||
id: 1,
|
||||
email: 'test@example.com',
|
||||
current_team_id: 1,
|
||||
...payload,
|
||||
};
|
||||
|
||||
return jwt.sign(defaultPayload, secret, { expiresIn } as jwt.SignOptions);
|
||||
}
|
||||
|
||||
// =============================================================================
|
||||
// Mock User Database Service
|
||||
// =============================================================================
|
||||
|
||||
/**
|
||||
* Minimal MockUserDbService interface for testing
|
||||
*/
|
||||
export interface MockUserDbService {
|
||||
findByToken: jest.Mock;
|
||||
login: jest.Mock;
|
||||
dbType: 'postgres' | 'mysql';
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a mock userDbService for testing
|
||||
*/
|
||||
export function createMockUserDbService(
|
||||
user: MockUser = createMockUser(),
|
||||
options: { dbType?: 'postgres' | 'mysql' } = {}
|
||||
): MockUserDbService {
|
||||
const { dbType = 'postgres' } = options;
|
||||
|
||||
return {
|
||||
findByToken: jest.fn().mockResolvedValue(user),
|
||||
login: jest.fn().mockResolvedValue({
|
||||
token: generateTestToken({ id: user.id, email: user.email, current_team_id: user.current_team_id }),
|
||||
email: user.email,
|
||||
firstname: user.firstname,
|
||||
lastname: user.lastname,
|
||||
name: user.name,
|
||||
current_team_id: user.current_team_id,
|
||||
created_at: new Date(),
|
||||
}),
|
||||
dbType,
|
||||
};
|
||||
}
|
||||
|
||||
// =============================================================================
|
||||
// Request Headers Helper
|
||||
// =============================================================================
|
||||
|
||||
/**
|
||||
* Create authorization header object for supertest
|
||||
*/
|
||||
export function authHeader(token?: string): Record<string, string> {
|
||||
const finalToken = token || generateTestToken();
|
||||
return {
|
||||
Authorization: `Bearer ${finalToken}`,
|
||||
};
|
||||
}
|
||||
@@ -0,0 +1,142 @@
/**
 * Database Mock Utilities
 *
 * Provides mock factories for PostgreSQL and MongoDB connections.
 * Use these to create isolated test environments without real database connections.
 */

import { QueryResult } from 'pg';

// =============================================================================
// PostgreSQL Mocks
// =============================================================================

export interface MockQueryResult<T = Record<string, unknown>> extends Partial<QueryResult<T>> {
  rows: T[];
  rowCount?: number;
}

export interface MockPoolClient {
  query: jest.Mock;
  release: jest.Mock;
}

export interface MockPool {
  connect: jest.Mock<Promise<MockPoolClient>>;
  query: jest.Mock;
  end: jest.Mock;
}

/**
 * Create a mock PostgreSQL pool client
 */
export function createMockPoolClient(defaultRows: unknown[] = []): MockPoolClient {
  return {
    query: jest.fn().mockResolvedValue({ rows: defaultRows, rowCount: defaultRows.length }),
    release: jest.fn(),
  };
}

/**
 * Create a mock PostgreSQL pool
 */
export function createMockPool(defaultRows: unknown[] = []): MockPool {
  return {
    connect: jest.fn().mockImplementation(() => {
      return Promise.resolve(createMockPoolClient(defaultRows));
    }),
    query: jest.fn().mockResolvedValue({ rows: defaultRows, rowCount: defaultRows.length }),
    end: jest.fn().mockResolvedValue(undefined),
  };
}

// =============================================================================
// MongoDB Mocks
// =============================================================================

export interface MockCollection {
  find: jest.Mock;
  findOne: jest.Mock;
  insertOne: jest.Mock;
  updateOne: jest.Mock;
  deleteOne: jest.Mock;
}

export interface MockDb {
  collection: jest.Mock<MockCollection>;
}

export interface MockMongoClient {
  connect: jest.Mock;
  db: jest.Mock<MockDb>;
  close: jest.Mock;
}

/**
 * Create a mock MongoDB collection
 */
export function createMockCollection(defaultDocs: unknown[] = []): MockCollection {
  const cursor = {
    toArray: jest.fn().mockResolvedValue(defaultDocs),
  };

  return {
    find: jest.fn().mockReturnValue(cursor),
    findOne: jest.fn().mockResolvedValue(defaultDocs[0] || null),
    insertOne: jest.fn().mockResolvedValue({ insertedId: 'mock-id' }),
    updateOne: jest.fn().mockResolvedValue({ matchedCount: 1, modifiedCount: 1 }),
    deleteOne: jest.fn().mockResolvedValue({ deletedCount: 1 }),
  };
}

/**
 * Create a mock MongoDB database
 */
export function createMockDb(collections: Record<string, MockCollection> = {}): MockDb {
  return {
    collection: jest.fn().mockImplementation((name: string) => {
      return collections[name] || createMockCollection();
    }),
  };
}

/**
 * Create a mock MongoDB client
 */
export function createMockMongoClient(dbs: Record<string, MockDb> = {}): MockMongoClient {
  return {
    connect: jest.fn().mockResolvedValue(undefined),
    db: jest.fn().mockImplementation((name: string) => {
      return dbs[name] || createMockDb();
    }),
    close: jest.fn().mockResolvedValue(undefined),
  };
}

/**
 * Setup global MongoDB mocks (for services that use global._ACHO_MG_DB)
 */
export function setupGlobalMongoMocks(collections: Record<string, MockCollection> = {}): void {
  const mockDb = createMockDb(collections);
  const mockClient = createMockMongoClient({ erp: mockDb, aden: mockDb });

  (global as Record<string, unknown>)._ACHO_MG_DB = mockClient;
  (global as Record<string, unknown>)._ACHO_MDB_CONFIG = {
    ERP_DBNAME: 'erp',
    DBNAME: 'aden',
  };
  (global as Record<string, unknown>)._ACHO_MDB_COLLECTIONS = {
    ADEN_CONTROL_POLICIES: 'aden_control_policies',
    ADEN_CONTROL_CONTENT: 'aden_control_content',
    LLM_PRICING: 'llm_pricing',
  };
}

/**
 * Clear global MongoDB mocks
 */
export function clearGlobalMongoMocks(): void {
  delete (global as Record<string, unknown>)._ACHO_MG_DB;
  delete (global as Record<string, unknown>)._ACHO_MDB_CONFIG;
  delete (global as Record<string, unknown>)._ACHO_MDB_COLLECTIONS;
}
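The factories above lean on `jest.fn()` for call tracking, but the underlying pattern is just closures over canned data. A framework-free sketch of the same idea (the names `makeResult`, `createStubClient`, and `demo` are illustrative, not part of the file above):

```typescript
// Framework-free sketch of the mock-factory pattern used by
// createMockPoolClient: each factory closes over canned rows and returns an
// object whose query method resolves to a pg-style { rows, rowCount } shape.
type StubResult = { rows: unknown[]; rowCount: number };

function makeResult(rows: unknown[]): StubResult {
  return { rows, rowCount: rows.length };
}

function createStubClient(defaultRows: unknown[] = []) {
  return {
    // Every query resolves to the same canned result, regardless of SQL.
    query: async (_sql: string): Promise<StubResult> => makeResult(defaultRows),
    release: (): void => {},
  };
}

async function demo(): Promise<number> {
  const client = createStubClient([{ id: 1 }, { id: 2 }]);
  const result = await client.query('SELECT * FROM users');
  client.release();
  return result.rowCount;
}

demo().then((count) => console.log(count)); // prints 2
```

The jest versions add call recording (`mock.calls`) on top of this shape, which is what lets tests assert which SQL was issued.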
@@ -0,0 +1,9 @@
/**
 * Test Utilities Index
 *
 * Re-exports all test utilities for convenient importing.
 */

export * from './db-mocks';
export * from './auth-mocks';
export * from './test-app';
@@ -0,0 +1,118 @@
/**
 * Test Application Factory
 *
 * Creates isolated Express app instances for testing with mocked dependencies.
 */

import express, { Express, Request, Response, NextFunction } from 'express';
import cors from 'cors';
import passport from 'passport';
import { Strategy as JwtStrategy, ExtractJwt } from 'passport-jwt';
import { createMockPool, setupGlobalMongoMocks, MockPool } from './db-mocks';
import { createMockUser, createMockUserDbService, MockUser, MockUserDbService } from './auth-mocks';

const TEST_JWT_SECRET_FALLBACK = 'test-jwt-secret-for-testing-only';

function getTestJwtSecret(): string {
  return process.env.JWT_SECRET || TEST_JWT_SECRET_FALLBACK;
}

const TEST_JWT_STRATEGY_NAME = 'jwt';

/**
 * Cleanup Passport strategies registered by test apps.
 * Call this in afterEach to prevent strategy accumulation across tests.
 */
export function cleanupPassportStrategies(): void {
  try {
    passport.unuse(TEST_JWT_STRATEGY_NAME);
  } catch {
    // Strategy not found - that's fine
  }
}

export interface TestAppOptions {
  user?: MockUser;
  mockPool?: MockPool;
  dbType?: 'postgres' | 'mysql';
}

export interface TestAppResult {
  app: Express;
  mockPool: MockPool;
  mockUserDbService: MockUserDbService;
  mockUser: MockUser;
}

/**
 * Create a test application with routes mounted
 *
 * This creates a fresh Express app with mocked database connections,
 * authentication, and real routes for integration testing.
 */
export async function createFullTestApp(options: TestAppOptions = {}): Promise<TestAppResult> {
  const {
    user = createMockUser(),
    mockPool = createMockPool(),
    dbType = 'postgres',
  } = options;

  const app = express();

  // Middleware (match production order)
  app.use(cors());
  app.use(express.json({ limit: '10mb' }));
  app.use(express.urlencoded({ extended: true }));
  app.disable('x-powered-by');

  // Setup mock user database service
  const mockUserDbService = createMockUserDbService(user, { dbType });
  app.locals.userDbService = mockUserDbService;
  app.locals.pgPool = mockPool;

  // Setup Passport JWT authentication
  passport.use(new JwtStrategy({
    jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
    secretOrKey: getTestJwtSecret(),
  }, (payload, done) => {
    done(null, payload);
  }));
  app.use(passport.initialize());

  // Setup global MongoDB mocks
  setupGlobalMongoMocks();

  // Health check endpoint
  app.get('/health', (req: Request, res: Response) => {
    res.json({
      status: 'ok',
      service: 'aden-hive',
      timestamp: new Date().toISOString(),
      userDbType: dbType,
    });
  });

  // 404 handler
  app.use((req: Request, res: Response) => {
    res.status(404).json({
      error: 'not_found',
      message: `Route ${req.method} ${req.path} not found`,
    });
  });

  // Error handler
  app.use((err: Error & { status?: number }, req: Request, res: Response, _next: NextFunction) => {
    const status = err.status || 500;
    res.status(status).json({
      error: err.name || 'Error',
      message: err.message || 'An unexpected error occurred',
    });
  });

  return {
    app,
    mockPool,
    mockUserDbService,
    mockUser: user,
  };
}
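The 404 and error handlers in `createFullTestApp` reduce to a pure mapping from an error to a status code and JSON body, which can be exercised without Express. A sketch of that mapping (`toErrorResponse` is a hypothetical helper, not part of the file above):

```typescript
// Pure sketch of the error-handler logic in createFullTestApp: an Error with
// an optional numeric status maps to { status, body }; missing fields fall
// back to 500 / 'Error' / a generic message, matching the handler above.
function toErrorResponse(err: Error & { status?: number }) {
  return {
    status: err.status || 500,
    body: {
      error: err.name || 'Error',
      message: err.message || 'An unexpected error occurred',
    },
  };
}

const notFound = Object.assign(new Error('missing'), { status: 404 });
console.log(toErrorResponse(notFound).status); // prints 404
```

Keeping the mapping pure like this is what makes the handler easy to cover in unit tests, independent of the middleware chain.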
@@ -0,0 +1,10 @@
{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "rootDir": ".",
    "types": ["jest", "node"],
    "typeRoots": ["./node_modules/@types", "../node_modules/@types", "./src/types"]
  },
  "include": ["src/**/*", "tests/**/*"],
  "exclude": ["node_modules", "dist"]
}
@@ -0,0 +1,9 @@
node_modules
dist
.env
.env.*
*.log
.DS_Store
.git
.vscode
.idea
@@ -1,5 +1,6 @@
# Development Dockerfile with hot reload
FROM node:20-alpine
# The 'production' alias allows this to work with docker-compose.yml target
FROM node:20-alpine AS production

WORKDIR /app

@@ -50,19 +50,24 @@
    "zustand": "^5.0.10"
  },
  "devDependencies": {
    "@testing-library/jest-dom": "^6.4.2",
    "@testing-library/react": "^14.2.1",
    "@testing-library/user-event": "^14.5.2",
    "@types/react": "^18.2.43",
    "@types/react-dom": "^18.2.17",
    "@typescript-eslint/eslint-plugin": "^6.14.0",
    "@typescript-eslint/parser": "^6.14.0",
    "@vitejs/plugin-react": "^4.2.1",
    "@types/node": "^20.10.0",
    "autoprefixer": "^10.4.23",
    "eslint": "^8.55.0",
    "eslint-plugin-react-hooks": "^4.6.0",
    "eslint-plugin-react-refresh": "^0.4.5",
    "jsdom": "^24.0.0",
    "postcss": "^8.5.6",
    "tailwindcss": "^3.4.19",
    "typescript": "^5.3.0",
    "vite": "^5.0.8",
    "vitest": "^1.1.0"
  }
}
@@ -8,6 +8,7 @@ import {
  ResponsiveContainer,
  ReferenceLine,
} from 'recharts'
import { ReactNode } from 'react'
import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card'
import type { CostTrendData } from '@/types/agentControl'

@@ -35,8 +36,9 @@ export function CostTrendChart({
    maximumFractionDigits: 0,
  }).format(value)

  const formatDate = (dateStr: string) => {
    const date = new Date(dateStr)
  const formatDate = (label: ReactNode) => {
    if (typeof label !== 'string') return String(label || '')
    const date = new Date(label)
    return date.toLocaleDateString(undefined, { month: 'short', day: 'numeric' })
  }

@@ -8,6 +8,7 @@ import {
  ResponsiveContainer,
  Legend,
} from 'recharts'
import { ReactNode } from 'react'
import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card'
import type { TokenUsageData } from '@/types/agentControl'

@@ -31,8 +32,9 @@ export function TokenUsageChart({
    return value.toString()
  }

  const formatDate = (dateStr: string) => {
    const date = new Date(dateStr)
  const formatDate = (label: ReactNode) => {
    if (typeof label !== 'string') return String(label || '')
    const date = new Date(label)
    return date.toLocaleDateString(undefined, { month: 'short', day: 'numeric' })
  }

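Both charts apply the same fix: recharts may hand the tick formatter a non-string `ReactNode`, so the label is narrowed before date parsing. The guard behaves like this outside React:

```typescript
// Standalone version of the widened formatDate guard used in both charts:
// non-string labels are stringified (null/undefined collapse to ''),
// and strings are parsed as dates.
function formatDate(label: unknown): string {
  if (typeof label !== 'string') return String(label || '');
  const date = new Date(label);
  return date.toLocaleDateString(undefined, { month: 'short', day: 'numeric' });
}

console.log(formatDate(42)); // prints 42
console.log(formatDate('2024-01-15')); // prints a locale-dependent short date
```

The locale-dependent branch is deliberately not pinned down here; only the narrowing behavior is fixed.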
@@ -0,0 +1,31 @@
import { render, screen } from '@testing-library/react'
import { describe, it, expect } from 'vitest'
import { LiveIndicator } from './LiveIndicator'

describe('LiveIndicator', () => {
  it('renders "Live" text when isLive is true (default)', () => {
    render(<LiveIndicator />)

    expect(screen.getByText('Live')).toBeInTheDocument()
  })

  it('renders the pulsing indicator dot', () => {
    const { container } = render(<LiveIndicator />)

    const dot = container.querySelector('.bg-green-500')
    expect(dot).toBeInTheDocument()
  })

  it('returns null when isLive is false', () => {
    const { container } = render(<LiveIndicator isLive={false} />)

    expect(container.firstChild).toBeNull()
  })

  it('applies custom className', () => {
    const { container } = render(<LiveIndicator className="custom-class" />)

    const wrapper = container.firstChild as HTMLElement
    expect(wrapper).toHaveClass('custom-class')
  })
})
@@ -2,6 +2,9 @@ import { useState, useRef, useCallback, useEffect } from 'react'
import type { AgentStatus } from '@/types/agentControl'

const HIVE_URL = import.meta.env.VITE_API_URL || 'http://localhost:4000'
// Delay before attempting to reconnect after SSE stream disconnection or error.
// 5 seconds provides a reasonable balance between responsiveness and avoiding
// rapid retry loops.
const RECONNECT_DELAY_MS = 5000

interface UseAgentStatusOptions {
@@ -66,6 +66,14 @@ export function getAnalyticsNarrow(): Promise<RawJsonData> {
// Logs Endpoints
// =============================================================================

// Default pagination limits for log queries.

// Higher limit for raw logs - balances data completeness with response size.
const DEFAULT_LOGS_LIMIT = 500

// Lower limit for grouped results - typically fewer unique groups needed.
const DEFAULT_AGGREGATED_LOGS_LIMIT = 100

/**
 * Get raw logs for a time range
 * @param start - ISO date string
@@ -77,7 +85,7 @@ export function getAnalyticsNarrow(): Promise<RawJsonData> {
export function getLogs(
  start: string,
  end: string,
  limit = 500,
  limit = DEFAULT_LOGS_LIMIT,
  offset = 0,
  filters?: { type?: string; success?: string }
): Promise<RawJsonData> {
@@ -103,7 +111,7 @@ export function getLogsAggregated(
  start: string,
  end: string,
  groupBy: string,
  limit = 100
  limit = DEFAULT_AGGREGATED_LOGS_LIMIT
): Promise<RawJsonData> {
  return hiveClient.get(
    `/tsdb/logs?start=${encodeURIComponent(start)}&end=${encodeURIComponent(end)}&group_by=${groupBy}&limit=${limit}`
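The URL built inside `getLogsAggregated` can be factored out and checked in isolation. `buildLogsQuery` below is a hypothetical helper that mirrors the template string above:

```typescript
// Hypothetical helper mirroring the /tsdb/logs URL template above: start and
// end are percent-encoded because ISO timestamps contain ':' characters;
// groupBy and limit are interpolated as-is.
function buildLogsQuery(start: string, end: string, groupBy: string, limit: number): string {
  return `/tsdb/logs?start=${encodeURIComponent(start)}&end=${encodeURIComponent(end)}&group_by=${groupBy}&limit=${limit}`;
}

console.log(buildLogsQuery('2024-01-01T00:00:00Z', '2024-01-02T00:00:00Z', 'type', 100));
// prints /tsdb/logs?start=2024-01-01T00%3A00%3A00Z&end=2024-01-02T00%3A00%3A00Z&group_by=type&limit=100
```

Without the encoding, the ':' characters in the timestamps would survive into the query string; encoding them keeps server-side parsing unambiguous.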
@@ -1,6 +1,13 @@
// In the honeycomb monorepo, hive handles all endpoints (auth, user, IAM, and agent control)
/**
 * API Client Service
 *
 * Generic HTTP client for all hive endpoints (auth, user, IAM, and agent control).
 * Handles authentication tokens from localStorage and standard CRUD operations.
 */

const API_URL = import.meta.env.VITE_API_URL || ''

export class ApiError extends Error {
  constructor(
    public status: number,
@@ -37,6 +44,13 @@ class ApiClient {
    return headers
  }

  /**
   * Performs a GET request to the specified endpoint.
   * @template T - Expected response type
   * @param endpoint - API endpoint path (e.g., '/user/profile')
   * @returns Promise resolving to the parsed JSON response
   * @throws {ApiError} When the response status is not ok (non-2xx)
   */
  async get<T>(endpoint: string): Promise<T> {
    const response = await fetch(`${this.baseUrl}${endpoint}`, {
      method: 'GET',
@@ -50,6 +64,14 @@ class ApiClient {
    return response.json()
  }

  /**
   * Performs a POST request to the specified endpoint.
   * @template T - Expected response type
   * @param endpoint - API endpoint path
   * @param data - Optional request body (will be JSON stringified)
   * @returns Promise resolving to the parsed JSON response
   * @throws {ApiError} When the response status is not ok (non-2xx)
   */
  async post<T>(endpoint: string, data?: unknown): Promise<T> {
    const response = await fetch(`${this.baseUrl}${endpoint}`, {
      method: 'POST',
@@ -64,6 +86,14 @@ class ApiClient {
    return response.json()
  }

  /**
   * Performs a PUT request to the specified endpoint.
   * @template T - Expected response type
   * @param endpoint - API endpoint path
   * @param data - Request body (will be JSON stringified)
   * @returns Promise resolving to the parsed JSON response
   * @throws {ApiError} When the response status is not ok (non-2xx)
   */
  async put<T>(endpoint: string, data: unknown): Promise<T> {
    const response = await fetch(`${this.baseUrl}${endpoint}`, {
      method: 'PUT',
@@ -78,6 +108,13 @@ class ApiClient {
    return response.json()
  }

  /**
   * Performs a DELETE request to the specified endpoint.
   * @template T - Expected response type
   * @param endpoint - API endpoint path
   * @returns Promise resolving to the parsed JSON response
   * @throws {ApiError} When the response status is not ok (non-2xx)
   */
  async delete<T>(endpoint: string): Promise<T> {
    const response = await fetch(`${this.baseUrl}${endpoint}`, {
      method: 'DELETE',
@@ -92,9 +129,11 @@ class ApiClient {
  }
}

// Main API client for all hive endpoints
/** Main API client instance for all hive endpoints. */
export const apiClient = new ApiClient(API_URL)

// Aliases for compatibility with existing code
/** @deprecated Use apiClient instead. Alias for backward compatibility. */
export const serverClient = apiClient

/** @deprecated Use apiClient instead. Alias for backward compatibility. */
export const hiveClient = apiClient
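The `@throws {ApiError}` contract documented above lets callers branch on the HTTP status rather than parsing error strings. A minimal sketch of that pattern, using stand-ins (`fetchJson`, `fakeFetch`) rather than the real client:

```typescript
// Sketch of the ApiError contract documented above: a non-2xx response is
// surfaced as a typed error carrying the HTTP status. fetchJson and fakeFetch
// are illustrative stand-ins, not part of the client.
class ApiError extends Error {
  constructor(public status: number, message: string) {
    super(message);
    this.name = 'ApiError';
  }
}

type MinimalResponse = { ok: boolean; status: number; json(): Promise<unknown> };

async function fetchJson(
  url: string,
  fetchImpl: (u: string) => Promise<MinimalResponse>
): Promise<unknown> {
  const response = await fetchImpl(url);
  if (!response.ok) throw new ApiError(response.status, `Request to ${url} failed`);
  return response.json();
}

// A fake fetch that always 404s, to show the error path.
const fakeFetch = async (_u: string): Promise<MinimalResponse> => ({
  ok: false,
  status: 404,
  json: async () => ({}),
});

fetchJson('/user/profile', fakeFetch).catch((err) => {
  if (err instanceof ApiError) console.log(err.status); // prints 404
});
```

Injecting `fetchImpl` is also what makes this shape easy to unit-test without network access.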
@@ -1,3 +1,7 @@
/**
 * Authentication API Service
 */

import { serverClient } from './api'
import type {
  LoginCredentials,
@@ -7,11 +11,50 @@ import type {
  RegisterResponse,
} from '@/types/auth'

/**
 * Authenticates a user with email and password.
 *
 * @param credentials - User login credentials
 * @returns Promise resolving to login response with token and mustResetPassword
 * @throws {ApiError} When credentials are invalid (401) or other server errors
 *
 * @example
 * submitLogin({
 *   email: "john.doe@example.com",
 *   password: "StrongPass123",
 *   grantToken: "optional-grant-token"
 * })
 */
export const submitLogin = (credentials: LoginCredentials): Promise<LoginResponse> =>
  serverClient.post<LoginResponse>('/user/login-v2', credentials)

/**
 * Retrieves organization information by its URL path.
 * Used during login to display organization branding and validate org existence.
 * @param orgPath - Organization's URL path identifier (e.g., 'acme-corp')
 * @returns Promise resolving to organization info
 * @throws {ApiError} When organization is not found (404)
 *
 * @example
 * getOrgInfoByPath('acme-corp')
 */
export const getOrgInfoByPath = (orgPath: string): Promise<{ data: OrgInfo }> =>
  serverClient.get<{ data: OrgInfo }>(`/iam/org/info/${orgPath}`)

/**
 * Registers a new user account.
 *
 * @param credentials - User registration payload
 * @returns Promise resolving to registration response
 * @throws {ApiError} When email is already taken (409) or validation fails (400)
 *
 * @example
 * submitRegister({
 *   email: "john.doe@example.com",
 *   password: "StrongPass123",
 *   firstname: "John",
 *   lastname: "Doe"
 * })
 */
export const submitRegister = (credentials: RegisterCredentials): Promise<RegisterResponse> =>
  serverClient.post<RegisterResponse>('/user/register', credentials)
@@ -1,3 +1,7 @@
/**
 * Organization API Service
 */

import { serverClient } from './api'
import type {
  Organization,
@@ -6,20 +10,64 @@ import type {
  UpdateOrgNamePayload,
} from '@/types/user'

// Organization Management
/** Organization Management */

/**
 * Retrieves the current team/organization context for the authenticated user.
 * @returns Promise resolving to current team details
 * @throws {ApiError} When not authenticated (401)
 *
 * @example
 * getCurrentTeam()
 */
export const getCurrentTeam = () =>
  serverClient.get<OrganizationResponse>('/iam/get-current-team')

/**
 * Updates the organization's logo image.
 * @param payload - update payload containing orgId and new logo image (base64 string)
 * @returns Promise resolving to success message
 * @throws {ApiError} When image format is invalid (400) or no admin access (403)
 *
 * @example
 * setOrganizationLogo({ orgId: 1, orgLogo: 'base64-encoded-image' })
 */
export const setOrganizationLogo = (payload: UpdateOrgLogoPayload) =>
  serverClient.post<{ message: string }>('/iam/set-organization-logo', payload)

/**
 * Renames the organization.
 * @param payload - update payload containing orgId and new name
 * @returns Promise resolving to success message
 * @throws {ApiError} When name is invalid (400) or no admin access (403)
 *
 * @example
 * updateOrgName({ name: 'New Organization Name', orgId: 1 })
 */
export const updateOrgName = (payload: UpdateOrgNamePayload) =>
  serverClient.post<{ message: string }>('/iam/org/rename', payload)

// Fetch all organizations user belongs to
/**
 * Retrieves all organizations the current user belongs to.
 * Used to populate the organization switcher.
 * @returns Promise resolving to array of organization details including orgName, orgId, teamId, and teamName
 * @throws {ApiError} When not authenticated (401)
 *
 * @example
 * await getOrganizations()
 */
export const getOrganizations = () =>
  serverClient.get<Organization[]>('/iam/get-user-organizations')

// Switch to a different organization
/**
 * Switches the user's current team/organization context.
 * Returns a new auth token scoped to the selected team.
 * @param payload - Object containing the teamId to switch to
 * @returns Promise resolving to new authentication token for the selected team
 * @throws {ApiError} When team not found (404) or no access (403)
 *
 * @example
 * await setCurrentTeam({ teamId: 1 })
 */
export const setCurrentTeam = (payload: { teamId: number }) =>
  serverClient.post<{ data: { token: string } }>('/iam/set-current-team', payload)
@@ -1,3 +1,7 @@
/**
 * User API Service
 */

import { serverClient } from './api'
import type {
  UserProfileResponse,
@@ -9,26 +13,86 @@ import type {
  TeamRoleResponse,
} from '@/types/user'

// Profile Management
/** Default TTL for API tokens: 5 years in seconds (5 * 365 * 24 * 60 * 60). */
const DEFAULT_API_TOKEN_TTL_SECONDS = 157680000

/** Profile Management */

/**
 * Retrieves the current user's profile information.
 * @returns Promise resolving to user profile data including firstname, lastname, email and other user details.
 * @throws {ApiError} When not authenticated (401)
 *
 * @example
 * getUserProfile()
 */
export const getUserProfile = () =>
  serverClient.get<UserProfileResponse>('/user/profile')

/**
 * Updates the current user's profile information.
 * @param data - Profile fields to update (firstname, lastname, email, etc.)
 * @returns Promise resolving to success message
 * @throws {ApiError} When validation fails (400) or not authenticated (401)
 *
 * @example
 * updateUserProfile({ firstname: 'John', lastname: 'Doe', email: 'john.doe@example.com' })
 */
export const updateUserProfile = (data: UpdateProfilePayload) =>
  serverClient.put<{ message: string }>('/user/profile', data)

/**
 * Updates the current user's avatar image.
 * @param data - Avatar data including base64-encoded image
 * @returns Promise resolving to the new avatar URL
 * @throws {ApiError} When image format is invalid (400)
 *
 * @example
 * updateUserAvatar({ userAvatar: 'base64-encoded-image' })
 */
export const updateUserAvatar = (data: UpdateAvatarPayload) =>
  serverClient.post<{ data: string }>('/user/set-user-avatar', data)

// API Tokens (Developer Tools)
/** API Tokens (Developer Tools) */

/**
 * Retrieves all API tokens for the current user.
 * @returns Promise resolving to list of API tokens with metadata
 * @throws {ApiError} When not authenticated (401)
 *
 * @example
 * getAPITokens()
 */
export const getAPITokens = () =>
  serverClient.get<APITokensResponse>('/user/get-dev-tokens')

export const createAPIToken = (label: string, ttl: number = 157680000) =>
/**
 * Creates a new API token for developer tools access.
 * @param label - Display name for the token (e.g., 'Production API Key')
 * @param ttl - Time-to-live in seconds (default: 5 years)
 * @returns Promise resolving to the created token (only shown once)
 * @throws {ApiError} When not authenticated (401)
 *
 * @example
 * createAPIToken('Production API Key')
 */
export const createAPIToken = (label: string, ttl: number = DEFAULT_API_TOKEN_TTL_SECONDS) =>
  serverClient.post<APITokenResponse>('/user/generate-dev-token', {
    label,
    ttl, // Default: ~5 years
    ttl,
  } as CreateAPITokenPayload)

// Team/Role (needed for org initialization)
/** Team/Role */

/**
 * Retrieves the user's role information for a specific team.
 * Used during organization initialization to verify permissions.
 * @param teamId - Team ID to get role for
 * @returns Promise resolving to team role information
 * @throws {ApiError} When team not found (404) or no access (403)
 *
 * @example
 * getTeamRoleId('123')
 */
export const getTeamRoleId = (teamId: string) =>
  serverClient.get<TeamRoleResponse>(`/iam/team/get-team-role-by-id/${teamId}`)
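Extracting `DEFAULT_API_TOKEN_TTL_SECONDS` makes the magic number auditable: the comment's derivation can be reproduced directly.

```typescript
// Reproduces the TTL derivation from the constant's doc comment above:
// 5 years × 365 days × 24 hours × 60 minutes × 60 seconds.
const FIVE_YEARS_IN_SECONDS = 5 * 365 * 24 * 60 * 60;

console.log(FIVE_YEARS_IN_SECONDS); // prints 157680000
```

This matches the literal `157680000` that previously appeared inline in the `createAPIToken` signature (leap days are deliberately ignored by the 365-day convention).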
@@ -0,0 +1,23 @@
import '@testing-library/jest-dom'

// Mock window.matchMedia for components that use media queries
Object.defineProperty(window, 'matchMedia', {
  writable: true,
  value: (query: string) => ({
    matches: false,
    media: query,
    onchange: null,
    addListener: () => {},
    removeListener: () => {},
    addEventListener: () => {},
    removeEventListener: () => {},
    dispatchEvent: () => false,
  }),
})

// Mock ResizeObserver for components that use it
global.ResizeObserver = class ResizeObserver {
  observe() {}
  unobserve() {}
  disconnect() {}
}
@@ -2,6 +2,7 @@
  "compilerOptions": {
    "target": "ES2020",
    "lib": ["ES2020", "DOM", "DOM.Iterable"],
    "types": ["vitest/globals"],
    "module": "ESNext",
    "moduleResolution": "bundler",
    "jsx": "react-jsx",
@@ -23,5 +24,5 @@
    "isolatedModules": true,
    "resolveJsonModule": true
  },
  "include": ["src"]
  "include": ["src", "vitest.config.ts"]
}
@@ -0,0 +1,29 @@
import { defineConfig } from 'vitest/config'
import react from '@vitejs/plugin-react'
import path from 'path'

export default defineConfig({
  plugins: [react()],
  test: {
    globals: true,
    environment: 'jsdom',
    setupFiles: ['./src/test/setup.ts'],
    include: ['src/**/*.{test,spec}.{ts,tsx}'],
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json', 'html'],
      include: ['src/**/*.{ts,tsx}'],
      exclude: [
        'src/**/*.{test,spec}.{ts,tsx}',
        'src/test/**/*',
        'src/main.tsx',
        'src/vite-env.d.ts',
      ],
    },
  },
  resolve: {
    alias: {
      '@': path.resolve(__dirname, './src'),
    },
  },
})