* fix: propagate contextvars to tool executor threads
run_in_executor does not propagate contextvars to worker threads,
causing execution context params like data_dir to be lost when MCP
tools are called. This made save_data, serve_file_to_user, and other
tools that depend on auto-injected data_dir fail with "Missing
required argument: data_dir".
Fix: use contextvars.copy_context().run() to carry the current context
into the thread pool worker.
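The fix described above can be sketched as follows. This is a minimal illustration, not the project's actual executor code: the `data_dir` context variable and the function names are stand-ins for the real execution-context machinery.

```python
import asyncio
import contextvars
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the framework's auto-injected execution context.
data_dir: contextvars.ContextVar[str] = contextvars.ContextVar("data_dir", default="")

def tool_body() -> str:
    # Runs in a worker thread; without copy_context() the var would be unset here.
    return data_dir.get()

async def run_tool() -> str:
    loop = asyncio.get_running_loop()
    ctx = contextvars.copy_context()  # snapshot the caller's context
    with ThreadPoolExecutor() as pool:
        # ctx.run(tool_body) executes the callable inside the copied context,
        # so contextvars set by the caller are visible in the worker thread.
        return await loop.run_in_executor(pool, ctx.run, tool_body)

data_dir.set("/tmp/run-data")
result = asyncio.run(run_tool())
print(result)  # the worker thread sees the caller's value
```

Passing `ctx.run` as the executor callable (with the real target as its argument) is the standard way to carry contextvars across the thread boundary, since `run_in_executor` itself does not copy the current context.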
* test: regression test for contextvars propagation in tool executor
Verifies that execution context (data_dir, etc.) set via
set_execution_context is visible inside tool executors that run
in thread pool workers via run_in_executor.
* fix: skip executable-bit check on Windows
Windows has no POSIX executable bits, so the stat-based check always
fails. Skip the check on Windows in the validator, and mark the two
related tests as POSIX-only. Unix CI still catches non-executable
scripts from Windows contributors.
Fixes #6893
* fix: keep resolved session paths inside the sessions directory
Validate that the resolved session path stays within the sessions directory
using Path.is_relative_to(). Prevents session_id values like
"../../something" from escaping the sandbox.
Also guard the caller in _write_run_event where get_session_path is
called outside the existing OSError try/except block.
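A minimal sketch of the containment check described above. The function name `get_session_path` comes from the commit; the signature and error handling here are illustrative assumptions:

```python
from pathlib import Path

def get_session_path(sessions_dir: Path, session_id: str) -> Path:
    # Resolve first so ".." segments are collapsed before the containment check.
    candidate = (sessions_dir / session_id).resolve()
    if not candidate.is_relative_to(sessions_dir.resolve()):
        raise ValueError(f"session_id escapes sessions directory: {session_id!r}")
    return candidate

sessions = Path("/var/lib/hive/sessions")
print(get_session_path(sessions, "abc123"))  # stays inside the sandbox
try:
    get_session_path(sessions, "../../etc/passwd")
except ValueError:
    print("rejected")  # traversal attempt refused
```

Resolving before calling `is_relative_to()` is the key step: a purely lexical check would miss `..` segments and symlinks.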
Fixes #1000
Co-authored-by: Sidhartha kumar <Alearner12@users.noreply.github.com>
* refactor: remove deprecated storage/backend.py (267 lines)
Delete the fully deprecated FileStorage class and inline its 5 still-active
methods (_validate_key, _load_run_sync, _load_summary_sync, _delete_run_sync,
_list_all_runs_sync) directly into ConcurrentStorage.
Changes:
- Delete core/framework/storage/backend.py (267 lines of no-op/deprecated code)
- Inline active read methods into ConcurrentStorage (no new FileStorage dep)
- Remove deprecated index operations (get_runs_by_goal, get_runs_by_status,
get_runs_by_node, list_all_goals) and their associated locking
- Update __init__.py to export ConcurrentStorage instead of FileStorage
- Update runtime/core.py to use ConcurrentStorage directly
- Fix Runtime.end_run() to call save_run_sync() (sync wrapper) instead of
the async save_run(), which was silently dropping the coroutine
- Update test_path_traversal_fix.py to test ConcurrentStorage._validate_key()
- Clean up test_storage.py — remove all FileStorage test classes, un-skip
ConcurrentStorage tests now that it's self-contained
- Remove stale FileStorage references from testing/test_storage.py docstring,
testing/debug_tool.py docstring, and test_runtime.py skip reasons
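The dropped-coroutine bug fixed in `Runtime.end_run()` can be illustrated with a toy version of the storage class. The class and method names follow the commit; the bodies are a simplified sketch, not the real implementation:

```python
import asyncio

class ConcurrentStorage:
    """Illustrative stand-in for the real storage backend."""

    def __init__(self) -> None:
        self.saved: list[dict] = []

    async def save_run(self, run: dict) -> None:
        # Async path, for callers already inside the event loop.
        self.saved.append(run)

    def save_run_sync(self, run: dict) -> None:
        # Sync wrapper for callers outside any event loop, e.g. Runtime.end_run().
        asyncio.run(self.save_run(run))

storage = ConcurrentStorage()
# storage.save_run({"id": 1})   # BUG: returns a coroutine that is never awaited,
#                               # so nothing is persisted (silently dropped)
storage.save_run_sync({"id": 1})  # correct: the write actually happens
print(storage.saved)
```

Calling an async method without awaiting it produces a coroutine object and a `RuntimeWarning` at best; routing sync callers through a sync wrapper makes the write actually execute.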
All 44 tests pass, ruff check and ruff format clean.
Fixes #6797
* fix(core): address CodeRabbitAI PR review feedback
- Fix critical no-op in ConcurrentStorage._save_run_sync by implementing atomic persistence to
  runs/{run_id}.json.
- Update test_path_traversal_fix.py to test ConcurrentStorage directly and use real file paths for end-to-end validation.
- Unskip test_run_saved_on_end and assert actual run file persistence.
- Fix debug_tool.py to use load_run_sync() instead of the async load_run().
* fix(core): address round 2 of CodeRabbitAI reviews
- Add _validate_key to _save_run_sync and _load_summary_sync to enforce path traversal protections on the lowest level APIs.
- Invalidate summary cache and refresh run cache in save_run_sync() to match the async save_run() cache coherence behavior.
- Add tests for load_summary and save_run_sync path traversal rejection.
* feat(quickstart): add Local (Ollama) LLM provider option
- Detect Ollama via 'ollama list' in quickstart.sh and quickstart.ps1
- Add 'Local (Ollama)' menu option with interactive model picker
- Save provider=ollama, model=<selected> to ~/.hive/configuration.json
- Omit api_key_env_var for Ollama (no API key required)
Refs #5154, #5231
* feat: add local Ollama support and resolve native tool calling
This integrates Ollama as a first-class local provider choice during quickstart and removes several configuration barriers that prevented local models from reliably executing the framework's agent graphs.
* **Quickstart Integration**: Added `Local (Ollama)` to the provider menu in both quickstart.sh and quickstart.ps1. When selected, it automatically queries `ollama list` and allows the user to pick an installed model without prompting for an API key.
* **Routing & Configuration**: Automatically sets `"api_base": "http://localhost:11434"` so LiteLLM routes requests to the local daemon, and raises the default max_tokens allocation in config.py to `32768`.
* **Native Tool Calling**: Normalized Ollama model names to always use the `ollama_chat` provider prefix inside litellm.py and registered them as `supports_function_calling: True`. This forces native structured function calling and fixes the infinite loop caused by JSON-mode text fallbacks.
* **Context Truncation Fix**: Updated config.py to explicitly pass `"num_ctx": 16384` to Ollama. This prevents the local daemon from silently truncating the Queen agent's ~9,500 token system prompt (Ollama defaults to 2048 `num_ctx`).
* **UX Warnings**: Added terminal notices advising users to select high-parameter models (e.g., `qwen2.5:72b+`) to ensure sufficient reasoning capability.
Resolves #6027
Resolves #6028
* test: add unit tests for Ollama helper functions
Cover _is_ollama_model(), _ensure_ollama_chat_prefix(), and num_ctx
injection in get_llm_extra_kwargs() as requested in PR review.
Fix existing test_init_ollama_no_key_needed assertion to expect the
normalised ollama_chat/ prefix.
Made-with: Cursor
* chore: resolve merge conflict
* fix(ollama): address PR review comments and normalize provider config
* fix(ollama): align quickstart defaults and add tool_choice comment
* fix(ollama): enforce OLLAMA_DETECTED logic and resolve quickstart script syntax errors
* fix(ollama): align quickstart logic and cleanup test imports
* refactor: remove dead ast.Index handling
Python 3.9+ no longer wraps subscript slices in ast.Index, and
Python 3.12 removed ast.Index entirely. The project requires
Python >=3.11, so this is dead code.
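The behavior change that makes this code dead can be demonstrated directly with the `ast` module on any Python 3.9+ interpreter:

```python
import ast

# On Python 3.9+, the slice of a Subscript is the expression itself,
# no longer wrapped in ast.Index (which 3.12 removed entirely).
node = ast.parse("x[1:2]", mode="eval").body
print(type(node).__name__)        # Subscript
print(type(node.slice).__name__)  # Slice, not Index

node2 = ast.parse("x[0]", mode="eval").body
print(type(node2.slice).__name__)  # Constant, not Index
```

Since the project requires Python >=3.11, any `isinstance(node.slice, ast.Index)` branch can never match and is safe to delete.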