Status: ✅ Production Ready - Enterprise deployment with encrypted vault security
This guide provides instructions on how to set up, configure, and use the MindX production-ready autonomous AI system. Choose between development setup for local testing or production deployment for enterprise use.
```bash
# One-command production deployment with security hardening
./deploy/production_deploy.sh
```
Features and documentation: see the Production Deployment Guide.
Continue with the detailed setup instructions below.
Before you begin, ensure you have the following installed:
```bash
ollama pull deepseek-coder:6.7b-instruct
ollama pull nous-hermes2:latest
# or other models like llama3, gemma, etc.
```
- Ensure the Ollama server is running (usually via `ollama serve`, or started automatically by the Ollama desktop application).
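Before starting MindX, you can check that the Ollama server is reachable. A minimal sketch, assuming the default endpoint `http://localhost:11434` (Ollama's standard port; adjust if you override `MINDX_LLM__OLLAMA__BASE_URL`):

```python
import urllib.request
import urllib.error

def ollama_is_running(base_url: str = "http://localhost:11434",
                      timeout: float = 2.0) -> bool:
    """Return True if an Ollama server responds at base_url."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            # A running Ollama server answers GET / with "Ollama is running"
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("Ollama reachable:", ollama_is_running())
```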
```bash
git clone <repository_url>
cd augmentic_mindx
```
If you received the code as a directory instead, navigate into the `augmentic_mindx` root directory.
```bash
python3 -m venv .venv
source .venv/bin/activate   # On Linux/macOS
# .venv\Scripts\activate    # On Windows PowerShell
```
The project uses `pyproject.toml` to define its dependencies. Install them using pip:
```bash
pip install -e ".[dev]"
```
- The `-e .` installs the mindx package in "editable" mode, meaning changes to the source code are immediately reflected without reinstalling.
- `[dev]` installs both runtime and development dependencies (such as pytest for testing and ruff for linting/formatting). The quotes around `.[dev]` prevent shells like zsh from misinterpreting the brackets. If you only need to run MindX, you can omit it: `pip install .`
Alternatively, if you have a requirements.txt file:
```bash
pip install -r requirements.txt
# (and potentially: pip install -r requirements-dev.txt for development tools)
```
For production deployment, sensitive data is automatically stored in the encrypted vault:
```python
# Migrate existing API keys to encrypted storage
from scripts.migrate_to_encrypted_vault import migrate_to_encrypted_vault
migrate_to_encrypted_vault()

# Store new API keys in the encrypted vault
from mindx_backend_service.encrypted_vault_manager import get_encrypted_vault_manager
vault = get_encrypted_vault_manager()
vault.store_api_key("openai", "your-openai-key")
vault.store_api_key("anthropic", "your-anthropic-key")
```
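The migration helper presumably scans existing plain-text secrets (for example, from a `.env` file) and re-stores them through the vault. Below is a hypothetical sketch of just the scanning step; the `_API_KEY` name filter and the helper name are assumptions for illustration, not the actual `migrate_to_encrypted_vault` implementation:

```python
def find_api_keys(dotenv_text: str) -> dict:
    """Collect KEY=VALUE pairs whose names look like API keys (illustrative filter)."""
    keys = {}
    for line in dotenv_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        name, _, value = line.partition("=")
        name = name.strip()
        if name.endswith("_API_KEY"):
            keys[name] = value.strip().strip('"')
    return keys

sample = 'MINDX_LOG_LEVEL="INFO"\nGEMINI_API_KEY="abc123"\n# comment\n'
print(find_api_keys(sample))  # {'GEMINI_API_KEY': 'abc123'}
```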
MindX uses a layered configuration system. Settings are loaded with the following precedence (later sources override earlier ones):
1. Initial code defaults.
2. `mindx_config.json` file (optional, in the project root or `data/config/`).
3. `.env` file(s) in the project root, then the current working directory.
4. Actual environment variables prefixed with `MINDX_`.
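The precedence above can be illustrated with a small merge sketch (the layer names here are illustrative, not the actual `Config` class in `mindx/utils/config.py`):

```python
def resolve_config(*layers: dict) -> dict:
    """Merge config layers; later layers override earlier ones."""
    merged: dict = {}
    for layer in layers:
        merged.update(layer)
    return merged

code_defaults = {"log_level": "WARNING", "llm_provider": "ollama"}
json_config   = {"log_level": "INFO"}          # mindx_config.json
dotenv_vars   = {"llm_provider": "gemini"}     # .env file
env_vars      = {"log_level": "DEBUG"}         # MINDX_-prefixed environment wins

print(resolve_config(code_defaults, json_config, dotenv_vars, env_vars))
# {'log_level': 'DEBUG', 'llm_provider': 'gemini'}
```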
Setting up `.env` (the most common configuration method):
In the project root (`augmentic_mindx/`), create a file named `.env` (if a `.env.example` file exists, you can copy it to `.env` as a template). Edit the `.env` file to specify your settings; this is where you should put secrets like API keys. Example `.env` content:
```bash
# --- General Configuration ---
MINDX_LOG_LEVEL="INFO" # Recommended: DEBUG, INFO, WARNING, ERROR, CRITICAL
MINDX_ENVIRONMENT="development" # Options: "development", "production"

# --- Production Security (for production deployment) ---
MINDX_SECURITY_ENCRYPTION_ENABLED="true"
MINDX_SECURITY_RATE_LIMITING_ENABLED="true"
MINDX_SECURITY_CORS_ORIGINS="https://agenticplace.pythai.net,https://your-domain.com"

# --- Default LLM Provider for the whole system ---
MINDX_LLM__DEFAULT_PROVIDER="ollama" # Options: "ollama", "gemini", "openai", "anthropic"

# --- Ollama Specific Configuration (if default_provider or any agent uses ollama) ---
MINDX_LLM__OLLAMA__DEFAULT_MODEL="nous-hermes2:latest" # General purpose model
MINDX_LLM__OLLAMA__DEFAULT_MODEL_FOR_CODING="deepseek-coder:6.7b-instruct" # Model good at code
# MINDX_LLM__OLLAMA__BASE_URL="http://localhost:11434" # Usually default

# --- Gemini Specific Configuration (if default_provider or any agent uses gemini) ---
# IMPORTANT: Get your API key from Google AI Studio
GEMINI_API_KEY="YOUR_GEMINI_API_KEY_HERE" # Non-prefixed; may be used by the SDK directly if not found via the MindX prefix
MINDX_LLM__GEMINI__API_KEY="YOUR_GEMINI_API_KEY_HERE" # MindX prefixed
MINDX_LLM__GEMINI__DEFAULT_MODEL="gemini-1.5-flash-latest"
MINDX_LLM__GEMINI__DEFAULT_MODEL_FOR_CODING="gemini-1.5-pro-latest" # Or flash if pro is too slow/costly

# --- SelfImprovementAgent (SIA) LLM Configuration ---
# Specifies the LLM the SIA uses for its internal analysis and code generation.
MINDX_SELF_IMPROVEMENT_AGENT__LLM__PROVIDER="ollama"
MINDX_SELF_IMPROVEMENT_AGENT__LLM__MODEL="deepseek-coder:6.7b-instruct"
MINDX_SELF_IMPROVEMENT_AGENT__DEFAULT_MAX_CYCLES="1" # How many improvement iterations SIA runs per call
MINDX_SELF_IMPROVEMENT_AGENT__CRITIQUE_THRESHOLD="0.6" # Min LLM critique score for a change to be accepted

# --- CoordinatorAgent LLM Configuration ---
# LLM used by Coordinator for tasks like system-wide analysis.
MINDX_COORDINATOR__LLM__PROVIDER="ollama"
MINDX_COORDINATOR__LLM__MODEL="nous-hermes2:latest"
MINDX_COORDINATOR__SIA_CLI_TIMEOUT_SECONDS="900.0" # 15-minute timeout for the SIA subprocess call

# --- Coordinator's Autonomous Improvement Loop ---
MINDX_COORDINATOR__AUTONOMOUS_IMPROVEMENT__ENABLED="false" # Set to "true" to enable autonomous mode
MINDX_COORDINATOR__AUTONOMOUS_IMPROVEMENT__INTERVAL_SECONDS="3600" # Check every 1 hour
MINDX_COORDINATOR__AUTONOMOUS_IMPROVEMENT__REQUIRE_HUMAN_APPROVAL_FOR_CRITICAL="true"
# Critical components list is in mindx/utils/config.py (_set_final_derived_defaults); can be overridden by JSON config
# Example: MINDX_COORDINATOR__AUTONOMOUS_IMPROVEMENT__CRITICAL_COMPONENTS='["mindx.learning.self_improve_agent", "mindx.orchestration.coordinator_agent"]' (JSON string list)

# --- Monitoring ---
MINDX_MONITORING__RESOURCE__ENABLED="true" # Set to true to activate the resource monitor
MINDX_MONITORING__RESOURCE__INTERVAL="15.0" # Check resources every 15 seconds
MINDX_MONITORING__PERFORMANCE__ENABLE_PERIODIC_SAVE="true" # Enable periodic save for perf metrics
MINDX_MONITORING__PERFORMANCE__PERIODIC_SAVE_INTERVAL_SECONDS="300" # Save perf metrics every 5 mins
```
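The double underscores in names like `MINDX_LLM__OLLAMA__DEFAULT_MODEL` suggest a nesting convention: strip the `MINDX_` prefix, then split on `__` to descend into the config tree. A sketch of how such names could map to a nested dict (this mirrors the naming pattern above and is an assumption, not the actual MindX loader):

```python
def env_to_nested(env: dict, prefix: str = "MINDX_") -> dict:
    """Turn MINDX_A__B__C=val entries into {'a': {'b': {'c': val}}}."""
    tree: dict = {}
    for name, value in env.items():
        if not name.startswith(prefix):
            continue  # ignore unrelated environment variables
        parts = name[len(prefix):].lower().split("__")
        node = tree
        for part in parts[:-1]:
            node = node.setdefault(part, {})  # create intermediate levels
        node[parts[-1]] = value
    return tree

env = {"MINDX_LLM__OLLAMA__DEFAULT_MODEL": "nous-hermes2:latest",
       "MINDX_LOG_LEVEL": "INFO",
       "PATH": "/usr/bin"}
print(env_to_nested(env))
# {'llm': {'ollama': {'default_model': 'nous-hermes2:latest'}}, 'log_level': 'INFO'}
```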
The primary way to interact with and run the MindX system is through the CoordinatorAgent's Command Line Interface (CLI).
Start the Coordinator Agent:
Navigate to the project root directory (`augmentic_mindx/`) in your terminal (with the virtual environment activated) and run:
```bash
python scripts/run_mindx_coordinator.py
```
You should see log messages indicating initialization, followed by the `MindX CLI >` prompt.
Interacting via the MindX CLI:
Type `help` at the prompt to see available commands; key commands include `query`. The SIA can also be invoked directly from the command line, for example to improve a utility function with additional context provided in a file:
```bash
python mindx/learning/self_improve_agent.py mindx/utils/some_util.py --context-file my_improvement_goal.txt --output-json
```