MindXAgent now includes persistent Ollama chat capabilities with dynamic model discovery and adaptation. This enables mindXagent to:

- chat with locally hosted Ollama models
- discover available models dynamically and select the best one for a task
- keep models loaded in memory between requests
- persist conversation history across sessions

The feature is implemented in OllamaChatManager (agents/core/ollama_chat_manager.py) and integrated into mindXagent (agents/core/mindXagent.py), which exposes the following methods:

- chat_with_ollama() - Main chat interface
- get_available_ollama_models() - List available models
- select_ollama_model() - Select best model for task
- get_ollama_conversation_history() - Get conversation history
- clear_ollama_conversation() - Clear conversation history
Models are kept loaded in memory by passing keep_alive="10m" with each request, and conversation history is persisted to data/ollama_chat_history.json.

Basic usage:

```python
# Get mindXagent instance
mindx_agent = await MindXAgent.get_instance()

# Chat with default model
result = await mindx_agent.chat_with_ollama(
    message="Analyze the current system state and suggest improvements",
    temperature=0.7,
    max_tokens=2000
)

if result.get("success"):
    print(f"Response: {result['content']}")
    print(f"Model: {result['model']}")
    print(f"Latency: {result['latency']:.2f}s")
```
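A call like the one above ultimately reaches Ollama's REST POST /api/chat endpoint. Below is a minimal sketch of how such a request body could be assembled; the field names follow Ollama's public API, but the helper function itself is illustrative, not mindX's actual code:

```python
def build_chat_payload(model, messages, temperature=0.7,
                       max_tokens=None, keep_alive="10m"):
    """Build a request body for Ollama's POST /api/chat endpoint (sketch)."""
    options = {"temperature": temperature}
    if max_tokens is not None:
        options["num_predict"] = max_tokens  # Ollama's max-tokens option
    return {
        "model": model,
        "messages": messages,
        "stream": False,           # ask for one complete response
        "options": options,
        "keep_alive": keep_alive,  # keep the model loaded after the call
    }
```

Sending this dict as JSON to `{base_url}/api/chat` yields the assistant reply in the response's `message.content` field.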
Chat with a specific model:

```python
result = await mindx_agent.chat_with_ollama(
    message="Explain quantum computing",
    model="mistral-nemo:latest",
    temperature=0.5
)
```
Chat with a system prompt and a named conversation:

```python
result = await mindx_agent.chat_with_ollama(
    message="What should I do next?",
    system_prompt="You are a helpful AI assistant specialized in software development.",
    conversation_id="dev_assistant"
)
```
List the available models:

```python
models = await mindx_agent.get_available_ollama_models()
for model in models:
    print(f"Model: {model['name']}")
    print(f"  Size: {model.get('size', 'unknown')}")
    print(f"  Parameters: {model.get('details', {}).get('parameter_size', 'unknown')}")
```
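The model metadata shown above mirrors what Ollama's GET /api/tags endpoint returns. A small sketch of flattening that response into simple summaries; the response shape follows Ollama's public API, while `summarize_models` is a hypothetical helper:

```python
def summarize_models(tags_response):
    """Flatten an Ollama GET /api/tags response into simple summaries."""
    summaries = []
    for m in tags_response.get("models", []):
        summaries.append({
            "name": m.get("name", "unknown"),
            "size": m.get("size", "unknown"),  # size in bytes
            "parameter_size": m.get("details", {}).get("parameter_size", "unknown"),
        })
    return summaries
```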
```python
# Select best model for reasoning
model = await mindx_agent.select_ollama_model(
    task_type="reasoning",
    preferred_models=["mistral-nemo:latest", "deepseek-r1:latest"]
)
```
Inspect or clear a conversation's history:

```python
history = mindx_agent.get_ollama_conversation_history("dev_assistant")
for msg in history:
    print(f"{msg['role']}: {msg['content'][:100]}...")

mindx_agent.clear_ollama_conversation("dev_assistant")
```
The Ollama server URL can be set via an environment variable:

```bash
# Ollama server URL
export MINDX_LLM__OLLAMA__BASE_URL="http://10.0.0.155:18080"
```

or in the JSON configuration:

```json
{
  "llm": {
    "ollama": {
      "base_url": "http://10.0.0.155:18080"
    }
  }
}
```
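The name MINDX_LLM__OLLAMA__BASE_URL suggests a prefix-plus-double-underscore nesting convention for environment overrides. A sketch of applying such overrides onto a nested config dict; the convention is inferred from the example above, and `apply_env_overrides` is illustrative, not mindX's actual loader:

```python
import os

def apply_env_overrides(config, environ=None, prefix="MINDX_"):
    """Overlay PREFIX_A__B__C=value env vars onto a nested config dict."""
    environ = environ if environ is not None else os.environ
    for key, value in environ.items():
        if not key.startswith(prefix):
            continue
        # MINDX_LLM__OLLAMA__BASE_URL -> ["llm", "ollama", "base_url"]
        path = [part.lower() for part in key[len(prefix):].split("__")]
        node = config
        for part in path[:-1]:
            node = node.setdefault(part, {})
        node[path[-1]] = value
    return config
```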
Configuration options:

- base_url: Ollama server URL (default: http://localhost:11434)
- model_discovery_interval: Seconds between periodic model-list refreshes (default: 86400, i.e. once per day). Use a manual discover_models(force=True) call, or GET /mindxagent/ollama/status (which can trigger a refresh), to update sooner.
- keep_alive: How long to keep models loaded (default: "10m")
- conversation_history_path: Path to save conversation history

Models are discovered automatically at startup, every model_discovery_interval seconds, and whenever discover_models(force=True) is called. When new models are discovered, the model list is updated and each model's capabilities are tracked.
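The periodic refresh described above can be sketched as a simple asyncio loop; discover_models and the interval come from the text, while the loop function itself is an illustrative assumption:

```python
import asyncio

async def discovery_loop(discover, interval_s=86400, max_cycles=None):
    """Call discover() immediately, then again every interval_s seconds.

    max_cycles limits the number of iterations (useful for testing);
    None means run until cancelled.
    """
    cycles = 0
    while True:
        await discover()
        cycles += 1
        if max_cycles is not None and cycles >= max_cycles:
            return
        await asyncio.sleep(interval_s)
```

In practice such a loop would be started as a background task (e.g. `asyncio.create_task(...)`) during agent initialization.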
Conversation IDs default to {agent_id}_default. History is persisted to data/ollama_chat_history.json in the following format:

```json
{
  "sessions": {
    "mindx_meta_agent_default": [
      {"role": "user", "content": "Hello"},
      {"role": "assistant", "content": "Hi! How can I help?"}
    ]
  },
  "last_saved": "2026-01-17T22:00:00"
}
```
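Reading and writing that file is straightforward; here is a sketch under the assumption that the format above is exactly what is stored (`save_history` and `load_history` are hypothetical helpers, not the project's API):

```python
import json
from datetime import datetime
from pathlib import Path

def save_history(path, sessions):
    """Write conversation sessions to disk in the documented format."""
    payload = {
        "sessions": sessions,
        "last_saved": datetime.now().isoformat(timespec="seconds"),
    }
    Path(path).write_text(json.dumps(payload, indent=2))

def load_history(path):
    """Load sessions back, returning {} if the file does not exist yet."""
    p = Path(path)
    if not p.exists():
        return {}
    return json.loads(p.read_text()).get("sessions", {})
```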
select_ollama_model() supports the following task types:

- chat: General conversation (default)
- reasoning: Complex reasoning tasks
- coding: Code generation and analysis
- multimodal: Vision and image tasks

```python
# For a reasoning task (prefers nemo, reasoning, thinking, deepseek models)
model = await mindx_agent.select_ollama_model("reasoning")

# For a coding task (prefers code, codellama, deepseek-coder models)
model = await mindx_agent.select_ollama_model("coding")
```
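One plausible way the preference lists above could drive selection is a keyword score over model names. The keyword sets below come from the text, but the scoring function is an assumption, not mindX's actual implementation:

```python
# Keyword sets taken from the documented preferences per task type
TASK_KEYWORDS = {
    "reasoning": ["nemo", "reasoning", "thinking", "deepseek"],
    "coding": ["code", "codellama", "deepseek-coder"],
}

def rank_models(available, task_type, preferred=None):
    """Return model names sorted best-first.

    Explicitly preferred models outrank keyword matches; among the rest,
    more task-keyword hits in the name means a higher rank.
    """
    keywords = TASK_KEYWORDS.get(task_type, [])
    preferred = preferred or []

    def score(name):
        pref = len(preferred) - preferred.index(name) if name in preferred else 0
        hits = sum(1 for kw in keywords if kw in name.lower())
        return (pref, hits)

    return sorted(available, key=score, reverse=True)
```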
OllamaChatManager is automatically initialized during mindXagent's _async_init(). All chat interactions are logged to MemoryAgent with the tags ["chat", "ollama", "interaction"], and chat requests are also logged to the thinking process:

- ollama_chat_request: Request details
- ollama_chat_response: Response metadata
- ollama_chat_error: Error information
Complete example:

```python
# Initialize
mindx_agent = await MindXAgent.get_instance()

# Discover models
models = await mindx_agent.get_available_ollama_models()
print(f"Available models: {[m['name'] for m in models]}")

# Select best model for reasoning
model = await mindx_agent.select_ollama_model("reasoning")
print(f"Selected model: {model}")

# Start a conversation
result = await mindx_agent.chat_with_ollama(
    message="What are the key principles of self-improving AI systems?",
    model=model,
    conversation_id="ai_discussion",
    system_prompt="You are an expert in AI systems and self-improvement.",
    temperature=0.7
)

# Continue the conversation
result = await mindx_agent.chat_with_ollama(
    message="How can these principles be applied to mindX?",
    conversation_id="ai_discussion"
)

# View history
history = mindx_agent.get_ollama_conversation_history("ai_discussion")
print(f"Conversation has {len(history)} messages")

# Clear when done
mindx_agent.clear_ollama_conversation("ai_discussion")
```