The Ollama Model Capability Tool (api/ollama/ollama_model_capability_tool.py) manages Ollama model capabilities and enables intelligent, task-specific model selection for mindX. It maintains a registry of available Ollama models with their capabilities, performance metrics, and task-specific suitability scores.
Model capabilities are stored in `data/config/ollama_model_capabilities.json`.

```python
from api.ollama.ollama_model_capability_tool import OllamaModelCapabilityTool
from utils.config import Config

config = Config()
tool = OllamaModelCapabilityTool(config=config)

# Discover models from the Ollama server
models = await tool.discover_models(base_url="http://localhost:11434")

# Automatically discover and register all models with intelligent scoring
result = await tool.auto_discover_and_register(
    base_url="http://localhost:11434",
    auto_score=True,
)

# Register a model with specific capabilities
await tool.register_model(
    model_name="mistral-nemo:latest",
    capabilities=["code", "reasoning", "chat"],
    task_scores={
        "code_generation": 0.9,
        "reasoning": 0.85,
        "simple_chat": 0.95,
    },
    size_gb=7.2,
    context_size=32768,
    notes="Excellent for coding and reasoning tasks",
)

# Get the best model for a specific task
best_model = tool.get_best_model_for_task("code_generation", min_score=0.7)

# Get all registered capabilities
all_caps = tool.get_all_capabilities()

# Get info for a specific model
model_info = tool.get_model_info("mistral-nemo:latest")
```
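The `get_best_model_for_task` call above can be thought of as a max-over-scores lookup with a threshold. The following is an illustrative reimplementation under that assumption, not the tool's actual code:

```python
from typing import Dict, Optional

def best_model_for_task(
    registry: Dict[str, Dict[str, float]],
    task: str,
    min_score: float = 0.0,
) -> Optional[str]:
    """Return the model with the highest score for `task` that meets min_score."""
    candidates = {
        name: scores[task]
        for name, scores in registry.items()
        if task in scores and scores[task] >= min_score
    }
    if not candidates:
        return None  # no registered model qualifies for this task
    return max(candidates, key=candidates.get)

# Toy registry mapping model name -> task scores
registry = {
    "mistral-nemo:latest": {"code_generation": 0.9, "simple_chat": 0.95},
    "llama3:8b": {"code_generation": 0.7, "simple_chat": 0.9},
}
print(best_model_for_task(registry, "code_generation", min_score=0.7))
# -> mistral-nemo:latest
```

Returning `None` when nothing clears the threshold lets callers fall back to a default model explicitly rather than silently using a weak one.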
The tool supports multiple task types with auto-detection; the task types used in the examples here include `code_generation`, `reasoning`, `simple_chat`, and `analysis`.
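The internals of auto-detection are not documented here. One plausible heuristic, offered purely as an assumption, infers capabilities from the model name; the `infer_capabilities` function and its tag lists below are hypothetical:

```python
from typing import List

def infer_capabilities(model_name: str) -> List[str]:
    """Hypothetical name-based heuristic; the real auto_score logic may differ."""
    name = model_name.lower()
    caps = ["chat"]  # assume every model can handle basic chat
    if any(tag in name for tag in ("code", "coder", "starcoder")):
        caps.append("code")
    if any(tag in name for tag in ("instruct", "nemo")):
        caps.append("reasoning")
    return caps

print(infer_capabilities("deepseek-coder:6.7b"))
# -> ['chat', 'code']
```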
Each registered model is represented by a `ModelCapability` record:

```python
from dataclasses import dataclass
from typing import Any, Dict, List, Optional

@dataclass
class ModelCapability:
    model_name: str
    size_gb: float
    context_size: int
    capabilities: List[str]              # e.g., ["code", "reasoning", "chat"]
    task_scores: Dict[str, float]        # task type -> score (0-1)
    performance_metrics: Dict[str, Any]
    last_tested: Optional[str]
    notes: str
```
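For illustration, a record can be built and serialized with `dataclasses.asdict`, which is one plausible way the tool maps records to the JSON registry file (the exact on-disk schema is an assumption; the class is restated here so the sketch runs standalone):

```python
import json
from dataclasses import asdict, dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class ModelCapability:  # mirrors the fields listed above
    model_name: str
    size_gb: float
    context_size: int
    capabilities: List[str]
    task_scores: Dict[str, float]
    performance_metrics: Dict[str, Any] = field(default_factory=dict)
    last_tested: Optional[str] = None
    notes: str = ""

cap = ModelCapability(
    model_name="mistral-nemo:latest",
    size_gb=7.2,
    context_size=32768,
    capabilities=["code", "reasoning", "chat"],
    task_scores={"code_generation": 0.9, "reasoning": 0.85, "simple_chat": 0.95},
    notes="Excellent for coding and reasoning tasks",
)

# asdict() yields a JSON-serializable dict, ready to write to the registry file
print(json.dumps(asdict(cap), indent=2))
```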
The Startup Agent automatically uses this tool when Ollama is connected. Connection settings are read from `.env`, and model capabilities are stored in `data/config/ollama_model_capabilities.json`. The Startup Agent uses the tool as follows:
```python
from api.ollama.ollama_model_capability_tool import OllamaModelCapabilityTool

# Initialize the tool
tool = OllamaModelCapabilityTool()

# Auto-discover models
result = await tool.auto_discover_and_register(
    base_url="http://10.0.0.155:18080"
)

# Select the best model for analysis
analysis_model = tool.get_best_model_for_task("analysis")
print(f"Best model for analysis: {analysis_model}")

# Get model details
model_info = tool.get_model_info(analysis_model)
print(f"Capabilities: {model_info['capabilities']}")
print(f"Task scores: {model_info['task_scores']}")
```
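Since the registry lives in `data/config/ollama_model_capabilities.json`, persistence is presumably a plain JSON round-trip. A minimal sketch, with the file written to a temporary directory and the entry shape assumed rather than taken from the tool:

```python
import json
import tempfile
from pathlib import Path

# Toy registry; the real file's schema may differ.
registry = {
    "mistral-nemo:latest": {
        "capabilities": ["code", "reasoning", "chat"],
        "task_scores": {"analysis": 0.8},
    }
}

# Write the registry out and read it back, as the tool presumably does
# with data/config/ollama_model_capabilities.json.
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "ollama_model_capabilities.json"
    path.write_text(json.dumps(registry, indent=2))
    loaded = json.loads(path.read_text())

print(loaded["mistral-nemo:latest"]["task_scores"]["analysis"])
# -> 0.8
```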