mindX supports multiple ways to connect to Ollama servers:
- `api/ollama_url.py` - Default; includes rate limiting and metrics
- `api/ollama_official.py` - Optional; better compatibility

The official `ollama-python` library is an optional dependency; install it with:

```bash
pip install ollama
```
```python
from api.ollama_official import create_ollama_client, OfficialOllamaAdapter

# Create client (auto-detects if the official library is available)
client = create_ollama_client(base_url="http://localhost:11434")

if client:
    # Use the official library
    models = await client.list_models()
    response = await client.generate_text(
        prompt="Hello, world!",
        model="llama3:8b",
    )
else:
    # Fall back to the custom implementation
    from api.ollama_url import create_ollama_api

    api = create_ollama_api()
    models = await api.list_models()
```
The official library supports Ollama Cloud models:
```python
import os

from api.ollama_official import OfficialOllamaAdapter

# Connect to Ollama Cloud
client = OfficialOllamaAdapter(
    base_url="https://ollama.com",
    api_key=os.environ.get("OLLAMA_API_KEY"),
)

# Use cloud models
response = await client.generate_text(
    prompt="Hello!",
    model="gpt-oss:120b-cloud",
)
```
The custom implementation (`api/ollama_url.py`) is the default and works without additional dependencies.
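The auto-detection shown earlier can be approximated by checking whether the optional package is importable. This is a minimal sketch of the idea, not the actual logic inside `create_ollama_client`:

```python
import importlib.util


def official_ollama_available() -> bool:
    """Return True when the optional ollama-python package is importable."""
    return importlib.util.find_spec("ollama") is not None
```

Callers can use a check like this to prefer `api.ollama_official` and fall back to `api.ollama_url` when the library is absent.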
Both implementations respect the same configuration:
```bash
export MINDX_LLM__OLLAMA__BASE_URL=http://localhost:11434
export OLLAMA_API_KEY=your_key_here  # For cloud
```
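The `MINDX_LLM__OLLAMA__BASE_URL` name suggests a double-underscore nesting convention, as used by settings libraries such as pydantic-settings; that convention is an assumption here. A sketch of how such a variable name would map to a dotted config key:

```python
def env_to_key(name: str, prefix: str = "MINDX_") -> str:
    # Map MINDX_LLM__OLLAMA__BASE_URL -> "llm.ollama.base_url"
    # (double underscore as the nesting separator is an assumption)
    return name[len(prefix):].lower().replace("__", ".")
```

This yields the same `llm.ollama.base_url` key that the `Config` accessor uses.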
Via `webmind.settings`:

```python
from webmind.settings import SettingsManager

settings = SettingsManager()
base_url = settings.get('ollama_base_url', 'http://localhost:11434')
```
Or via `utils.config`:

```python
from utils.config import Config

config = Config()
base_url = config.get('llm.ollama.base_url', 'http://localhost:11434')
```
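Putting the pieces together, a minimal resolver that lets the environment variable override the built-in default. The precedence order shown is an assumption; mindX's actual config layering may differ:

```python
import os

DEFAULT_OLLAMA_URL = "http://localhost:11434"


def resolve_base_url() -> str:
    # Environment variable wins over the hard-coded default (assumed precedence)
    return os.environ.get("MINDX_LLM__OLLAMA__BASE_URL", DEFAULT_OLLAMA_URL)
```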