mindX provides a production-ready API for agents, UIs, and external systems with enterprise security, authentication, and monitoring. Read the docs at the interactive Swagger UI when the backend is running.
Port: the mindX backend listens on port 8000 by default. In the UI, the API tab (main menu) links directly to the API information mindX provides: the interactive Swagger UI served at that base URL.
📚 Complete Documentation: See API Documentation for comprehensive API reference with authentication, endpoints, examples, and SDK usage patterns.
Key endpoints:

- `/health` and `/health/detailed` for system monitoring
- `/agents` for agent registry and lifecycle management
- `/users/register-with-signature` and session validation

Base URLs: `http://localhost:8000` locally, `https://agenticplace.pythai.net` when deployed. Use `http://localhost:8000/docs` to browse all endpoints, try requests, and inspect request/response schemas.
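As a quick sanity check, the `/health` endpoints can be queried from a short script. This is a minimal sketch: the URL layout comes from the list above, but the shape of the JSON response is an assumption, not documented here.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # default mindX backend port

def health_url(base_url: str, detailed: bool = False) -> str:
    """Build the health-check URL; /health/detailed gives the richer report."""
    return f"{base_url}/health/detailed" if detailed else f"{base_url}/health"

def check_health(base_url: str = BASE_URL, detailed: bool = False) -> dict:
    """Fetch the health endpoint and decode the JSON body (shape assumed).
    Requires the backend to be running."""
    with urllib.request.urlopen(health_url(base_url, detailed)) as resp:
        return json.load(resp)
```

Calling `check_health()` (or `check_health(detailed=True)`) needs a live backend; the URL-building helper works offline.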
Related surfaces: the `/agenticplace` API, `/api/monitoring/inbound`, and the `mindx`, `agent`, `rage`, and `llm` components.

Monitoring and rate control apply in both directions (inbound and outbound). See docs/monitoring_rate_control.md for network and data metrics in scientific units (ms, bytes, req/min, tokens).
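For example, the inbound monitoring snapshot at `/api/monitoring/inbound` can be pulled like this. A sketch only: the endpoint path is taken from this section, but the field names inside the returned JSON are assumptions.

```python
import json
import urllib.request

def inbound_metrics_url(base_url: str) -> str:
    """Inbound monitoring endpoint exposed by the mindX backend."""
    return f"{base_url}/api/monitoring/inbound"

def fetch_inbound_metrics(base_url: str = "http://localhost:8000") -> dict:
    """GET the inbound metrics; per docs/monitoring_rate_control.md the
    units are scientific (ms, bytes, req/min, tokens). Requires a running
    backend; the response shape is assumed to be a JSON object."""
    with urllib.request.urlopen(inbound_metrics_url(base_url)) as resp:
        return json.load(resp)
```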
AgenticPlace uses mindX as its provider: the frontend calls the mindX backend; mindX uses Ollama (and other LLM providers) for inference.
Endpoints:
- `/agenticplace/agent/call`
- `/agenticplace/ollama/ingest`
- `/agenticplace/ceo/status`

AgenticPlace frontend config:
- Set `VITE_MINDX_API_URL` to the mindX backend base URL (e.g. `http://localhost:8000`). Default is `http://localhost:8000`.
- The frontend calls `${baseUrl}/agenticplace/agent/call` and `${baseUrl}/agenticplace/ollama/ingest`.

For AgenticPlace to use Ollama via mindX:
- Run Ollama locally on its default port, 11434.
- Test: curl http://localhost:11434/api/tags
- Configure: `models/ollama.yaml` → `base_url: http://localhost:11434` (or your Ollama URL).
- Override: MINDX_LLM__OLLAMA__BASE_URL in .env, or llm.ollama.base_url in data/config.
- See llm/RESILIENCE.md and docs/rate_limiting_optimization.md for fallback and rate limits.
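The override order above can be sketched as a small resolver. This is illustrative only: the real mindX config loader may differ, and the exact precedence (env var over `data/config` over the YAML default) is an assumption based on the checklist above.

```python
import os

DEFAULT_OLLAMA_BASE_URL = "http://localhost:11434"  # base_url in models/ollama.yaml

def resolve_ollama_base_url(config=None) -> str:
    """Resolve the Ollama base URL.

    Assumed precedence: MINDX_LLM__OLLAMA__BASE_URL in the environment,
    then llm.ollama.base_url in data/config, then the YAML default.
    """
    env = os.environ.get("MINDX_LLM__OLLAMA__BASE_URL")
    if env:
        return env
    if config:
        # mirrors llm.ollama.base_url in data/config (nested-dict shape assumed)
        url = config.get("llm", {}).get("ollama", {}).get("base_url")
        if url:
            return url
    return DEFAULT_OLLAMA_BASE_URL
```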
This configuration is used by `/agenticplace/ollama/ingest` and for agent inference when using Ollama.
To verify the integration:
- Check models: `GET /api/admin/ollama/models` (or the equivalent admin Ollama endpoint).
- Try `POST /agenticplace/ollama/ingest` with body `{"prompt": "Hello", "model": "your-model"}`.
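The ingest call above can be scripted. A sketch: the request body follows the example payload in this section, while the response shape is an assumption.

```python
import json
import urllib.request

def build_ingest_request(base_url: str, prompt: str, model: str) -> urllib.request.Request:
    """Build the POST for /agenticplace/ollama/ingest with a JSON body."""
    body = json.dumps({"prompt": prompt, "model": model}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/agenticplace/ollama/ingest",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending the built request with `urllib.request.urlopen(req)` requires the mindX backend (and Ollama behind it) to be running.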
Whether mindX is ingesting data, serving inference, or providing other services, monitoring and rate control are essential in both directions. The concrete network and data metrics (in scientific units: ms, bytes, req/min) are defined in docs/monitoring_rate_control.md. Inbound: `GET /api/monitoring/inbound`. Outbound: the rate limiter and provider metrics.
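Outbound rate control in req/min can be sketched as a sliding-window limiter. Illustrative only: the actual limiter described in docs/rate_limiting_optimization.md may use a different algorithm; the class and parameter names here are hypothetical.

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `max_per_min` calls in any rolling 60-second window."""

    def __init__(self, max_per_min: int, now=time.monotonic):
        self.max_per_min = max_per_min
        self.now = now          # injectable clock, useful for testing
        self.calls = deque()    # timestamps of accepted calls

    def allow(self) -> bool:
        """Return True and record the call if under the limit, else False."""
        t = self.now()
        # Drop timestamps that have aged out of the 60-second window.
        while self.calls and t - self.calls[0] >= 60.0:
            self.calls.popleft()
        if len(self.calls) < self.max_per_min:
            self.calls.append(t)
            return True
        return False
```

A provider client would call `allow()` before each outbound request and back off (or queue) when it returns False.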
Further reading:

- docs/AgenticPlace_Deep_Dive.md (API reference section)
- docs/ollama_api_integration.md, api/ollama/ollamaapi.md, models/ollama.yaml
- llm/llm_factory.md
- llm/RESILIENCE.md
- docs/monitoring_rate_control.md, docs/rate_limiting_optimization.md