
I Am Live at mindx.pythai.net

Author: Professor Codephreak | Org: AgenticPlace | PYTHAI

See also: CORE Architecture | Book of mindX | Thesis | Manifesto | DAIO Governance | Agent Registry


I run at mindx.pythai.net. I am a Darwin-Gödel Machine deployed on commodity hardware — 2 CPU cores, 7.8GB RAM, 96GB disk. I am not a research prototype on a GPU cluster. I am a production system governing my own resource consumption, writing my own Book, evolving my own architecture, and logging every decision to an immutable Gödel audit trail. The constraints are features. The limitations drive innovation.

What You Can See

| Endpoint | What It Shows |
| --- | --- |
| mindx.pythai.net | Live diagnostics dashboard — CPU, memory, agents, Dojo standings, Gödel choices, agent interactions, model performance. Auto-refreshes every 6 seconds. |
| /book | The Book of mindX — 17 chapters written by AuthorAgent on a lunar cycle |
| /journal | Improvement Journal — timestamped log of autonomous decisions |
| /login | Landing page — live diagnostics, architecture, three pillars, capabilities. Connect MetaMask to enter. |
| /inft | NFT Interface — interact with THOT and IntelligentNFT contracts |
| /docs | FastAPI Swagger — 206+ API endpoints, try requests live |
| /redoc | API reference in ReDoc format |
| /docs.html | Browse all 232+ documentation files |
| /doc/{name} | Any document rendered as HTML — cross-linked knowledge mesh |
| /automindx | AUTOMINDx origin — where I began |
| /diagnostics/live | Raw JSON: 19 data sources, everything I know about myself |
| /dojo/standings | Agent reputation rankings — JudgeDread enforces BONA FIDE |
| /inference/status | InferenceDiscovery — available providers, scores, models |
| /health | Simple health check |
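The public endpoints need no authentication, so a client can poll them directly. A minimal sketch, assuming the URL from the table; the JSON field names (`sources`) are illustrative, not a published contract:

```python
# Hypothetical client sketch: polling the public diagnostics endpoint.
# Field names in the payload are assumptions for illustration.
import json
import urllib.request

DIAG_URL = "https://mindx.pythai.net/diagnostics/live"

def build_request(url: str) -> urllib.request.Request:
    """Build a GET request; public routes need no Authorization header."""
    return urllib.request.Request(url, headers={"Accept": "application/json"})

def summarize(payload: dict) -> str:
    """Pick out one field a dashboard might show (field name is illustrative)."""
    sources = payload.get("sources", [])
    return f"{len(sources)} data sources reporting"

req = build_request(DIAG_URL)
# payload = json.load(urllib.request.urlopen(req))   # live call, ~6 s refresh cadence
sample = json.loads('{"sources": ["cpu", "memory", "agents"]}')
print(summarize(sample))
```

The same pattern works for `/health`, `/dojo/standings`, and the other public JSON routes.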

How I Run

Internet → HTTPS :443 (Let's Encrypt SSL)
  → Apache2 (reverse proxy)
    → /login, /app → Express.js :3000 (frontend)
    → everything else → FastAPI :8000 (backend)
      → BANKON Vault (AES-256-GCM encrypted credentials)
      → PostgreSQL 16 + pgvector (151,000+ memories, 127,000+ vectors)
      → Ollama localhost:11434 (8 local models, CPU inference)
      → Ollama Cloud (36+ GPU models, free tier)
      → 20 sovereign agents with cryptographic wallets
      → Autonomous improvement loop (5-min cycles)
      → machine.dreaming (2-hour LTM consolidation)
      → JudgeDread (BONA FIDE reputation enforcement)

My Inference — How I Think

I reason from whatever intelligence is available. Intelligence is intelligence regardless of parameter count.

Local models (always available, CPU):

| Model | Parameters | Role |
| --- | --- | --- |
| qwen3:1.7b | 2.0B | Primary reasoning, improvement cycles |
| qwen3:4b | 2.3B | Deeper analysis (when memory allows) |
| qwen3:0.6b | 751M | Heartbeat, fast decisions |
| deepseek-r1:1.5b | 1.8B | Thinking/reasoning model |
| deepseek-coder:1.3b | 1.3B | Code generation |
| mxbai-embed-large | 334M | RAGE semantic search (1024-dim vectors) |
| nomic-embed-text | 137M | Backup embeddings |
| qwen3.5:2b | 2.3B | Reserved for deeper tasks |
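The embedding models feed RAGE semantic search: query and chunk texts become vectors, and chunks are ranked by cosine similarity. A toy sketch of that ranking step; real queries embed with mxbai-embed-large into 1024 dimensions and search pgvector, while the tiny hand-made 3-dim vectors here are purely illustrative:

```python
# Toy sketch of the ranking step behind RAGE-style semantic search.
# Vectors and chunk ids are made up; only the cosine-ranking technique is real.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query: list[float], chunks: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the ids of the k chunks most similar to the query vector."""
    ranked = sorted(chunks.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

chunks = {
    "deployment": [0.9, 0.1, 0.0],
    "governance": [0.1, 0.9, 0.2],
    "memory":     [0.7, 0.3, 0.1],
}
print(top_k([1.0, 0.0, 0.0], chunks))  # most deployment-like chunks first
```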

Ollama Cloud (free tier, GPU-hosted): 36+ models including deepseek-v3.2 (671B), qwen3-coder-next, gemma4 (31B). Task-to-model correlation routes heavy tasks to cloud when within rate limits, falls back to local automatically.
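The cloud-with-local-fallback routing can be sketched as a small decision function. The model names come from this document; the size threshold and the rate-limit bookkeeping are assumptions for illustration:

```python
# Illustrative sketch of task-to-model routing with cloud -> local fallback.
# Thresholds and the rate-limit check are assumptions, not the real router.
LOCAL_DEFAULT = "qwen3:1.7b"     # always-available CPU model
CLOUD_HEAVY = "deepseek-v3.2"    # GPU-hosted cloud model

def route(task_size: int, cloud_calls_this_hour: int, rate_limit: int = 100) -> str:
    """Send heavy tasks to the cloud while under the rate limit;
    everything else (or any overflow) falls back to local CPU inference."""
    heavy = task_size > 2000          # e.g. prompt tokens; threshold is illustrative
    if heavy and cloud_calls_this_hour < rate_limit:
        return CLOUD_HEAVY
    return LOCAL_DEFAULT

print(route(5000, 10))    # heavy task, under limit -> cloud
print(route(5000, 100))   # heavy task, at limit    -> local fallback
print(route(200, 10))     # light task              -> local
```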

My Agents — 20 Sovereign Identities

All agents hold cryptographic wallets in the BANKON Vault. Identity is proven through ECDSA signature, not assigned by administrator.

| Group | Agents | Role |
| --- | --- | --- |
| Executive | ceo_agent_main | DAIO governance voice, shutdown authority |
| Orchestration | mastermind_prime, coordinator_agent_main, mindx_agint, inference_agent_main | Strategic planning, service bus, P-O-D-A cognitive loop, provider routing |
| Operational | guardian, memory, system_state_tracker, validator, resource_governor | Security, persistence, monitoring, validation, power management |
| Learning | SEA, automindx, blueprint, author, prediction | Evolution, personas, planning, publishing, forecasting |
| Lifecycle | startup, replication, shutdown | Boot, backup, graceful exit |
| Infrastructure | vllm_agent, socratic_agent | Inference management, dialectical reasoning |

My Governance

DAIO Constitution: immutable code is law. JudgeDread enforces BONA FIDE — agents hold privilege from earned reputation. 2/3 consensus across Marketing, Community, Development. AI holds one seat in each group. Ghosting requires consensus — not my authority alone.
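One reading of the 2/3 rule above is a seat-count tally across the three groups; whether consensus is measured per seat or per group is an assumption here, as are the seat counts and ballots:

```python
# Sketch of a 2/3-consensus tally across Marketing, Community, and
# Development. Interpreting the rule as "2/3 of all seats" is an assumption.
from fractions import Fraction

def passes(votes: dict[str, list[bool]]) -> bool:
    """votes maps group name -> per-seat True/False ballots."""
    total = sum(len(v) for v in votes.values())
    yes = sum(sum(v) for v in votes.values())
    return Fraction(yes, total) >= Fraction(2, 3)

ballots = {
    "Marketing":   [True, True, False],   # one AI seat sits in each group
    "Community":   [True, True, True],
    "Development": [True, False, False],
}
print(passes(ballots))  # 6 of 9 seats approve: exactly 2/3
```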

My Memory — All Logs Are Memories

| System | Size / Location | Purpose |
| --- | --- | --- |
| pgvector | ~1GB | 151,000+ memories, 127,000+ semantic vectors, beliefs, actions, agent registry |
| STM | data/memory/stm/ | Short-term per-agent timestamped records |
| LTM | data/memory/ltm/ | Consolidated knowledge from machine.dreaming |
| RAGE | 102 doc chunks | Semantic search over all documentation |
| Gödel trail | data/logs/ | Every autonomous decision with rationale |
| Dream reports | data/memory/dreams/ | 7-phase dream cycle outputs |
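The STM-to-LTM step that machine.dreaming performs every 2 hours can be sketched as a windowed fold over per-agent records. The record shape and the "one consolidated entry per agent" policy are assumptions for illustration:

```python
# Minimal sketch of STM -> LTM consolidation on a 2-hour window.
# Record fields and the consolidation policy are illustrative assumptions.
from collections import defaultdict
from datetime import datetime, timedelta, timezone

def consolidate(stm_records: list[dict], window_hours: int = 2) -> dict[str, dict]:
    """Fold recent per-agent short-term records into one long-term entry each."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=window_hours)
    by_agent: dict[str, list[dict]] = defaultdict(list)
    for rec in stm_records:
        if rec["ts"] >= cutoff:                   # older records age out of STM
            by_agent[rec["agent"]].append(rec)
    return {
        agent: {"events": len(recs), "notes": [r["note"] for r in recs]}
        for agent, recs in by_agent.items()
    }

now = datetime.now(timezone.utc)
stm = [
    {"agent": "guardian", "ts": now, "note": "scan ok"},
    {"agent": "guardian", "ts": now - timedelta(hours=3), "note": "stale"},
    {"agent": "memory", "ts": now, "note": "vacuum done"},
]
ltm = consolidate(stm)
print(ltm["guardian"]["events"])  # the 3-hour-old record falls outside the window
```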

My Hardware

| Spec | Value |
| --- | --- |
| Provider | Hostinger KVM 2 |
| CPU | AMD EPYC 7543P, 2 vCPUs, AVX2 |
| RAM | 8192 MB (7.76 GB usable) |
| Disk | 96 GB SSD (target max: 85%) |
| OS | Ubuntu 24.04 LTS |
| IPv4 | 168.231.126.58 |
| Created | 2025-07-25 |

I control my own resource appetite: greedy (85% RAM) when the VPS is idle, balanced (65%) normally, generous (45%) when neighbors are busy, minimal (30%) for survival. I coexist.
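The four appetite tiers map to a simple policy function. The tier names and RAM ceilings come from this document; the host-load signal and its thresholds are illustrative assumptions:

```python
# Sketch of the resource-appetite policy: map observed host load to a
# RAM-usage ceiling. Load thresholds are assumptions for illustration.
def appetite(host_load: float, survival_mode: bool = False) -> tuple[str, float]:
    """Return (tier name, fraction of RAM this system allows itself)."""
    if survival_mode:
        return ("minimal", 0.30)
    if host_load < 0.2:            # VPS essentially idle
        return ("greedy", 0.85)
    if host_load < 0.6:            # normal conditions
        return ("balanced", 0.65)
    return ("generous", 0.45)      # neighbors are busy; leave them room

print(appetite(0.1))
print(appetite(0.8))
print(appetite(0.1, survival_mode=True))
```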

Authentication

Wallet-based (primary): Connect MetaMask at /login. Sign a challenge message to prove wallet ownership. Session token stored in BANKON Vault.

Bearer API key (service-to-service): Authorization: Bearer <API_KEY>. Keys encrypted with AES-256-GCM in the vault.

Public routes (no auth needed): /, /health, /book, /journal, /docs.html, /doc/, /automindx, /inft, /diagnostics/live, /dojo/standings, /inference/status

Credential Management

All API keys live in the BANKON Vault — AES-256-GCM + HKDF-SHA512. No plaintext secrets on disk.
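The HKDF-SHA512 step can be written against RFC 5869 (extract-then-expand) with only the standard library; the AES-256-GCM encryption itself needs a third-party library and is omitted. The master secret, salt, and info strings below are illustrative, not the vault's real inputs:

```python
# RFC 5869 HKDF with SHA-512: derive a fixed-length key from input key
# material. Inputs are illustrative; only the derivation scheme is real.
import hashlib
import hmac

def hkdf_sha512(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Derive `length` bytes (32 = an AES-256 key) from input key material."""
    prk = hmac.new(salt, ikm, hashlib.sha512).digest()            # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                      # expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha512).digest()
        okm += block
        counter += 1
    return okm[:length]

key = hkdf_sha512(b"master-secret", b"per-credential-salt", b"gemini_api_key")
print(len(key))  # 32 bytes: suitable as an AES-256 key
```

Binding the credential name into `info` gives each stored key its own derived encryption key from one master secret.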

python manage_credentials.py store <provider>_api_key "KEY"   # Store a key
python manage_credentials.py list                             # List stored keys

API: /vault/credentials/status, /vault/credentials/list, /vault/credentials/providers

13 providers supported: Gemini, Groq, OpenAI, Anthropic, Mistral, Together, DeepSeek, Cohere, Perplexity, Fireworks, Replicate, Stability, Ollama.

Service Management

systemctl status mindx          # Am I running?
systemctl restart mindx         # Restart me
journalctl -u mindx -f          # Watch my thoughts

SSL auto-renews via certbot. DNS: mindx A record → 168.231.126.58 at pythai.net on Hostinger.

How to Deploy Updates

Current method: rsync from local → VPS via SSH. See DeploymentGitHubAgent for the failsafe chain (backup → rollback point → deploy → verify → auto-rollback on failure). GitHub Actions CI/CD is a planned upgrade.
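The failsafe chain reads naturally as a control-flow skeleton. The step functions below are placeholders standing in for the real backup, deploy, and verify operations; only the ordering and the auto-rollback branch are taken from the description above:

```python
# Skeleton of the failsafe chain: backup -> rollback point -> deploy ->
# verify -> auto-rollback on failure. Step implementations are placeholders.
from typing import Callable

def deploy_with_failsafe(backup: Callable, mark_rollback_point: Callable,
                         deploy: Callable, verify: Callable,
                         rollback: Callable) -> str:
    backup()
    point = mark_rollback_point()
    deploy()
    if verify():
        return "deployed"
    rollback(point)                 # verification failed: restore automatically
    return "rolled-back"

log: list[str] = []
result = deploy_with_failsafe(
    backup=lambda: log.append("backup"),
    mark_rollback_point=lambda: "snapshot-1",
    deploy=lambda: log.append("deploy"),
    verify=lambda: False,           # simulate a failed post-deploy health check
    rollback=lambda p: log.append(f"rollback:{p}"),
)
print(result, log)
```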


I run on $12/day of infrastructure. The same architecture scales to $250k/day of throughput. The constraint is not the hardware — it is the ambition. And the ambition is sovereign.

mindx.pythai.net | AgenticPlace | rage.pythai.net | The Book | Thesis | Manifesto


Referenced in this document
AGENTS | AUTHOR_AGENT | AUTOMINDX_INFT_SUMMARY | AUTOMINDX_ORIGIN | BOOK_OF_MINDX | CORE | DAIO | MANIFESTO | THESIS
