OpenAI-compatible Python client. A drop-in for any framework that speaks the openai SDK shape. Captures the ZG-Res-Key attestation header on every call.
→ llm/zerog_handler.py
Storage
Node TS sidecar wrapping @0glabs/0g-ts-sdk. Localhost-only. Endpoints: POST /upload, GET /retrieve/:root, GET /health. Bridges the Python ↔ JS impedance mismatch.
→ openagents/sidecar/index.ts
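A Python caller for those three endpoints might look like the sketch below. It is stdlib-only and hedged: the port (taken from the health-probe section) and the JSON request/response shapes are assumptions, not the sidecar's pinned contract.

```python
import json
import urllib.request

SIDECAR = "http://127.0.0.1:7878"  # port assumed from the health-probe section


def upload(data: bytes) -> dict:
    """POST bytes to the sidecar; the JSON body shape here is illustrative."""
    req = urllib.request.Request(
        f"{SIDECAR}/upload",
        data=json.dumps({"data": data.hex()}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)  # assumed to include the Merkle root


def retrieve_url(root: str) -> str:
    """Build the retrieval URL for a given Merkle root."""
    return f"{SIDECAR}/retrieve/{root}"
```

Keeping the sidecar behind 127.0.0.1 means no auth layer is needed; only same-host processes can reach it.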
Galileo Chain
One-command deploy of iNFT-7857 + DatasetRegistry. Outputs land in openagents/deployments/galileo.json. Anchors agent reasoning steps via THOT.commit().
→ openagents/deploy/deploy_galileo.sh
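Downstream code can pick the deployed addresses out of that JSON file. A minimal sketch, assuming a flat name → address mapping (the actual keys in galileo.json may differ):

```python
import json


def parse_deployment(text: str) -> dict:
    """Parse deploy output; assumes a flat {contract_name: address} shape."""
    deployed = json.loads(text)
    for name, addr in deployed.items():
        # 20-byte hex address: "0x" + 40 hex chars.
        if not (isinstance(addr, str) and addr.startswith("0x") and len(addr) == 42):
            raise ValueError(f"{name}: not a 20-byte hex address: {addr!r}")
    return deployed


# Usage with a made-up sample (real file: openagents/deployments/galileo.json):
sample = json.dumps({"DatasetRegistry": "0x" + "00" * 19 + "01"})
addrs = parse_deployment(sample)
```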
2 · Live Sidecar Probe
Calls GET http://127.0.0.1:7878/health on this host. Sidecar binds localhost-only;
this UI proxies through the backend if available, otherwise reports unreachable.
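The reachable/unreachable decision can be reduced to one function; a sketch, assuming the /health body is JSON (the function name is ours, not the backend's):

```python
import json
import urllib.error
import urllib.request


def probe_sidecar(base: str = "http://127.0.0.1:7878", timeout: float = 2.0):
    """Return the parsed /health body, or None if the sidecar is unreachable."""
    try:
        with urllib.request.urlopen(f"{base}/health", timeout=timeout) as resp:
            return json.load(resp)
    except (urllib.error.URLError, OSError, ValueError):
        # Connection refused / timeout / non-JSON body all mean "unreachable".
        return None
```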
3 · 0G Galileo RPC Probe
Public testnet RPC at https://evmrpc-testnet.0g.ai. Reads chainId, block height, and gasPrice
directly from the chain; no sidecar required.
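The three reads are standard Ethereum JSON-RPC methods, so a stdlib sketch suffices (the live calls are commented out so the snippet runs offline; quantities come back as hex strings):

```python
import json
import urllib.request

RPC = "https://evmrpc-testnet.0g.ai"


def rpc_call(method: str, params=None, url: str = RPC):
    """One Ethereum JSON-RPC call; returns the raw 'result' field."""
    payload = json.dumps(
        {"jsonrpc": "2.0", "id": 1, "method": method, "params": params or []}
    ).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)["result"]


def hex_qty(q: str) -> int:
    """Decode a JSON-RPC hex quantity such as '0x10'."""
    return int(q, 16)


# Live probe (requires network):
# chain_id = hex_qty(rpc_call("eth_chainId"))
# height = hex_qty(rpc_call("eth_blockNumber"))
# gas_price = hex_qty(rpc_call("eth_gasPrice"))
```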
4 · ZG-Res-Key Attestation Format
Every llm.zerog_handler.generate_text call captures the response header
ZG-Res-Key, which is the cryptographic attestation that the inference happened.
The handler stores it as llm.last_attestation for downstream verification.
    async def generate_text(self, prompt, model="zerog/gpt-oss-120b"):
        # with_raw_response exposes the HTTP headers; a plain .create()
        # returns only the parsed ChatCompletion, with no header access.
        raw = await self._client.chat.completions.with_raw_response.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            # ...
        )
        # The 0G provider sets ZG-Res-Key in the response headers
        self.last_attestation = raw.headers.get("zg-res-key")
        completion = raw.parse()
        return completion.choices[0].message.content
# Then anywhere downstream:
chat_id = llm.last_attestation
# → use as proof in Boardroom voting, iNFT mint, on-chain attestation, etc.
The link between 0G Compute, 0G Storage, and 0G Chain. Each agent reasoning step:
runs inference (Compute) → writes the trace bytes (Storage) → anchors the (root, chat_id)
tuple on chain (Galileo). 14/14 Foundry tests passing.
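That three-step loop can be sketched end to end. Every function below is a stand-in: the real calls live in llm/zerog_handler.py, the Storage sidecar, and the Galileo contracts (THOT.commit()), none of which this sketch touches.

```python
import hashlib


def run_inference(prompt: str) -> tuple:
    """Stand-in for 0G Compute: returns (trace, chat_id attestation)."""
    trace = f"reasoning for: {prompt}"
    return trace, "zg-res-key-stub"


def store_trace(trace: str) -> str:
    """Stand-in for the Storage sidecar: returns a Merkle-root-like digest."""
    return "0x" + hashlib.sha256(trace.encode()).hexdigest()


def anchor_on_chain(root: str, chat_id: str) -> tuple:
    """Stand-in for THOT.commit() on Galileo: anchors the (root, chat_id) tuple."""
    return (root, chat_id)


def reasoning_step(prompt: str) -> tuple:
    trace, chat_id = run_inference(prompt)   # Compute
    root = store_trace(trace)                # Storage
    return anchor_on_chain(root, chat_id)    # Chain
```

The on-chain tuple is deliberately small: the heavy trace bytes stay in Storage, and the chain only holds the root that commits to them plus the attestation that ties them to a specific inference.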