This post is a summary of the full Proof of Prompt specification. It is v0.1 because this is what we are shipping on devnet, and we expect to iterate before mainnet.
What a receipt contains
Every Proof of Prompt receipt is a seven-field tuple anchored on Celestia and signed by an attestor quorum:
{
"prompt_hash": sha256(canonical_prompt_bytes),
"output_hash": sha256(canonical_output_bytes),
"model": "claude-4.7-opus",
"temperature": 0.2,
"timestamp": 1745193600,
"user_wallet": "0x7d2a...c91e",
"attestor_sig": <BLS aggregate over quorum>
}
Canonical bytes
Hashing is only useful if two parties agree on what they hashed. Prompts and outputs are canonicalised before hashing: UTF-8, normalized line endings, tokenizer-agnostic. Binary inputs (images, audio) are hashed over their raw bytes.
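The canonicalisation step can be sketched as follows. Note the spec summary above only says "UTF-8, normalized line endings"; the Unicode NFC normalization and the `\r\n`/`\r` handling shown here are our assumptions about what a conforming implementation might do, not the normative rules:

```python
import hashlib
import unicodedata

def canonicalize(text: str) -> bytes:
    """Sketch of prompt/output canonicalisation before hashing.

    Assumptions: NFC Unicode normalization and collapsing CRLF/CR to LF.
    The actual v0.1 canonical form may differ."""
    text = unicodedata.normalize("NFC", text)
    text = text.replace("\r\n", "\n").replace("\r", "\n")
    return text.encode("utf-8")

def content_hash(text: str) -> str:
    # sha256 over canonical bytes, as in the receipt's prompt_hash/output_hash
    return hashlib.sha256(canonicalize(text)).hexdigest()
```

Under these assumptions, the same prompt typed on Windows (`\r\n`) and Linux (`\n`) hashes identically, which is the whole point of canonicalising first.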
Attestor quorum
Each receipt is signed by a threshold (2/3) of the active attestor set. Attestors stake $LGT and earn fees per receipt. Equivocation is slashable. The BLS aggregate signature means a receipt takes the same on-chain footprint regardless of quorum size.
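The threshold arithmetic is simple to state precisely. A small sketch, assuming "2/3" means the smallest integer covering two-thirds of the active set (the spec may round differently):

```python
import math

def quorum_threshold(active_set_size: int) -> int:
    """Smallest signer count covering 2/3 of the active attestor set.
    Assumption: ceiling rounding; v0.1 may define the boundary case differently."""
    return math.ceil(2 * active_set_size / 3)

def quorum_met(signers: int, active_set_size: int) -> bool:
    return signers >= quorum_threshold(active_set_size)
```

With a 62-attestor set this gives a threshold of 42, matching the `ligate verify` transcript below.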
Redaction (Phase 2)
Phase 2 of the spec adds ZK redaction via SP1 or RISC Zero. A redactable receipt proves the fields are internally consistent (the hashes match, the model was in the registry, the timestamp is within the block's range) without revealing prompt or output content. Enterprise users with IP-sensitive prompts get proof without disclosure.
Verifying a receipt
Verification is three checks:
$ ligate verify 0x91c4...ea3d
✓ receipt anchored (celestia-mocha block 1,481,203)
✓ attestor quorum met (43/62 signed, threshold 42/62)
✓ BLS aggregate valid
All three checks must pass; a single failure invalidates the receipt.
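The three checks compose into one boolean. A minimal sketch, with the Celestia lookup and BLS verification stubbed out as plain parameters (the real `ligate` CLI's internals, and how a receipt ID is derived, are assumptions here, not part of the published spec summary):

```python
import hashlib

def verify_receipt(receipt: dict, anchored_ids: set,
                   signer_count: int, active_set_size: int,
                   bls_aggregate_valid: bool) -> bool:
    """Sketch of the three verification checks.

    Assumptions: the receipt ID is sha256 over the two content hashes
    (hypothetical derivation), anchoring is a set-membership lookup, and
    BLS verification is done elsewhere and passed in as a boolean."""
    receipt_id = hashlib.sha256(
        (receipt["prompt_hash"] + receipt["output_hash"]).encode()
    ).hexdigest()
    anchored = receipt_id in anchored_ids                       # check 1: anchored on Celestia
    quorum = signer_count >= -(-2 * active_set_size // 3)       # check 2: >= 2/3 of active set
    return anchored and quorum and bls_aggregate_valid          # check 3: BLS aggregate
```

A receipt fails closed: any stubbed input that comes back negative makes `verify_receipt` return `False`.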
What's not in v0.1
Model attestation. We hash the model ID string but don't (yet) verify that the weights the provider claims to use are actually the weights that ran. That's a Phase 3 discussion involving TEEs, remote attestation, and MPC inference.
Chain-of-prompt. Prompts often depend on prior prompts (tools, retrieval, agent loops). v0.1 treats each as independent. v0.2 will encode causal chains so you can verify not just a single prompt, but a multi-step agent run.
Full spec and RFCs open on GitHub.