Verified Inference

Verified inference lets a user, application, enterprise, or autonomous agent prove that a specific model ran on a specific prompt, produced a specific output, and did so at a specific time.
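To make the claim concrete, here is a minimal sketch of what such a proof could commit to. The field names and structure are illustrative assumptions, not any particular provider's schema: the idea is simply that hashing the model identity, prompt, and output, together with a timestamp, pins down "which model, which prompt, which output, when."

```python
import hashlib
import json
import time


def sha256_hex(data: bytes) -> str:
    """Hex digest used to commit to each component of the inference."""
    return hashlib.sha256(data).hexdigest()


def make_inference_record(model_id: str, weights_digest: str,
                          prompt: str, output: str) -> dict:
    """Bundle commitments to the model, prompt, output, and time.

    A verifier holding the same prompt and output can recompute the
    hashes and confirm they match the record; the timestamp fixes
    when the inference was claimed to occur.
    """
    return {
        "model_id": model_id,              # which model is claimed
        "weights_digest": weights_digest,  # commitment to the exact weights
        "prompt_hash": sha256_hex(prompt.encode()),
        "output_hash": sha256_hex(output.encode()),
        "timestamp": int(time.time()),     # when the inference ran
    }


record = make_inference_record(
    model_id="example-llm-7b",        # hypothetical model name
    weights_digest="a3f1...",         # placeholder digest of the weight file
    prompt="What is verified inference?",
    output="Verified inference proves which model produced an answer.",
)
print(json.dumps(record, indent=2))
```

A real system would also sign this record so the commitment itself cannot be forged; the sketch only shows the hashing step.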

This matters because ordinary AI APIs require blind trust. A provider can silently change model weights, quantization, routing, system prompts, context compression, safety scaffolding, or pricing. The user receives an answer but usually cannot prove what computation created it.

Ambient's approach

Ambient's verification mechanism is called Proof of Logits. In a language model, logits are the raw numerical scores the model produces before they are converted into token probabilities. Because they depend on the exact weights and inputs, those values act like fingerprints of model execution. Ambient hashes these fingerprints and has validators check them to verify that model execution happened as specified.
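The fingerprint idea can be sketched in a few lines. This is not Ambient's actual protocol, which is not detailed here; the "model" below is a fake deterministic function standing in for a forward pass, so the example only shows the shape of the check: hash the logits, have a validator recompute them, and detect a silently swapped model.

```python
import hashlib
import struct


def fake_logits(weights_seed: int, prompt: str) -> list:
    """Stand-in for a model's logit vector over a tiny vocabulary.

    A real system would run a forward pass here; the logits depend on
    both the weights (weights_seed) and the input prompt.
    """
    h = hashlib.sha256(f"{weights_seed}:{prompt}".encode()).digest()
    return [b / 255.0 for b in h[:8]]


def logits_fingerprint(logits: list) -> str:
    """Hash the logit values into a compact execution fingerprint."""
    packed = b"".join(struct.pack("<f", x) for x in logits)
    return hashlib.sha256(packed).hexdigest()


prompt = "hello"
claimed = logits_fingerprint(fake_logits(weights_seed=42, prompt=prompt))

# A validator re-runs the claimed model on the same prompt and compares.
assert logits_fingerprint(fake_logits(42, prompt)) == claimed

# A provider that silently swaps the weights produces a mismatch.
assert logits_fingerprint(fake_logits(43, prompt)) != claimed
print("fingerprint verified")
```

In practice, logits from real hardware are not bit-identical across devices, so a production protocol would need to quantize or tolerance-check the values before hashing; that detail is omitted here.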

Verified inference does not guarantee that an answer is true. It guarantees which system produced the answer. That distinction is critical: truth checks, audits, evaluations, and safety controls only work if the underlying execution is stable and knowable.

Why agents need it

Agents write code, call tools, move funds, approve work, and interact with other agents. If their inference provider can silently alter or degrade execution, the agent's decision supply chain can be compromised. Verified inference gives agents a trust layer for economic activity.