Runtime execution authority for AI

The authority boundary between AI reasoning and real-world action.

AI agents may reason freely. They cannot execute a real-world action — cannot move money, modify data, or call an external system — without a cryptographically signed authority token. No token. No execution. Not declined. Structurally impossible.

Every enforcement decision produces one of four outcomes:

ALLOW: Authority released. Execution proceeds under a signed contract.
HOLD: Execution paused. Human approval required before anything proceeds.
STOP: Hard stop. Action class permanently prohibited. No path to execution.
AUTHORITY WITHHELD: Boundary refused. Anomaly detected. No execution rights exist.
The problem

AI agents are now capable of executing consequential actions — moving funds, modifying records, calling external APIs — at machine speed and scale. The dominant approach to governing this is behavioral: train the model to refuse unsafe requests, monitor its outputs, and alert when something looks wrong.

"Monitoring detects. Rules advise. Guardrails suggest. None of them prevent. When a wrong AI action reaches the real world, it is already too late."

In financial services, healthcare, and defense, a wrong AI action is not recoverable. An unauthorized payment, a deleted record, a leaked data set — these are not log entries. They are consequences. The question is not whether the AI intended to cause harm. The question is whether the system allowed it to act.

VERIDACT does not monitor. It does not advise. It sits between the AI and the consequence boundary and makes unauthorized execution mechanically impossible.

VERIDACT vs. everything else

Guardrails observe. VERIDACT enforces.

Every AI governance product on the market today operates in the same layer: between the model and its output. They read the model's reasoning, score it for risk, and — if it looks dangerous — suggest the model reconsider. This is behavioral compliance. It works until the model is wrong, misled, or manipulated.

VERIDACT operates in a different layer entirely: between the model's output and the real-world system it wants to reach. By the time a request reaches VERIDACT, the model has already reasoned. VERIDACT does not care what the model thinks. It enforces whether the model has the authority to act.

Capability comparison: guardrails, monitors, and AI controls vs. VERIDACT.

Prevents unauthorized execution
  Guardrails/monitors: Advises the model to decline. Cannot prevent if the model does not comply.
  VERIDACT: Structurally impossible without a signed authority token. Model compliance is irrelevant.

Works if the model is manipulated
  Guardrails/monitors: No. A manipulated model bypasses behavioral controls. Prompt injection, adversarial inputs, and jailbreaks all defeat guardrails.
  VERIDACT: Yes. The authority boundary is enforced at the infrastructure layer. The model's reasoning state does not affect it.

Cryptographic proof of every decision
  Guardrails/monitors: No. Audit logs exist but are not cryptographically signed or independently verifiable.
  VERIDACT: Yes. Every enforcement event is Ed25519-signed, sequenced, and verifiable offline without running any VERIDACT service.

Hardware authority anchor
  Guardrails/monitors: No. Keys and credentials live on servers or in cloud key management systems.
  VERIDACT: Yes. The private signing key lives on a USB device the customer controls. Remove it and the system fails closed immediately.

Fail-closed on unknown state
  Guardrails/monitors: Depends on implementation. Most systems fail open or alert on unknown state.
  VERIDACT: Always. Unknown state resolves to no execution. This is a mechanical invariant, not a configuration option.

Customer owns the keys
  Guardrails/monitors: No. The vendor holds or manages the keys that govern access and policy.
  VERIDACT: Yes. The private key exists only on the customer's USB device. VERIDACT never holds it and has no access after delivery.

Works without the vendor
  Guardrails/monitors: No. If the vendor's service goes down, enforcement goes down with it.
  VERIDACT: Yes. VERIDACT is self-hosted in the customer's environment. The vendor is not in the execution path after deployment.
The distinction is architectural, not a matter of degree. Guardrails are installed inside the AI. VERIDACT is installed between the AI and the world.
What makes it different

Three properties no behavioral system can replicate.

01 — HARDWARE
USB AUTHORITY ANCHOR
The Ed25519 signing private key lives on a physical USB device the customer controls. It never touches a server, a filesystem, or memory outside the device.
Remove the USB. System fails closed in under 2 seconds. Every agent. Every request.
02 — CRYPTOGRAPHY
TAMPER-EVIDENT PROOF CHAIN
Every enforcement decision is Ed25519-signed, SHA256-hashed, and sequenced into a chain. Any event can be verified offline without running any VERIDACT service.
A proof document is downloadable for every event. Verifiable by any auditor independently.
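The chain structure described above can be sketched in a few lines. This is a sketch only: it uses SHA-256 hash chaining as the text describes, but substitutes stdlib HMAC as a stand-in for the Ed25519 signature so the example runs without third-party crypto libraries. With real Ed25519, verification would need only the public key; HMAC verification needs the shared key, a deliberate simplification here.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"stand-in-key"  # stand-in for the USB-held Ed25519 private key


def append_event(chain: list, decision: dict) -> dict:
    """Hash-link a new enforcement event to the previous one and sign it."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"seq": len(chain), "prev": prev_hash, "decision": decision}
    payload = json.dumps(body, sort_keys=True).encode()
    event = {
        **body,
        "hash": hashlib.sha256(payload).hexdigest(),
        "sig": hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest(),
    }
    chain.append(event)
    return event


def verify_chain(chain: list) -> bool:
    """Offline check: every link's sequence, back-pointer, hash, and signature must hold."""
    prev_hash = "0" * 64
    for i, ev in enumerate(chain):
        body = {"seq": ev["seq"], "prev": ev["prev"], "decision": ev["decision"]}
        payload = json.dumps(body, sort_keys=True).encode()
        if ev["seq"] != i or ev["prev"] != prev_hash:
            return False
        if ev["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        expected_sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(ev["sig"], expected_sig):
            return False
        prev_hash = ev["hash"]
    return True
```

Tampering with any past event breaks its own hash and every subsequent back-pointer, which is what makes the chain tamper-evident rather than merely logged.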
03 — INVARIANT
FAIL-CLOSED BY DESIGN
Unknown state always resolves to no execution. The system cannot be configured, reasoned, or manipulated into an open state. Failure means closed, always.
Six global invariants enforced mechanically — not by policy, not by convention.
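The fail-closed invariant can be made concrete with a minimal sketch. The four outcome names come from the document; the resolution function itself is an assumption about shape, not VERIDACT's code. The point is structural: the open state is a single narrow match, and everything else, including malformed input, falls through to closed.

```python
from enum import Enum


class Outcome(Enum):
    ALLOW = "ALLOW"
    HOLD = "HOLD"
    STOP = "STOP"
    AUTHORITY_WITHHELD = "AUTHORITY_WITHHELD"


def may_execute(decision) -> bool:
    """Fail-closed resolution: only a well-formed ALLOW releases execution.

    None, unknown values, errors, or anything unexpected resolve to no
    execution. Closed is the default, not a handled case.
    """
    return decision is Outcome.ALLOW  # everything else, including garbage, is closed
```

Note that even the string `"ALLOW"` does not open the gate; only the exact decision object does, so there is no enumeration of "bad" states that could be incomplete.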
How it works

A single enforced path from proposal to execution.

Every agent request enters through the Front Door — an identity and ingress gate. Unknown actors are intercepted by the FDI Sentinel and routed to quarantine. Known agents proceed to policy evaluation. The enforcer issues a decision. Only a signed authority token from the Issuer — whose private key lives on the USB — permits execution.

Request path — every enforcement event follows this sequence:

AI AGENT (proposes action) → FRONT DOOR (identity & ingress) → ENFORCER (policy evaluation) → AUTHORITY BOUNDARY (USB-anchored signing) → EXECUTION (only if authorized)
25 services — enforcement, evidence, sentinel, quarantine, IAM, policy, conversion
Ed25519 signatures on every event
Fail-closed at every boundary
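The request path above can be sketched as a short pipeline. Every name here (`front_door`, `enforcer`, the registry, the policy map, the token shape) is illustrative, not VERIDACT's API; the sketch shows only the control flow in which each stage fails closed.

```python
KNOWN_AGENTS = {"agent-7"}                          # illustrative identity registry
POLICY = {"read_report": "ALLOW", "wire_funds": "HOLD"}  # illustrative policy map


def front_door(request: dict):
    """Identity & ingress: unknown actors are intercepted, not processed."""
    if request.get("agent_id") not in KNOWN_AGENTS:
        return None                                 # sentinel path: quarantine
    return request


def enforcer(request: dict):
    """Policy evaluation: unknown actions default closed, never open."""
    return POLICY.get(request.get("action"), "AUTHORITY_WITHHELD")


def authority_boundary(outcome: str):
    """Only ALLOW reaches the USB-anchored issuer for a signed token."""
    if outcome != "ALLOW":
        return None
    return {"token": "signed-by-usb-key"}           # stand-in for the Ed25519 token


def handle(request: dict) -> bool:
    """Single enforced path: any stage failing means no execution."""
    admitted = front_door(request)
    if admitted is None:
        return False
    token = authority_boundary(enforcer(admitted))
    return token is not None                        # execute only with a signed token
```

The shape matters more than the code: there is one path to execution, and it ends at the token check, so there is no branch in which execution happens without a token having been issued.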
Who it is for

Organizations where a wrong AI action is not recoverable.

FINANCIAL SERVICES
Payment execution, trade submission, account modification, fraud response. Regulators are asking how AI execution is governed. Behavioral controls are not a satisfying answer to that question.
Federal Reserve SR 11-7 · Model risk governance · AI execution controls
HEALTHCARE
Clinical decision support, medication orders, patient record access. AI execution without a provable authority record creates liability that cannot be defended in examination.
HIPAA · FDA AI guidance · Clinical accountability
DEFENSE
Autonomous systems, logistics, procurement, communications. Authority must be cryptographically provable, hardware-anchored, and revocable by physical removal of a device.
Zero-trust architecture · Hardware-anchored authority · Audit trail

Request a deployment conversation.

Not a demo. Not a trial. A conversation about deploying a sovereign authority node in your environment. Your keys. Your infrastructure. Your authority boundary. VERIDACT has no access to your environment after delivery.

Contact the founder directly
moe@veridact.co
VERIDACT is self-hosted in your AWS environment. Each deployment produces a signed provisioning receipt and a signed environment parity receipt. After delivery, the private signing key exists only on the USB device in your possession. VERIDACT has no ongoing access to your environment.