
Bounded AI: agents inside a replay-safe perimeter.

Suggestions, not decisions. Journals over autonomy.

Y. Oliinyk
AI Engineering Lead · Coreal
Apr 06, 2026 · 12 min read

The compliance problem with autonomous AI

An AI agent that makes autonomous decisions in a regulated financial system creates a problem that is not primarily technical. It is a governance problem. A regulator who asks 'why was this transaction approved?' needs a human-traceable answer. An AI that made the decision autonomously cannot provide one — not in the sense that a regulator will accept.

Coreal's answer to this is bounded AI: agents that operate inside a replay-safe perimeter, with human gates at every decision point that affects customer funds, customer data, or regulatory status. The AI suggests. A human approves. The approval is journaled.
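The suggest-approve-journal loop can be sketched in a few lines of Python. The `Suggestion` shape, the `decide` helper, and every field name below are illustrative assumptions for this article, not Coreal's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Suggestion:
    """An AI recommendation awaiting human review (illustrative schema)."""
    case_id: str
    recommendation: str  # e.g. "approve" or "escalate"
    reasoning: str

def decide(suggestion: Suggestion, reviewer: str, decision: str,
           journal: list) -> str:
    """Human gate: nothing takes effect until a named reviewer decides.

    The AI's recommendation, the reviewer's decision, and whether the
    two disagree are all journaled (a plain append-only list here).
    """
    journal.append({
        "case_id": suggestion.case_id,
        "ai_recommendation": suggestion.recommendation,
        "ai_reasoning": suggestion.reasoning,
        "reviewer": reviewer,
        "decision": decision,
        "override": decision != suggestion.recommendation,
    })
    return decision
```

The key design point is that `decide` takes the reviewer's decision as an input; the AI's recommendation is data in the journal entry, never the action itself.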

The replay-safe requirement

Replay safety means that any action taken by the AI can be reconstructed deterministically from the event log. If an AI agent recommends approving a case, the event log records: the case state at time of recommendation, the AI's reasoning summary, the human approval action, and the outcome. This journal is immutable — it cannot be edited, only appended to.
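A minimal sketch of such an append-only journal, using hypothetical field names rather than Coreal's schema. Hash-chaining each entry to its predecessor is one common way to make after-the-fact edits detectable, which is how "immutable" is enforced in practice:

```python
import hashlib
import json

class Journal:
    """Append-only event journal: entries can be added, never edited."""

    def __init__(self):
        self._entries: list[dict] = []

    def append(self, *, case_state: dict, ai_summary: str,
               human_action: str, outcome: str) -> str:
        """Record one decision point; returns the entry's chain hash."""
        entry = {
            "seq": len(self._entries),
            "case_state": case_state,
            "ai_summary": ai_summary,
            "human_action": human_action,
            "outcome": outcome,
        }
        # Chain each entry to its predecessor's hash, so editing any
        # earlier entry invalidates every hash after it.
        prev = self._entries[-1]["hash"] if self._entries else ""
        entry["hash"] = hashlib.sha256(
            (prev + json.dumps(entry, sort_keys=True)).encode()
        ).hexdigest()
        self._entries.append(entry)
        return entry["hash"]

    def verify(self) -> bool:
        """Recompute the chain; False means the log was tampered with."""
        prev = ""
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expect = hashlib.sha256(
                (prev + json.dumps(body, sort_keys=True)).encode()
            ).hexdigest()
            if e["hash"] != expect:
                return False
            prev = e["hash"]
        return True

    def entries(self) -> list[dict]:
        return [dict(e) for e in self._entries]  # defensive copies
```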

This design makes the AI system auditable in the same way that the BPM engine is auditable: you can replay any case from its initial state and see every decision point, every actor, every timestamp. The AI is not a black box — it is a well-documented participant in a workflow.
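Replay can then be a pure function of the initial state and the event log. A sketch under assumed field names (again, not Coreal's schema): applying the same log to the same initial state always yields the same result, which is what makes the reconstruction deterministic:

```python
def replay(initial_state: dict, journal: list[dict]) -> dict:
    """Deterministically rebuild a case's final state from its journal.

    Each entry is assumed to carry the journaled human decision
    ("outcome") and the actor ("human_action"); replay applies entries
    in order, touching nothing outside its inputs.
    """
    state = dict(initial_state)
    for entry in journal:
        state["status"] = entry["outcome"]
        state["decided_by"] = entry["human_action"]
        state["history"] = state.get("history", []) + [entry["outcome"]]
    return state
```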

What we actually use AI for

Case triage: routing inbound compliance cases to the right analyst based on case type, jurisdiction, and analyst workload.

Risk scoring: generating a risk summary for a transaction that requires manual review, with the KYT result, account history, and peer comparison surfaced in structured form.

Document extraction: pulling structured data from uploaded KYC documents (passports, utility bills, company filings) and pre-filling the review form.

In all three cases, the AI is making the human's work faster, not replacing it. The override rate — cases where a human reviewer disagrees with the AI's recommendation — is 7%. That is a healthy number: low enough that the AI is useful, high enough that humans are actually reviewing.
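For illustration, the override rate is just the share of cases where the human decision differs from the AI recommendation. The (recommendation, decision) pair shape below is an assumption for the sketch:

```python
def override_rate(pairs: list[tuple[str, str]]) -> float:
    """Fraction of (ai_recommendation, human_decision) pairs that
    disagree. Returns 0.0 for an empty input rather than dividing
    by zero."""
    if not pairs:
        return 0.0
    return sum(ai != human for ai, human in pairs) / len(pairs)
```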
