When the EU AI Act entered into force in August 2024, enterprise compliance teams began a familiar exercise: map our AI use cases, assess risk levels, document our models. This is the right instinct. But there's a significant portion of the Act's requirements that is receiving almost no attention — and it's the part that will drive the most enforcement actions in regulated industries.

The Act's provisions on human oversight don't just require that humans can intervene in AI systems. They require that you can prove they did. For high-risk AI applications — a category that covers most AI used in financial services, hiring, credit, healthcare, and critical infrastructure — the documentation requirements go far beyond what most companies have built.

The Two Compliance Problems Most Teams Are Solving

Ask a compliance officer what the EU AI Act requires and you'll typically hear two things: first, a conformity assessment for high-risk systems; second, some form of bias and accuracy testing. These are real requirements. They are also the easier requirements, because they are largely one-time or periodic exercises. You build a documentation package. You do an assessment. You file it.

What most teams are not adequately addressing is the ongoing requirement — the real-time documentation of human oversight that the Act demands for each significant AI-assisted decision. Article 14 doesn't just require that a human is "in the loop." It requires that the human oversight is meaningful, that it can be demonstrated, and that records of it are maintained.

Article 14 — Human Oversight

The EU AI Act requires that high-risk AI systems be designed and developed in such a way that they can be "effectively overseen by natural persons during the period in which they are in use." Read alongside Article 12, which requires that high-risk systems technically allow the automatic recording of events (logs) over their lifetime, this is not merely a policy requirement. It is an operational and documentation requirement. You must be able to show, for each consequential AI action, that a qualified human reviewed and authorized it.

What "Meaningful" Human Oversight Actually Requires

The Act draws a distinction — implicitly but clearly — between rubber-stamp oversight and meaningful oversight. A compliance program where humans are technically "in the loop" but approve 400 AI-generated documents per day with two seconds of review per document is not what the Act envisions. And regulators enforcing the Act will ask pointed questions about whether your oversight was genuine.

Meaningful oversight under the Act requires three things that most AI systems don't currently document: the identity of the specific human who reviewed the AI output, evidence that the review was substantive and not automated, and a record that links the human's authorization to the specific AI output it covers. Generic "a human approved this" logs don't satisfy this.
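
To make those three elements concrete, here is a minimal sketch of what one oversight record might contain. The schema and field names are hypothetical, not taken from the Act; the point is that identity, review evidence, and linkage live in a single record, with the AI output pinned by a content hash rather than a free-text description.

```python
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass(frozen=True)
class OversightRecord:
    """One record per AI-assisted decision (illustrative schema)."""
    reviewer_id: str          # a specific person, not a shared account
    reviewer_role: str        # documented qualification / authorization level
    output_sha256: str        # hash of the exact AI output that was reviewed
    review_started_at: str    # ISO 8601, UTC
    review_completed_at: str  # ISO 8601, UTC
    decision: str             # "approved" | "rejected" | "modified"

def record_for(output_text: str, reviewer_id: str, reviewer_role: str,
               started: str, completed: str, decision: str) -> OversightRecord:
    # Hashing the output binds the authorization to this exact content.
    digest = hashlib.sha256(output_text.encode("utf-8")).hexdigest()
    return OversightRecord(reviewer_id, reviewer_role, digest,
                           started, completed, decision)

rec = record_for("Loan application 4411: recommend decline.",
                 "j.smith", "senior-credit-officer",
                 "2025-01-15T09:12:03Z", "2025-01-15T09:14:41Z", "approved")
print(json.dumps(asdict(rec), indent=2))
```

A record shaped like this answers all three questions in one lookup: who reviewed, what evidence of review exists, and which exact output was covered.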

The Identity Problem

Your AI platform may log that "a user" approved an output. But the Act's enforcement framework requires that you can identify the specific qualified person who exercised oversight — their role, their authorization level, and whether they had the competence to evaluate the AI output they were approving. This means authorization records need to be tied to specific individuals with documented competence, not just user accounts.

The Substantive Review Problem

If your oversight mechanism is a "confirm" button that takes two seconds to click, you don't have meaningful oversight. You have the appearance of oversight. Regulators will look at the time between AI output generation and human authorization. They will look at whether the human had access to sufficient information to evaluate the output. A cryptographic record that includes the context of the review — what the human saw, what they confirmed, how long it took — is far more defensible than a click timestamp.
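
One way to capture that context, sketched below under the assumption that the review happens in a single session: freeze a hash of exactly what the reviewer was shown when the review opens, and measure duration with a monotonic clock so the elapsed time can't be skewed by a wall-clock adjustment. The function names are illustrative.

```python
import hashlib
import json
import time
from datetime import datetime, timezone

def start_review(ai_output: str, context_shown: dict) -> dict:
    """Open a review session, freezing exactly what the reviewer saw."""
    return {
        "output_sha256": hashlib.sha256(ai_output.encode("utf-8")).hexdigest(),
        # Hash of the evidence presented to the reviewer, not just the output.
        "context_sha256": hashlib.sha256(
            json.dumps(context_shown, sort_keys=True).encode("utf-8")).hexdigest(),
        "started_at": datetime.now(timezone.utc).isoformat(),
        "started_monotonic": time.monotonic(),
    }

def finish_review(session: dict, decision: str) -> dict:
    """Close the session; duration comes from a monotonic clock."""
    session = dict(session)
    session["decision"] = decision
    session["completed_at"] = datetime.now(timezone.utc).isoformat()
    session["review_seconds"] = round(
        time.monotonic() - session.pop("started_monotonic"), 3)
    return session

s = start_review("Recommend decline.",
                 {"applicant_summary": "redacted", "model_version": "v3.2"})
rec = finish_review(s, "approved")
```

A two-second `review_seconds` across thousands of records is exactly the pattern a regulator would probe; recording it honestly is what makes the defensible cases defensible.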

The Linkage Problem

The most common gap is what we call the linkage problem. A company may have AI usage logs and separately may have approval records in a workflow system. But can they demonstrate, for a specific AI output in a specific client interaction on a specific date, exactly which human reviewed it and what they confirmed? If the answer requires reconstructing the chain from multiple systems that don't cryptographically link to each other, the audit will be uncomfortable.
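
The fix for the linkage problem is mechanically simple: embed a content hash of the AI output inside the approval record itself, so the link can be verified by recomputation rather than reconstructed from two systems' timestamps. A minimal sketch with hypothetical function names:

```python
import hashlib

def link_approval(ai_output: str, approval: dict) -> dict:
    """Embed the output's content hash in the approval record itself."""
    approval = dict(approval)
    approval["output_sha256"] = hashlib.sha256(
        ai_output.encode("utf-8")).hexdigest()
    return approval

def verify_link(ai_output: str, approval: dict) -> bool:
    """Recompute the hash from the stored output; any drift breaks the link."""
    return approval.get("output_sha256") == hashlib.sha256(
        ai_output.encode("utf-8")).hexdigest()

output = "Candidate ranked 2nd of 40; recommend interview."
approval = link_approval(output, {"reviewer": "a.jones", "decision": "approved"})
assert verify_link(output, approval)            # intact chain
assert not verify_link(output + " ", approval)  # any alteration is detectable
```

Note what this buys in an audit: instead of arguing that a workflow entry "corresponds to" a log entry, you demonstrate the correspondence by recomputing one hash.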

Typical Current State
  • AI usage logs in one system
  • Approval records in a separate workflow tool
  • No cryptographic link between the two
  • Generic user IDs, not qualified-person records
  • Timestamps from mutable system clocks

EU AI Act Ready
  • Single canonical record per AI action
  • Cryptographic link: AI output → human authorization
  • Named qualified person with documented role
  • Tamper-evident timestamp at moment of authorization
  • Exportable audit bundle for regulatory submission
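
A tamper-evident seal over the canonical record can be as simple as an HMAC over its canonical JSON form — a sketch only; in production the key would live in an HSM or managed key service, and an asymmetric signature may be preferable so verifiers don't hold the signing key.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-use-an-hsm-in-production"  # illustrative only

def seal(record: dict, key: bytes = SIGNING_KEY) -> dict:
    """Attach an HMAC over the canonical JSON form of the record."""
    payload = json.dumps(record, sort_keys=True, separators=(",", ":"))
    sealed = dict(record)
    sealed["hmac_sha256"] = hmac.new(
        key, payload.encode("utf-8"), hashlib.sha256).hexdigest()
    return sealed

def verify_seal(sealed: dict, key: bytes = SIGNING_KEY) -> bool:
    """Recompute the HMAC over everything except the seal itself."""
    body = {k: v for k, v in sealed.items() if k != "hmac_sha256"}
    payload = json.dumps(body, sort_keys=True, separators=(",", ":"))
    expected = hmac.new(
        key, payload.encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed.get("hmac_sha256", ""))

rec = {"reviewer": "j.smith", "decision": "approved",
       "output_sha256": "ab12cd34", "timestamp": "2025-01-15T09:14:41Z"}
sealed = seal(rec)
assert verify_seal(sealed)
assert not verify_seal(dict(sealed, decision="rejected"))  # edit breaks the seal
```

Any after-the-fact edit to the reviewer, decision, or timestamp invalidates the seal — which is precisely what makes the record, rather than the mutable system clock, the source of truth.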

The Penalty Calculus

The EU AI Act's penalty for non-compliance with the obligations for high-risk systems is up to €15 million or 3% of global annual turnover, whichever is higher — rising to €35 million or 7% for prohibited AI practices. For most enterprises, these are not abstract numbers. And unlike GDPR, where early enforcement was slow and fines were often negotiated down, the EU is signaling aggressive enforcement of AI Act provisions, particularly for financial services and healthcare applications.

The first enforcement actions will almost certainly target high-risk use cases where the documentation trail is weakest — AI in credit decisions, AI in hiring, AI in clinical workflows. If you operate in these spaces and your human oversight documentation is a set of workflow logs and click records, you are a high-value enforcement target.

What to Build Now

The companies that will be in the strongest position when the first EU AI Act enforcement actions arrive are those that have built the authorization record at the point of the AI action — not reconstructed after the fact. This means instrumenting your AI workflows to capture, at the moment of human review: the identity and role of the reviewer, the specific AI output being authorized, the context in which the review occurred, and a cryptographic seal on the entire package.

This is not a compliance documentation project. It is an infrastructure project. The good news is that it is a solved infrastructure problem — one that doesn't require you to change how your workflows operate, only to add a signing layer to what already happens.
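
The "signing layer" idea can be sketched as a decorator around whatever approval handler already exists, so the workflow logic itself is untouched. Everything here is hypothetical — the handler name, the in-memory audit log standing in for an append-only store, and the demo key:

```python
import functools
import hashlib
import hmac
import json
from datetime import datetime, timezone

AUDIT_LOG: list = []   # stand-in for an append-only audit store
KEY = b"demo-key"      # use managed keys in production

def signed_authorization(func):
    """Record and seal every approval the wrapped handler makes."""
    @functools.wraps(func)
    def wrapper(reviewer_id: str, ai_output: str, *args, **kwargs):
        decision = func(reviewer_id, ai_output, *args, **kwargs)
        record = {
            "reviewer_id": reviewer_id,
            "output_sha256": hashlib.sha256(
                ai_output.encode("utf-8")).hexdigest(),
            "decision": decision,
            "authorized_at": datetime.now(timezone.utc).isoformat(),
        }
        payload = json.dumps(record, sort_keys=True).encode("utf-8")
        record["hmac_sha256"] = hmac.new(
            KEY, payload, hashlib.sha256).hexdigest()
        AUDIT_LOG.append(record)
        return decision
    return wrapper

@signed_authorization
def approve_output(reviewer_id: str, ai_output: str) -> str:
    # The pre-existing workflow logic, untouched.
    return "approved"

approve_output("j.smith", "Recommend decline.")
```

Because the decorator only observes the call, existing approval paths keep working; the sealed record is produced at the moment of authorization, not reconstructed later.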

The Compliance Gap That Will Drive Enforcement

Most enterprises will pass their initial EU AI Act conformity assessments. The enforcement actions will come from the ongoing human oversight documentation requirement — the real-time, per-decision record that proves meaningful human review of each significant AI action. If you can't produce a cryptographically authenticated record linking a specific qualified person to a specific AI output, you are not yet compliant. The window to build that infrastructure before the first enforcement wave is closing.