Somewhere right now, a financial advisor is using an AI assistant to draft a client communication. A compliance officer is reviewing an AI-generated summary of a regulatory filing. A doctor is approving an AI-drafted patient note. A lawyer is editing an AI-generated contract clause. In each case, the AI produced the content and a human — in theory — reviewed it.

The question that has not yet been definitively answered by courts, regulators, or legal doctrine is: when the AI-generated content turns out to be wrong, harmful, or non-compliant, who is responsible? The answer that is emerging from early case law and regulatory guidance is clearer than many enterprises have prepared for.

The Accountability Gap

Traditional professional liability is built around the concept of a responsible person: the advisor who gave advice, the physician who ordered treatment, the attorney who signed the brief. That person's training, judgment, and review of the specific output are the basis for their liability. They can be deposed. Their reasoning can be examined. Their review of the specific decision can be reconstructed.

AI introduces a new problem: in many workflows, the "responsible person" reviewed the AI's output but may not have meaningfully engaged with it. They approved it quickly, trusting the model. They may not have had the specific expertise to evaluate it. And — critically — there is often no record of what they actually reviewed, how long they spent, or what they confirmed.

The Emerging Legal Standard

Courts and regulators are converging on a simple principle: if a human put their name on it, they are responsible for it — regardless of whether AI generated it. The corollary is that the human must have genuinely reviewed it, and you must be able to prove that they did. "I approved it but I didn't really read it" is not a defense. It is an admission.

Financial Services: The Furthest Along

The financial services industry is furthest ahead in grappling with this question — partly because it is the most heavily regulated and partly because AI-assisted advice is already pervasive. FINRA has issued guidance making clear that registered representatives who use AI tools to generate client communications remain fully responsible for the content of those communications. The supervision requirements that apply to human-generated communications apply equally to AI-generated ones.

What this means in practice: if a registered rep uses an AI tool to draft a suitability analysis and a client later claims the recommendation was inappropriate, the question will not be "what did the AI say?" It will be "what did the registered rep review, what did they confirm, and can you document that review?" A click on an "approve" button with no contextual record of what was reviewed is not going to satisfy FINRA examiners.
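
What that contextual record could look like at the data level is straightforward to sketch. The following Python fragment is illustrative only; the field names and helper are invented rather than drawn from FINRA guidance. The point is that it binds the approval to a hash of the exact text the representative saw and to the specific points they confirmed, not to a bare button click.

    import hashlib
    import json
    from datetime import datetime, timezone

    def record_approval(reviewer_id: str, role: str, document_text: str,
                        confirmations: list[str]) -> dict:
        """Bind an approval to the exact content reviewed, not just to an event."""
        return {
            "reviewer": reviewer_id,
            "role": role,
            # The hash pins down *what* was approved; any later edit to the
            # document will no longer match this digest.
            "content_sha256": hashlib.sha256(document_text.encode()).hexdigest(),
            # What the reviewer affirmatively confirmed, not merely "approved".
            "confirmations": confirmations,
            "approved_at": datetime.now(timezone.utc).isoformat(),
        }

    approval = record_approval(
        "rep-4721", "Registered Representative",
        document_text="Draft suitability analysis for client ...",
        confirmations=["risk tolerance matches recommendation",
                       "fee disclosure is present and current"],
    )
    print(json.dumps(approval, indent=2))

Even a record this small changes the later conversation: the hash establishes which version of the analysis was reviewed, and the confirmations list is evidence of substance rather than evidence of a click.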

Healthcare: The FDA's Position

The FDA's 21 CFR Part 11 framework, which governs electronic records and electronic signatures in FDA-regulated activities, has long required controls that ensure records are trustworthy and that any alteration after creation is detectable. The FDA's emerging guidance on AI in clinical workflows is consistent with this: the physician who authorizes an AI-assisted clinical recommendation remains responsible for that recommendation, and the authorization must be documented in a way that is tamper-evident.

The practical implication is significant. If a hospital system uses AI to assist with diagnostic recommendations and a patient outcome is poor, the question will be whether the physician who authorized the AI's recommendation did so with genuine review and documented authority. An AI system that logs "physician approved" without capturing what they approved and under what circumstances is not consistent with the FDA's requirements for electronic records in clinical contexts.
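
A standard engineering technique for tamper evidence, offered here as a general sketch rather than an FDA-prescribed design, is hash chaining: each log entry incorporates the hash of the entry before it, so editing any historical record breaks every hash that follows. A minimal Python version:

    import hashlib
    import json

    def append_entry(log: list[dict], entry: dict) -> None:
        """Append an entry whose hash covers the previous entry's hash."""
        prev_hash = log[-1]["entry_hash"] if log else "0" * 64
        body = {"prev_hash": prev_hash, **entry}
        serialized = json.dumps(body, sort_keys=True).encode()
        body["entry_hash"] = hashlib.sha256(serialized).hexdigest()
        log.append(body)

    def verify_chain(log: list[dict]) -> bool:
        """Recompute every hash; any post-hoc edit breaks the chain."""
        prev_hash = "0" * 64
        for entry in log:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if body.get("prev_hash") != prev_hash:
                return False
            serialized = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(serialized).hexdigest() != entry["entry_hash"]:
                return False
            prev_hash = entry["entry_hash"]
        return True

    log: list[dict] = []
    append_entry(log, {"event": "ai_recommendation", "physician": "dr-jones"})
    append_entry(log, {"event": "physician_authorized", "physician": "dr-jones"})
    assert verify_chain(log)
    log[0]["event"] = "edited after the fact"   # tampering...
    assert not verify_chain(log)                # ...is detectable

The chain proves internal consistency; anchoring the most recent hash somewhere outside the system, such as a write-once store or a timestamping service, is what prevents an insider from quietly regenerating the whole chain.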

The Specific Documentation You Need

Across financial services, healthcare, legal, and any other regulated use of AI, the documentation of human authorization needs to answer five questions:

  • Who — the specific identified person, with their role and authorization level, not just a user ID
  • What — the specific AI output they reviewed, not a category or type of output
  • When — a tamper-evident timestamp at the moment of authorization, not a system clock timestamp that can be altered
  • Context — sufficient information to demonstrate that the review was substantive, including what the reviewer could see and confirm
  • Integrity — a cryptographic seal that makes any post-hoc alteration of the record detectable

Most enterprise AI workflows can answer the first three questions with varying degrees of confidence. Almost none can answer the fourth and fifth in a way that would satisfy a regulator or survive discovery.
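
To make the fourth and fifth questions concrete, here is a minimal Python sketch of a record that answers all five and then seals itself. Everything in it is an assumption for illustration: the field names, the helper functions, and especially the hard-coded key, which in any real deployment would live in an HSM or key-management service.

    import hashlib
    import hmac
    import json
    from datetime import datetime, timezone

    SIGNING_KEY = b"demo-key-only"  # assumption: a real key lives in an HSM/KMS

    def sealed_authorization(reviewer: dict, ai_output: str, context: dict) -> dict:
        """Build a record answering who / what / when / context, then seal it."""
        record = {
            "who": reviewer,  # name, role, authorization level
            "what": hashlib.sha256(ai_output.encode()).hexdigest(),  # exact output
            # A production system would use a trusted timestamping service,
            # not the local clock.
            "when": datetime.now(timezone.utc).isoformat(),
            "context": context,  # what was displayed and what was confirmed
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["seal"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return record

    def seal_intact(record: dict) -> bool:
        """Integrity: any edit to the record after sealing fails this check."""
        body = {k: v for k, v in record.items() if k != "seal"}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, record["seal"])

One design note: an HMAC proves integrity only to parties who hold the key. Records that must convince an outside examiner typically use an asymmetric signature or anchor their hashes externally, so that verification does not depend on trusting the record keeper.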

What Most Teams Have

  • "User ID 4721 approved document at 14:32:07"
  • Log stored in a mutable database
  • No record of what was actually reviewed
  • No cryptographic link between approval and document
  • No evidence of substantive review

What Regulators Want

  • Named qualified person + role + authorization level
  • Hash of the specific AI output reviewed
  • Cryptographically sealed timestamp
  • Context of review — what they confirmed
  • Tamper-evident record, exportable for discovery

The Organizational Risk Beyond Individual Liability

Individual liability is one risk. Organizational liability is another. When a regulator examines an institution's AI governance program, they are not just looking at whether any individual made a bad decision. They are evaluating whether the institution has adequate systems and controls to ensure that AI is used responsibly. An institution that cannot produce authenticated records of human oversight for its AI-assisted decisions has a systemic control failure — and that failure is itself a basis for regulatory action, independent of whether any specific AI output caused harm.

This is the lesson from the early FINRA guidance on AI supervision: the obligation is not just to supervise individual outputs but to maintain a supervisory system that makes it possible to demonstrate, at any time, that AI is being used in compliance with applicable rules. That system requires, at its foundation, reliable records of human authorization.
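
At the program level, the control is coverage rather than any single record: can the institution enumerate its AI-assisted decisions and produce an intact authorization record for each one, on demand? A toy sweep, with invented field names and assuming each stored record already carries the result of an integrity check like the one sketched earlier, might look like this:

    def supervision_gaps(decisions: list[dict], records: dict[str, dict]) -> list[str]:
        """Return the decisions that cannot be demonstrated as supervised."""
        gaps = []
        for decision in decisions:
            record = records.get(decision["id"])
            if record is None:
                gaps.append(f"{decision['id']}: no authorization record")
            elif not record.get("seal_intact", False):
                gaps.append(f"{decision['id']}: record fails integrity check")
        return gaps

    decisions = [{"id": "rec-001"}, {"id": "rec-002"}]
    records = {"rec-001": {"who": "jane.doe", "seal_intact": True}}
    print(supervision_gaps(decisions, records))
    # ['rec-002: no authorization record']

Run on a schedule, a report like this turns "demonstrate at any time" from an aspiration into a queryable property of the system.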

The Question Every General Counsel Should Be Asking

For every AI-assisted workflow in your organization: if this output were challenged in litigation or a regulatory examination, could you produce a cryptographically authenticated record showing exactly who reviewed it, what they confirmed, and when? If the answer is no — or "we'd have to reconstruct it from several systems" — you have an exposure. The time to fix it is before the investigation, not during it.