Harvey AI Tackles Legal Industry’s Biggest AI Problem – Ethical Walls
Zach Anderson
Mar 12, 2026 17:24
Harvey AI addresses information barrier enforcement for autonomous legal agents, partnering with Intapp to prevent confidential data leakage across client matters.
Legal AI startup Harvey is sounding the alarm on what it calls “the most important unsolved problem” in its industry: preventing autonomous AI agents from accidentally breaching the information barriers that keep law firms out of malpractice court.
The company published a detailed technical framework on March 12, 2026, outlining how traditional ethical wall enforcement breaks down when AI agents—rather than human lawyers—start autonomously accessing firm document systems.
Why Chatbots Were Easy, Agents Are Hard
The shift from simple legal chatbots to what Harvey calls “long horizon agents” creates three fundamental problems that existing compliance systems weren’t built to handle.
First, agents access documents directly. When an AI autonomously pulls 50 documents from a firm’s document management system to review an acquisition agreement, it is making retrieval decisions without human oversight. If one of those documents sits behind an ethical wall, the breach happens before anyone knows to stop it.
Second, agents remember things. Unlike stateless chatbot sessions, long horizon agents maintain context across weeks of work on complex deals. If an agent picks up confidential information while working on Matter A, then gets assigned to Matter B on the opposite side of a conflict, that prior context contaminates the new work. Current ethics rules are clear: this is a violation.
Third, agents work too fast to monitor manually. A junior associate reviews maybe 50 documents daily. An agent processes hundreds in minutes. The supervising partner sees outputs, not the thousands of intermediate steps that produced them.
The Stakes for Law Firms
Harvey doesn’t mince words about consequences. Courts can disqualify entire firms from matters over ethical wall failures. Clients bring malpractice claims. State bars impose disciplinary sanctions. The reputational damage alone can torch a firm’s most valuable client relationships.
Most Am Law 200 firms currently manage walls through Intapp’s conflicts checking system, iManage or NetDocuments access controls, and old-school measures like separate floors and restricted email groups. These work because the boundaries are clear—documents live in folders, people have access lists, and firms can restrict access at every point.
Autonomous agents obliterate that clarity.
Harvey’s Technical Fix
The company announced a partnership with Intapp to integrate ethical wall enforcement directly into its AI platform. The approach centers on three principles.
Every agent operation gets scoped to a specific client matter as a hard security boundary, not just a metadata tag. When Intapp flags a wall between Matter A and Matter B, Harvey’s system enforces that wall at the document retrieval layer, the context layer, and the output layer simultaneously.
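Harvey has not published its implementation, but the idea of a matter scope enforced at the retrieval layer, rather than as advisory metadata, can be sketched roughly as follows (all class and function names here are hypothetical illustrations, not Harvey's API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    doc_id: str
    matter_id: str
    text: str

@dataclass(frozen=True)
class MatterScope:
    """A hard security boundary: every agent operation is bound to one matter."""
    matter_id: str
    walled_matters: frozenset  # matters this scope must never touch

    def allows(self, doc: Document) -> bool:
        # A document passes only if it belongs to the authorized matter
        # and that matter is not behind a wall flagged by conflicts data.
        return (doc.matter_id == self.matter_id
                and doc.matter_id not in self.walled_matters)

def retrieve(scope: MatterScope, corpus: list) -> list:
    """Filter at the retrieval layer: walled documents are dropped
    before they can ever enter the agent's context window."""
    return [d for d in corpus if scope.allows(d)]

corpus = [
    Document("d1", "matter-a", "acquisition agreement"),
    Document("d2", "matter-b", "walled counterparty memo"),
]
scope = MatterScope(matter_id="matter-a", walled_matters=frozenset({"matter-b"}))
print([d.doc_id for d in retrieve(scope, corpus)])  # ['d1']
```

The key design point the article describes is that the same check would be repeated at the context and output layers, so a wall holds even if one layer is bypassed.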
Critically, the system “fails closed” rather than open. If an agent can’t confirm a document falls within its authorized boundary, it skips that document and flags the uncertainty. Work product might be less complete, but the ethical wall stays intact.
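A fail-closed policy means a document is included only on an affirmative match; anything uncertain is skipped and surfaced for review. A minimal sketch of that behavior, assuming documents carry a matter tag that may be missing (names are illustrative, not Harvey's):

```python
def fail_closed_filter(docs, authorized_matter):
    """Fail closed: include a document only when its matter tag
    affirmatively matches the authorized matter. Documents with
    missing or non-matching tags are skipped and flagged, never
    silently included."""
    included, flagged = [], []
    for doc in docs:
        if doc.get("matter_id") == authorized_matter:
            included.append(doc)
        else:
            flagged.append(doc["doc_id"])  # surface the uncertainty for review
    return included, flagged

docs = [
    {"doc_id": "d1", "matter_id": "matter-a"},
    {"doc_id": "d2", "matter_id": None},        # provenance unknown
    {"doc_id": "d3", "matter_id": "matter-b"},  # behind the wall
]
included, flagged = fail_closed_filter(docs, "matter-a")
print([d["doc_id"] for d in included], flagged)  # ['d1'] ['d2', 'd3']
```

Note that the document with unknown provenance is treated the same as a walled one: the work product loses a source, but the wall never opens by default.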
Every document access, context window, and agent session gets logged at a level of detail sufficient to prove in court—after the fact—that no walled information was accessed.
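Court-defensible auditing implies an append-only record of every access decision, including the documents an agent skipped. A toy sketch of such a log, using JSON lines (the record fields are assumptions for illustration, not Harvey's schema):

```python
import json
import time

def log_access(log, session_id, matter_id, doc_id, decision):
    """Append one immutable audit record per document-access decision,
    covering both included and skipped documents, so the full retrieval
    history can be reconstructed after the fact."""
    log.append(json.dumps({
        "ts": time.time(),
        "session": session_id,
        "matter": matter_id,
        "doc": doc_id,
        "decision": decision,  # "included" or "skipped"
    }))

audit_log = []
log_access(audit_log, "sess-42", "matter-a", "d1", "included")
log_access(audit_log, "sess-42", "matter-a", "d3", "skipped")
print(json.loads(audit_log[1])["decision"])  # skipped
```

In production such records would go to tamper-evident storage rather than an in-memory list, but the principle is the same: the log proves what was and was not accessed.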
The Competitive Landscape Heats Up
Harvey isn’t alone in pushing legal AI toward autonomous agents. LegalOn launched five specialized AI agents for in-house legal teams on February 10, 2026, targeting tasks like playbook creation and contract translation. The broader industry is racing toward what researchers call agents that can “own more of the end-to-end lifecycle” of legal work.
But Harvey’s framing suggests firms moving fastest with AI face the greatest exposure. Any law firm piloting agents without auditable ethical wall enforcement is “creating discoverable evidence of inadequate screening procedures,” the company warns. The productivity gains become worthless if they come with malpractice exposure attached.
For firms evaluating AI vendors, Harvey’s message is blunt: if your platform can’t precisely explain how wall enforcement works at both the application and data layers, it’s not ready for sensitive work.
