Pentagon vendor cutoff exposes the AI dependency map most enterprises never built

The federal directive ordering all U.S. government agencies to cease using Anthropic technology comes with a six-month phaseout window. That timeline assumes agencies already know where Anthropic’s models sit inside their workflows. Most don’t.

Neither would most enterprises. The gap between what organizations think they’ve approved and what’s actually running in production is wider than most security leaders realize.

AI vendor dependencies don't stop at the contract you signed; they cascade through your vendors, your vendors' vendors, and the SaaS platforms your teams adopted without a procurement review. Most enterprises have never mapped that chain.

The inventory nobody has run

A January 2026 Panorays survey of 200 U.S. CISOs put a number on the problem: Only 15% said they have full visibility into their software supply chains, up from just 3% a year earlier. And in a separate BlackFog survey of 2,000 workers at companies with more than 500 employees, 49% said they had adopted AI tools without employer approval; 69% of C-suite respondents said they were fine with it.

That’s where undocumented AI vendor dependencies accumulate, invisible to the security team until a forced migration makes them everyone’s problem.

“If you asked a typical enterprise to produce a dependency graph that includes second- and third-order AI calls, they’d be building it from scratch under pressure,” said Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, in an exclusive interview with VentureBeat. “Most security programs were built for static assets. AI is dynamic, compositional, and increasingly indirect.”

When a vendor relationship ends overnight

The directive creates a forced migration unlike anything the federal government has attempted with an AI provider. Any enterprise running critical workflows on a single AI vendor faces the same math if that vendor disappears.

Shadow AI incidents now account for 20% of all breaches, adding as much as $670,000 to average breach costs, IBM’s 2025 Cost of a Data Breach Report found. You can’t execute a transition plan for infrastructure you haven’t inventoried.

You may not have a contract with Anthropic, but your vendors might. A CRM platform could have Claude embedded in its analytics engine. A customer service tool might call it on every ticket you process. You didn't sign for that exposure, but you inherited it, and when a vendor cutoff hits upstream, it cascades downstream fast. The enterprise at the end of that chain doesn't know the dependency exists until something breaks or the compliance letter shows up.

Anthropic has said eight of the 10 largest U.S. companies use Claude. Any organization in those companies’ supply chains has indirect Anthropic exposure, whether they contracted for it or not. AWS and Palantir, which hold billions in military contracts, may need to reassess their commercial relationships with Anthropic to maintain Pentagon business.

The supply chain risk designation means any company doing business with the Pentagon now has to prove its workflows don’t touch Anthropic.

“Models are not interchangeable,” Baer told VentureBeat. “Switching vendors changes output formats, latency characteristics, safety filters, and hallucination profiles. That means revalidating controls, not just functionality.”

She outlined a sequence that starts with triage and blast radius assessment, moves to behavioral drift analysis, and ends with credential and integration churn. “Rotating keys is the easy part,” Baer said. “Untangling hardcoded dependencies, vendor SDK assumptions, and agent workflows is where things break.”

The dependencies your logs don't show

A senior defense official described disentangling from Claude as an “enormous pain in the ass,” according to Axios. If that’s the assessment inside the most well-resourced security apparatus on the planet, the question for enterprise CISOs is straightforward: How long would yours take?

The shadow IT wave that followed SaaS adoption taught security teams about unsanctioned technology risk. Most caught up. They deployed CASBs, tightened SSO, and ran spend analysis. The tools worked because the threat was visible. A new application meant a new login, a new data store, a new entry in the logs.

AI vendor dependencies don’t leave those traces.

“Shadow IT with SaaS was visible at the edges,” Baer said. “AI dependencies are embedded inside other vendors’ features, invoked dynamically rather than persistently installed, non-deterministic in behavior, and opaque. You often don’t know which model or provider is actually being used.”

Four moves for Monday morning

The federal directive didn’t create the AI supply chain visibility problem. It exposed it.

She recommended four concrete moves a security leader can execute in 30 days. “Not ‘inventory your AI,’ because that’s too abstract and too slow,” Baer told VentureBeat.

Map execution paths, not vendors. Instrument at the gateway, proxy, or application layer to log which services are making model calls, to which endpoints, with what data classifications. You’re building a live map of usage, not a static vendor list.
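
As a rough illustration, here is what that logging hook could look like in Python. This is a minimal sketch, not any particular proxy's API; the host list, classify_payload stub, and log destination are all illustrative assumptions.

```python
# Hypothetical sketch of an egress logging hook. KNOWN_MODEL_HOSTS,
# classify_payload, and log_model_call are illustrative names, not a
# real proxy's API; ship the record to your SIEM instead of stdout.
import json
import time

KNOWN_MODEL_HOSTS = {
    "api.anthropic.com": "anthropic",
    "api.openai.com": "openai",
    "bedrock-runtime.us-east-1.amazonaws.com": "aws-bedrock",
}

def classify_payload(body: bytes) -> str:
    """Stub: tag the request body with a data classification."""
    return "unclassified"  # replace with your DLP/classifier hook

def log_model_call(source_service: str, host: str, body: bytes) -> None:
    """Record which internal service called which model endpoint."""
    provider = KNOWN_MODEL_HOSTS.get(host)
    if provider is None:
        return  # not a model endpoint we track
    print(json.dumps({
        "ts": time.time(),
        "service": source_service,   # which internal service made the call
        "provider": provider,        # which vendor it actually hit
        "endpoint": host,
        "data_class": classify_payload(body),
    }))
```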

Identify control points you actually own. If your only control is at the vendor boundary, you’ve already lost. You want enforcement at ingress (what data goes into models), egress (what outputs are allowed downstream), and orchestration layers where agents and pipelines operate.
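
A minimal Python sketch of that pattern, assuming redact and is_allowed_downstream as hypothetical stand-ins for whatever DLP and output-filtering controls you actually own:

```python
# Hypothetical sketch of ingress/egress enforcement around a model call.
# redact and is_allowed_downstream stand in for controls you actually own.

class PolicyViolation(Exception):
    pass

def redact(prompt: str) -> str:
    """Ingress control: mask data classes the model must not see."""
    return prompt.replace("SSN:", "[REDACTED]:")  # placeholder rule

def is_allowed_downstream(output: str) -> bool:
    """Egress control: decide whether output may flow to the next system."""
    return "CONFIDENTIAL" not in output  # placeholder rule

def guarded_model_call(call_model, prompt: str) -> str:
    safe_prompt = redact(prompt)           # enforcement at ingress
    output = call_model(safe_prompt)       # the vendor boundary you don't own
    if not is_allowed_downstream(output):  # enforcement at egress
        raise PolicyViolation("output blocked before downstream systems")
    return output
```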

Run a kill test on your top AI dependency. Pick your most critical AI vendor and simulate its removal in a staging environment. Kill the API key, monitor for 48 hours, and document what breaks, what silently degrades, and what throws errors your incident response playbook doesn’t cover. This exercise will surface dependencies you didn’t know existed.
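
Here's one way the exercise might be scripted, as a hedged Python sketch; WORKFLOWS and run_workflow are placeholders for your own staging suite, and the environment-variable swap merely simulates a revoked key rather than revoking anything real.

```python
# Hypothetical kill-test harness for a staging environment. WORKFLOWS and
# run_workflow are placeholders for your own staging suite; the env-var
# swap simulates a revoked key, it does not revoke anything real.
import os

WORKFLOWS = ["ticket_triage", "report_summary", "crm_enrichment"]

def run_workflow(name: str) -> str:
    """Stub: run one staging workflow end to end and return its output."""
    raise ConnectionError(f"{name}: model endpoint unreachable")  # simulated

def kill_test() -> dict:
    os.environ["ANTHROPIC_API_KEY"] = "revoked-for-kill-test"
    results = {}
    for name in WORKFLOWS:
        try:
            output = run_workflow(name)
            # It still "worked": check whether quality silently degraded.
            results[name] = "ok" if output else "silent_degradation"
        except Exception as exc:
            results[name] = f"hard_failure: {exc}"
    return results

if __name__ == "__main__":
    for workflow, outcome in kill_test().items():
        print(f"{workflow}: {outcome}")
```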

Force vendor disclosure on sub-processors and models. Your AI vendors should be able to answer which models they rely on, where those models are hosted, and what fallback paths exist. If they can’t, that’s your fourth-party blind spot. Ask the questions now, while the relationship is stable. Once a cutoff hits, the leverage shifts, and the answers come too late.
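
Those answers are easier to audit as a structured record than as an email thread. A minimal Python sketch, with illustrative field names and an invented example vendor:

```python
# Hypothetical disclosure record to demand from each AI vendor.
# Field names are illustrative; adapt to your procurement questionnaire.
from dataclasses import dataclass, field

@dataclass
class SubProcessorDisclosure:
    vendor: str                     # your direct vendor
    models_used: list[str]          # which models it relies on
    hosting: str                    # where those models actually run
    fallback_providers: list[str] = field(default_factory=list)

    def blind_spots(self) -> list[str]:
        """Flag the fourth-party gaps a cutoff would expose."""
        gaps = []
        if not self.models_used:
            gaps.append("vendor cannot name the models it relies on")
        if not self.fallback_providers:
            gaps.append("no fallback path if the primary provider is cut off")
        return gaps

record = SubProcessorDisclosure(
    vendor="crm-platform",                # invented example
    models_used=["claude-sonnet-4"],      # illustrative
    hosting="vendor cloud, us-east",
)
print(record.blind_spots())  # ['no fallback path if the primary provider is cut off']
```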

The control illusion

“Enterprises believe they’ve ‘approved’ AI vendors, but what they’ve actually approved is an interface, not the underlying system,” Baer told VentureBeat. “The real dependencies are one or two layers deeper, and those are the ones that fail under stress.”

The federal directive against Anthropic is one organization’s weather event. Every enterprise will eventually face its own version, whether the trigger is regulatory, contractual, operational, or geopolitical. The organizations that mapped their AI supply chain before the storm will recover. The ones that didn’t will scramble.

Map your AI vendor dependencies to the sub-tier level. Run the kill test. Force the disclosure. Give yourself 30 days. The next forced migration won’t come with a six-month warning.


