CrowdStrike & NVIDIA’s open source AI gives enterprises the edge against machine-speed attacks
Every SOC leader knows the feeling: drowning in alerts, blind to the real threat, stuck playing defense in a war waged at the speed of AI.
Now CrowdStrike and NVIDIA are flipping the script. Armed with autonomous agents powered by Charlotte AI and NVIDIA Nemotron models, security teams aren't just reacting; they're striking back at attackers before their next move. Welcome to cybersecurity's new arms race. Combining open source's many strengths with agentic AI will shift the balance of power against adversarial AI.
CrowdStrike and NVIDIA's agentic ecosystem combines Charlotte AI AgentWorks, NVIDIA Nemotron open models, synthetic data generation with NVIDIA NeMo Data Designer, the NVIDIA NeMo Agent Toolkit, and NVIDIA NIM microservices.
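A practical note for teams sizing up this stack: NIM microservices expose an OpenAI-compatible API, so querying a Nemotron model takes little more than standard client tooling. The minimal sketch below assumes NVIDIA's hosted endpoint and an example Nemotron model name; neither reflects CrowdStrike's actual integration.

```python
# Minimal sketch: querying a Nemotron model served through an NVIDIA NIM
# microservice, which exposes an OpenAI-compatible API. The endpoint and
# model name are illustrative; a self-hosted NIM endpoint works the same way.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # hosted NIM API; swap for your own deployment
    api_key=os.environ["NVIDIA_API_KEY"],            # load from a secret store in practice
)

response = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",  # example Nemotron model
    messages=[
        {"role": "system", "content": "You are a SOC triage assistant."},
        {"role": "user", "content": "Summarize why this alert may be a false positive: ..."},
    ],
    temperature=0.2,  # keep triage output conservative and repeatable
)
print(response.choices[0].message.content)
```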
"This collaboration redefines security operations by enabling analysts to build and deploy specialized AI agents at scale, leveraging trusted, enterprise-grade security with Nemotron models," writes Bryan Catanzaro, vice president, Applied Deep Learning Research at NVIDIA.
The partnership is designed to enable autonomous agents to learn quickly, reducing risks, threats, and false positives. That takes a heavy load off SOC leaders and their teams, who battle alert fatigue nearly every day, much of it driven by noisy, inaccurate data.
The announcement at GTC Washington, D.C., signals the arrival of machine-speed defense that can finally match machine-speed attacks.
Transforming elite analyst expertise into datasets at machine scale
What differentiates the partnership is how the AI agents are designed to continually aggregate telemetry, including insights from CrowdStrike Falcon Complete Managed Detection and Response (MDR) analysts.
"What we're able to do is take the intelligence, take the data, take the experience of our Falcon Complete analysts, and turn these experts into datasets. Turn the datasets into AI models, and then be able to create agents based on, really, the whole composition and experience that we've built up within the company so that our customers can benefit at scale from these agents always," said Daniel Bernard, CrowdStrike's Chief Business Officer, during a recent briefing.
Capitalizing on the strengths of NVIDIA's Nemotron open models, organizations will be able to keep their autonomous agents learning continually by training on datasets from Falcon Complete, the world's largest MDR service, which handles millions of triage decisions monthly.
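To make that "experts into datasets" pipeline concrete, a minimal sketch of the conversion step might look like the following. The record fields and file names are hypothetical, since the Falcon Complete schema is not public.

```python
# Illustrative sketch of the "experts into datasets" step: converting
# human-annotated triage decisions into a chat-style JSONL fine-tuning set.
# Field names (alert_summary, analyst_verdict, rationale) are hypothetical.
import json

def triage_record_to_example(record: dict) -> dict:
    """Map one annotated triage decision to a supervised training example."""
    return {
        "messages": [
            {"role": "system", "content": "Classify the alert as true_positive or false_positive and explain why."},
            {"role": "user", "content": record["alert_summary"]},
            {"role": "assistant", "content": f'{record["analyst_verdict"]}: {record["rationale"]}'},
        ]
    }

# Stream one annotated decision per line in, one training example per line out.
with open("triage_decisions.jsonl") as src, open("train.jsonl", "w") as dst:
    for line in src:
        dst.write(json.dumps(triage_record_to_example(json.loads(line))) + "\n")
```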
CrowdStrike has prior experience with AI detection triage, having already launched a service that scales the capability across its customer base. Charlotte AI Detection Triage, designed to integrate into existing security workflows and continuously adapt to evolving threats, automates alert assessment with over 98% accuracy and cuts manual triage by more than 40 hours per week.
Explaining how Charlotte AI Detection Triage delivers that level of performance, Elia Zaitsev, CrowdStrike's chief technology officer, told VentureBeat: "We wouldn't have achieved this without the support of our Falcon Complete team. They perform triage within their workflow, manually addressing millions of detections. The high-quality, human-annotated dataset they provide is what enabled us to reach an accuracy of over 98%."
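The mechanics behind numbers like these typically come down to confidence gating: auto-close only the alerts the model confidently labels benign, and escalate everything else to a human. The sketch below illustrates that general pattern with a hypothetical classify_alert() stand-in; it is not Charlotte AI's actual interface or threshold.

```python
# Hedged sketch of threshold-gated alert triage: a verdict only auto-closes
# an alert when the model is confident it is benign; everything else is
# escalated to a human analyst. classify_alert() is a hypothetical stand-in
# for whatever model call backs the triage agent.
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str         # "true_positive" or "false_positive"
    confidence: float  # model-reported confidence in [0, 1]

def classify_alert(alert: dict) -> Verdict:
    # Placeholder heuristic standing in for the real triage model call.
    benign = "test" in alert.get("description", "").lower()
    return Verdict("false_positive" if benign else "true_positive",
                   0.99 if benign else 0.60)

def triage(alerts: list[dict], threshold: float = 0.98):
    auto_closed, escalated = [], []
    for alert in alerts:
        verdict = classify_alert(alert)
        if verdict.label == "false_positive" and verdict.confidence >= threshold:
            auto_closed.append(alert)   # confident benign: close without review
        else:
            escalated.append(alert)     # anything uncertain goes to an analyst
    return auto_closed, escalated

closed, queued = triage([
    {"description": "Scheduled test scan from IT"},
    {"description": "Credential dumping attempt via LSASS"},
])
print(f"auto-closed: {len(closed)}, escalated: {len(queued)}")
```

The threshold is the design lever: set it high and the agent only removes the safest busywork; set it lower and it saves more hours at the cost of more review of its decisions.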
Lessons learned with Charlotte AI Detection Triage apply directly to the NVIDIA partnership, increasing the value it can deliver to SOCs that need help coping with the deluge of alerts.
Open source is table stakes for this partnership to work
NVIDIA's Nemotron open models address what many security leaders identify as the most critical barrier to AI adoption in regulated environments: a lack of clarity about how a model works, what is in its weights, and how secure it is.
Justin Boitano, vice president of enterprise and edge computing at NVIDIA, explained during a recent press briefing: "Open models are where people start in trying to build their own specialized domain knowledge. You want to own the IP ultimately. Not everybody wants to export their data, and then sort of import or pay for the intelligence that they consume. A lot of sovereign countries, many enterprises in regulated industries want to maintain all that data privacy and security."
John Morello, CTO and co-founder of Gutsy (now Minimus), told VentureBeat that "the open-source nature of Google's BERT open-source language model allows Gutsy to customize and train their model for specific security use cases while maintaining privacy and efficiency." Morello emphasized that practitioners cite "more transparency and better assurances of data privacy, along with great availability of expertise and more integration options across their architectures, as key reasons for going with open source."
Keeping adversarial AI's balance of power in check
DJ Sampath, senior vice president of Cisco's AI software and platform group, articulated the industry-wide imperative for open-source security models during a recent interview with VentureBeat: "The reality is that attackers have access to open-source models too. The goal is to empower as many defenders as possible with robust models to strengthen security."
Sampath explained that when Cisco released Foundation-Sec-8B, their open-source security model, at RSAC 2025, it was driven by a sense of responsibility: "Funding for open-source projects has stalled, and there is a growing need for sustainable funding sources within the community. It is a corporate responsibility to provide these models while enabling communities to engage with AI from a defensive standpoint."
The commitment to transparency extends to the most sensitive aspects of AI development. When concerns emerged about DeepSeek R1's training data and potential compromise, NVIDIA responded decisively.
As Boitano explained to VentureBeat, "Government agencies were super concerned. They wanted the reasoning capabilities of DeepSeek, but they were a little concerned with, obviously, what might be trained into the DeepSeek model, which is what actually inspired us to completely open source everything in Nemotron models, including reasoning datasets."
For practitioners managing open-source security at scale, this transparency is core to their business. Itamar Sher, CEO of Seal Security, emphasized to VentureBeat that "open-source models offer transparency," though he noted that "managing their cycles and compliance remains a significant concern." Sher's company uses generative AI to automate vulnerability remediation in open-source software, and as a recognized CVE Numbering Authority (CNA), Seal can identify, document, and assign CVE IDs to vulnerabilities, strengthening security across the ecosystem.
A key partnership goal: bringing intelligence to the edge
"Bringing the intelligence closer to where data is and decisions are made is just going to be a big advancement for security operations teams around the industry," Boitano emphasized. This edge deployment capability is especially critical for government agencies with fragmented and often legacy IT environments.
VentureBeat asked Boitano how the initial discussions went with government agencies briefed on the partnership and its design goals before work began. "The feeling across agencies that we've talked to is they always feel like, unfortunately, they're behind the curve on these technology adoption," Boitano explained. "The response was, anything you guys can do to help us secure the endpoints. It was a tedious and long process to get open models onto these, you know, higher side networks."
NVIDIA and CrowdStrike have done the foundational work, including STIG hardening, FIPS-compliant encryption, and air-gap compatibility, removing the barriers that delayed open-model adoption on high-side networks. The NVIDIA AI Factory for Government reference design provides comprehensive guidance for deploying AI agents in federal and other high-assurance organizations while meeting the strictest security requirements.
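As a generic illustration of what such hardening implies operationally (not NVIDIA's or CrowdStrike's actual implementation), a deployment guardrail might verify the Linux kernel's FIPS flag before starting, as in the sketch below.

```python
# Generic illustration: a deployment-time guardrail that refuses to start
# unless the Linux kernel reports FIPS mode, the kind of check high-assurance
# environments expect. /proc/sys/crypto/fips_enabled is a standard kernel
# interface; it is absent on non-FIPS kernels.
from pathlib import Path
import sys

def fips_mode_enabled() -> bool:
    """Read the kernel's FIPS flag; returns False if the file is absent."""
    flag = Path("/proc/sys/crypto/fips_enabled")
    return flag.exists() and flag.read_text().strip() == "1"

if not fips_mode_enabled():
    sys.exit("Refusing to start: kernel is not running in FIPS mode.")
print("FIPS mode confirmed; continuing startup.")
```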
As Boitano explained, the urgency is existential: "Having AI defense that's running in your estate that can search for and detect these anomalies, and then alert and respond much faster, is just the natural consequence. It's the only way to protect against the speed of AI at this point."
