
Deloitte’s Cyber AI Leader on Cybersecurity’s New Arms Race

As generative and agentic AI rapidly embed themselves into the fabric of business operations, cybersecurity teams face a paradox. The very technologies that promise to supercharge productivity and automate complex workflows are also dramatically expanding the attack surface – and attackers are arming themselves with the same AI tools.

“The adversaries are already leveraging the technology to improve and hasten their ability to gain access and infiltrate and do nefarious things,” said Mark Nicholson, U.S. cyber AI leader at Deloitte, in an interview with The AI Innovator. “Cyber professionals who are trying to defend organizations are perennially at an asymmetric disadvantage and so it is incumbent upon them to leverage any tools that they can to improve their ability to defend their organizations.”

That dual-use nature of AI is forcing enterprises to rethink their entire security posture. As boards and C-suite executives push to accelerate AI adoption for efficiency, workforce transformation and productivity, the need to secure data and systems intensifies. A generative AI-enabled chatbot, for example, can answer a customer’s questions in more depth – but doing so means giving the chatbot broader data access, and securing that access catapults cyber issues to the forefront.

“This is an opportunity for cyber, for the first time, to really have an opportunity to accelerate the business ambitions,” Nicholson said. “It’s the first time in my career in cyber − 25 years or so − when we’ve actually had an opportunity to have cyber be an accelerator for what the business is trying to accomplish.”

AI as both threat and defense

The rise of agentic AI − autonomous systems capable of performing complex, multi-step tasks − is already transforming how cybersecurity teams operate. “Leveraging the agents enables a lot of cybersecurity functions if done properly,” he explained.

For years, security orchestration, automation and response (SOAR) systems were cumbersome to build. “Agentic AI now really accelerates the implementation of those types of capabilities, and can even theoretically learn through watching what the human analyst is doing,” he added.
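In outline, an agentic SOAR-style workflow of the kind described here pairs automated triage with a path for human oversight. The following is a minimal, hypothetical Python sketch — the alert fields, scoring weights, thresholds and response actions are all illustrative assumptions, not any vendor's product or Deloitte's implementation:

```python
# Minimal sketch of an agentic SOAR-style triage loop.
# All alert fields, weights, thresholds and actions are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class Alert:
    source: str              # e.g. "edr", "ids"
    severity: int            # 1 (low) .. 5 (critical)
    asset_criticality: int   # 1 .. 5, importance of the affected asset


@dataclass
class TriageAgent:
    # Weights an agent could adjust over time by observing analyst decisions.
    weights: dict = field(default_factory=lambda: {"severity": 0.6, "asset": 0.4})

    def score(self, alert: Alert) -> float:
        return (self.weights["severity"] * alert.severity
                + self.weights["asset"] * alert.asset_criticality)

    def respond(self, alert: Alert) -> str:
        s = self.score(alert)
        if s >= 4.0:
            return "isolate_host"          # contain automatically
        if s >= 2.5:
            return "escalate_to_analyst"   # hand off to a human
        return "log_and_close"


agent = TriageAgent()
print(agent.respond(Alert("edr", severity=5, asset_criticality=5)))  # isolate_host
print(agent.respond(Alert("ids", severity=1, asset_criticality=1)))  # log_and_close
```

The "learn from the analyst" idea Nicholson mentions would correspond, in this toy model, to adjusting the weights whenever a human overrides the agent's chosen action.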

Companies today are experimenting with four dimensions of AI in cybersecurity:

  • Improving the productivity of cybersecurity analysts by leveraging generative and agentic AI
  • Becoming more efficient at administering identity and access management
  • Becoming more effective and efficient at testing the vulnerabilities of their environments through automation
  • Staying on top of vulnerabilities, spotting issues earlier and resolving problems faster than in the past

But the same capabilities are available to attackers. “If groups within the business are developing AI without properly securing them first, there’s a significant protection issue,” he said. Adversaries can also use AI “to find vulnerabilities faster, even write code in some cases that would exploit and create zero-day vulnerabilities and, without a tremendous amount of effort, be able to attack an organization in many cases successfully.”

That AI-versus-AI dynamic, Nicholson argues, is inevitable. Enterprises must assume that cybercriminals, hacktivists and nation-states will weaponize AI − and plan accordingly. “It’s more difficult to defend (the organization) because of everything that’s going on and because of the adversary,” he said, “but it also makes it easier to stay aligned and act at speed.”

Rethinking security from the ground up

Deloitte is working with Google Cloud and other partners to help clients rethink security frameworks for the AI era. Nicholson’s team has deconstructed and reimagined the widely used NIST Cybersecurity Framework to address the realities of AI-driven environments. The goal is to help organizations identify “low-hanging fruit” − specific workflows or inefficiencies where AI can deliver immediate improvements − while embedding robust security controls from the start.

The analysis isn’t just technical. It includes mapping how humans spend their time, pinpointing inefficient processes, and designing step-by-step approaches that build momentum through early wins.

“We want to analyze the workflows, the workloads … not just from a technology workload perspective but also from a human workload perspective: Where are people spending their time and what are the most inefficient processes?” he said. The aim is to “take a very stepwise approach, without trying to do too much at first, to gain a little bit of familiarity and momentum with some use cases that they know they can bring across the table.”

Some companies are even diverting up to half of their managed security services budgets into AI development with the goal of reducing reliance on third-party providers and building more tailored, outcome-driven defenses, according to Nicholson. But this means organizations have to rethink their cybersecurity approaches.

From one-off tools to orchestrated systems

One of the biggest risks Nicholson sees is the temptation to deploy isolated AI tools without a broader orchestration strategy. “If an AI agent is helping a Level 1 security analyst work faster, but it’s not coordinated with identity and access management or vulnerability management processes, you lose a lot of value,” he said. “Let’s get the data layer and the data mesh down properly. Let’s understand our overall objectives so that a CISO has visibility and control.”

He said many enterprises are already experimenting with autonomous “agentic pen testing,” the use of AI agents to probe systems for vulnerabilities with little to no human supervision. Typically, companies use human ethical hackers to simulate attacks to find weaknesses in their systems before real attackers do.

AI agents also can manage and monitor the control environment continuously. “This has been a perennial issue for cyber functions. Controls get in place, but then the ongoing monitoring and the effectiveness of those controls sometimes are a little bit sporadic and difficult to continuously manage,” he said. “We’re seeing some development of AI capabilities that would be able to do that assurance function on a continual basis.”

Identity and access management, a perennial pain point, is also being targeted for disruption. Deloitte is prototyping multimodal AI agents that use computer vision and other techniques to provision and de-provision access across multiple systems − a task that often consumes hundreds of staff hours in large enterprises. “This is a very costly area for a lot of organizations,” he said. “We think this is an area very ripe for disruption.”
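Setting the multimodal vision layer aside, the core task such provisioning agents automate is reconciling the access a user should have against the access they actually hold. A hypothetical sketch — the role names, entitlement strings and mapping table below are invented for illustration:

```python
# Sketch of access reconciliation: the bookkeeping an IAM agent would automate.
# Role-to-entitlement mappings and system names are illustrative assumptions.

ROLE_ENTITLEMENTS = {
    "analyst": {"siem:read", "ticketing:write"},
    "engineer": {"siem:read", "repo:write", "cloud:deploy"},
}


def reconcile(role: str, current: set[str]) -> tuple[set[str], set[str]]:
    """Return (to_grant, to_revoke) for a user in `role` holding `current` access."""
    desired = ROLE_ENTITLEMENTS.get(role, set())
    return desired - current, current - desired


grant, revoke = reconcile("analyst", {"siem:read", "cloud:deploy"})
# grant  -> {"ticketing:write"}   missing entitlement to provision
# revoke -> {"cloud:deploy"}      stale access to de-provision
```

The manual version of this loop — checking each system by hand — is what consumes the staff hours Nicholson refers to; the set difference above is the part that is cheap to automate once the data is in one place.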

From ‘Human in the loop’ to ‘Human on the loop’

As AI systems grow more capable and autonomous, the traditional cybersecurity principle of “human in the loop” − requiring human approval before critical actions − may no longer scale. Nicholson predicts a shift to “human on the loop,” where humans validate and oversee AI decision-making rather than gatekeep it.

There is still transparency, however. “We understand how the AI is making decisions, and we understand the data that’s being accessed, but it’s more of a validation process as opposed to a toll gate that needs to go through a human,” he said. “That’s a very inefficient process and we won’t leverage the full value of AI.”
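The difference between the two oversight models can be made concrete: "in the loop" blocks on approval before acting, while "on the loop" acts at machine speed and queues the decision for after-the-fact review. A minimal, hypothetical sketch (the action names and review queue are illustrative assumptions):

```python
# Contrast: "human in the loop" blocks on approval; "human on the loop"
# acts immediately and records the decision for asynchronous validation.
# Action names and the review-queue structure are illustrative assumptions.

review_queue: list[dict] = []


def human_in_the_loop(action: str, approved_by_human: bool) -> str:
    # Toll gate: nothing executes without a human sign-off.
    return f"executed:{action}" if approved_by_human else "blocked"


def human_on_the_loop(action: str, rationale: str) -> str:
    # Act now; keep the decision and its rationale transparent for human review.
    review_queue.append({"action": action, "rationale": rationale})
    return f"executed:{action}"


human_in_the_loop("block_ip", approved_by_human=False)   # "blocked"
human_on_the_loop("block_ip", "matched known C2 address")  # "executed:block_ip"
```

The transparency Nicholson stresses maps to the rationale recorded alongside each action: the human no longer gates the decision, but can always see why it was made.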

Looking ahead, Nicholson expects AI to become as foundational to cybersecurity as client-server computing or cloud infrastructure. But he cautions that the stakes are higher this time. The internet was built for connectivity, not security, and the industry spent decades patching its vulnerabilities; it must not repeat that mistake with AI.

“We’re going to be in an age of AI versus AI,” he said. “There’s a lot at stake here, and we need to be really thoughtful about how we implement the technology.”

Despite the risks, Nicholson is optimistic. “In every case of technological development, it has generally made human life better,” he said. “I don’t think this is going to be different.”
