Keeping pace with the volume of cyber attacks can be an overwhelming task for many organizations. The advent of generative AI and AI agents as new tools for defense will help, but they will also add complexity to cybersecurity systems. Choosing the right tool is getting trickier as well: The number of AI solutions is growing, crowding an already fragmented market.
The AI Innovator recently caught up with Chris Knackstedt, a managing director in Deloitte’s Cyber AI and Automation practice, to discuss the changing cybersecurity landscape. His advice to companies is this: Even if the cyber landscape is in flux, don’t forget where you came from.
What follows is an edited version of that conversation.
The AI Innovator: You see an expanding threat landscape. Can you share some statistics so we can see more precisely how the situation is changing?
Chris Knackstedt: The threat landscape is continuously changing, and it can be challenging for organizations to contend with. In fact, the U.S. data cut from Deloitte’s Future of Cyber survey found that 43% of respondents said keeping pace with these evolving threats is one of the top strategic challenges within their cyber programs, and almost half of respondents reported at least one cyber breach within the last year.
However, organizations are starting to realize the role AI can play in mitigating those risks. In that same survey, 43% of respondents said they are increasingly using AI capabilities to improve their cybersecurity programs. And in Deloitte’s recent State of Generative AI in the Enterprise Q4 report, cybersecurity respondents reported the highest ROI on their gen AI implementations, surpassing their expectations.
How do you see these trends evolving in the next five to 10 years?
Just as the last five to 10 years have brought continuous growth in cyber risk and complexity, with the advent and growth of cloud, service-oriented technology architectures, IoT, increased internal and third-party system integrations and so on, the next five to 10 years are poised to continue this trend.
However, new and emerging technology trends will introduce even more complexity into cybersecurity as new solutions built around generative AI, AI agent-based technology architectures and Business as a Service continue to mature and proliferate across businesses and institutions. These emerging technologies are advancing at a rapid pace, and as a result the risk landscape remains very much unsettled and unknown.
Ultimately, businesses and organizations will need to continue taking responsibility for the safe adoption and use of these technologies as the risk and regulatory landscape around transformational technologies, such as AI, continues to unfold and materialize.
What are the biggest challenges organizations face in implementing these solutions?
In general, the biggest challenges we see in adopting AI across organizations, and not just in cyber, include the following:
AI literacy: Many of these new AI technologies represent new ways of looking at and addressing business problems, which require enhanced understanding and awareness across the enterprise. In cyber, for instance, leveraging AI for threat detection is a large shift from traditional, signature-based detection capabilities.
Security organizations must build competency around how AI analyzes and processes data and how model outputs should be interpreted and acted upon in the context of cyber incident response and remediation.
Human-in-the-loop process development: Adopting AI and automation requires organizations to build and re-engineer operating models and processes to integrate the outputs and insights into business operations. Organizations need to be aware of the capabilities and constraints of AI technologies as well as the intended functions that these AI tools are meant to serve. From there, processes need to be altered and conformed to leverage the AI outputs without over-relying on them. No AI system is fully autonomous; it still requires human supervision and intervention.
AI capability and technology rationalization: The technology market for AI solutions has grown significantly over the past 24 months. With the advent of generative AI and other AI breakthroughs in machine learning and deep learning, many new and legacy technology vendors are offering AI capabilities across their solutions and platforms.
This is leading to complexity and confusion for organizations as they rationalize these new AI capabilities against the business problems and use cases they want AI to help them solve. The issue is particularly acute in cybersecurity, where the technology market is highly fragmented and many large organizations are using dozens of these technologies to secure their digital enterprise.
Data management and data protection: The old adage of ‘garbage in, garbage out’ continues to hold true in the new age of AI solutions, and it compounds even further as the data going in and out of systems includes the information entered into and inferred by gen AI models.
The user communities for these new gen AI solutions have expanded to essentially the entire organization, whereas with earlier generations of AI, only data scientists, engineers and experienced users had this type of input access to AI models.
This broader AI audience greatly expands the need for good data management capabilities to ensure that the data used to train and tune models, as well as the high volumes of data users enter to run inference against those models, is moderated and secured.
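To make the data-moderation point more concrete, here is a minimal, hypothetical sketch of the kind of guardrail that can sit between a broad user population and a gen AI model, redacting obviously sensitive patterns before a prompt is sent for inference. The pattern list and function names are illustrative assumptions, not a reference implementation; a real deployment would rely on a vetted DLP or classification service.

```python
import re

# Illustrative patterns only; real deployments would use a mature DLP
# or data-classification service rather than a handful of regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def moderate_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive values from a user prompt and report what was found."""
    findings = []
    redacted = prompt
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(redacted):
            findings.append(label)
            redacted = pattern.sub(f"[REDACTED:{label}]", redacted)
    return redacted, findings

# The redacted prompt is what actually reaches the model; the findings
# can feed the security team's monitoring pipeline.
safe_prompt, findings = moderate_prompt(
    "Summarize this ticket. Customer SSN is 123-45-6789."
)
print(safe_prompt)   # "... Customer SSN is [REDACTED:ssn]."
print(findings)      # ["ssn"]
```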
Can you explain how private LLMs and agentic AI architectures will be used in cybersecurity?
Both private LLMs and agentic AI architectures will find wide-ranging utility across cybersecurity, but in general, these new solution paradigms will help organizations realize deeper value from AI because they allow AI to be customized and tailored to an organization’s unique circumstances.
Private LLMs, both hosted and used by a single organization, will be tailored and fine-tuned with organization-specific domain knowledge and tradecraft in order to answer specific questions about the enterprise risk posture and conduct detailed knowledge extraction across the myriad data and IT systems managed by the company. Using retrieval-augmented generation (RAG) and enterprise knowledge graphs, private LLMs can learn how cybersecurity risks and measures are identified and instituted within a particular organization and can continue to grow and mature with it.
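As a rough illustration of the RAG pattern described above, the sketch below retrieves the internal policy snippets most relevant to a question and bundles them into the prompt sent to a privately hosted model. Everything here, including the toy keyword retrieval, the `query_private_llm` stub and the sample documents, is a hypothetical stand-in for an enterprise vector store and the organization’s own LLM endpoint.

```python
# Minimal RAG sketch: retrieve organization-specific context, then ask a
# privately hosted LLM. Retrieval is a toy keyword-overlap score; a real
# deployment would use an embedding model and a vector store.
INTERNAL_DOCS = [
    "Privileged access to production systems requires MFA and quarterly review.",
    "Third-party SaaS integrations must pass a vendor risk assessment.",
    "Incident severity 1 requires notification of the CISO within one hour.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_terms & set(d.lower().split())), reverse=True)
    return scored[:k]

def query_private_llm(prompt: str) -> str:
    """Placeholder for a call to the organization's privately hosted model."""
    return f"[model response to prompt of {len(prompt)} chars]"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, INTERNAL_DOCS))
    prompt = (
        "Using only the internal policy context below, answer the question.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return query_private_llm(prompt)

print(answer("Who must be notified for a severity 1 incident, and how fast?"))
```

Swapping the toy retrieval for an embedding index over the organization’s own policies, runbooks and asset data is what lets the model answer in terms of that specific enterprise rather than generic best practice.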
Agentic AI architectures will allow for tailored intelligent automation across cybersecurity processes, workflows and playbooks, extending far beyond traditional security orchestration, automation and response (SOAR) solutions. Agentic AI will provide task-specific AI functions that can be composed into playbooks for risk identification, risk management and triage, and risk remediation.
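A hedged sketch of what such an agentic playbook might look like in code: task-specific functions (stubbed out here) chained into an identify-triage-remediate flow, with a human approval gate before any remediation runs. The agent functions, alert format and severity logic are illustrative assumptions, not any particular SOAR or agent framework.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    description: str
    severity: str = "unknown"

# Task-specific "agents" are stubbed; in practice each might wrap an LLM
# call, an enrichment API or an existing SOAR action.
def identify_risk(raw_event: dict) -> Alert:
    return Alert(source=raw_event["source"], description=raw_event["message"])

def triage(alert: Alert) -> Alert:
    alert.severity = "high" if "admin" in alert.description.lower() else "low"
    return alert

def propose_remediation(alert: Alert) -> str:
    return f"Disable account referenced in: {alert.description}"

def human_approves(action: str) -> bool:
    # Human-in-the-loop gate; replace with a ticketing or chat approval step.
    return input(f"Approve action '{action}'? [y/N] ").strip().lower() == "y"

def run_playbook(raw_event: dict) -> None:
    alert = triage(identify_risk(raw_event))
    if alert.severity != "high":
        print("Logged for routine review.")
        return
    action = propose_remediation(alert)
    if human_approves(action):
        print(f"Executing: {action}")
    else:
        print("Escalated to analyst without automated action.")

run_playbook({"source": "idp", "message": "Suspicious admin login from new device"})
```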
What specific types of cyber-as-a-service (CaaS) offerings do you see gaining traction, and why?
Traditional MSSP (Managed Security Service Provider) services will continue to grow and expand, providing more support to Security Operations Centers and other operational cyber functions. Advancements in AI enablers within and across these services will allow MSSP providers to offer a higher degree of specialization and tailoring in these areas, including aligning more seamlessly with client technology stacks and operating structures. Generative AI will also enable richer client interaction and faster response, as gen AI chatbots and knowledge extraction engines are used to assist with client-targeted, threat-specific inquiries and research.
AI and gen AI will also enable more tailored CaaS offerings over time. Cyber domains like threat intelligence, threat hunting, application security and cyber analytics will increasingly be offered on top of traditional MSSP services, as more specific solutions are built and delivered on the core technology and data ecosystems that MSSP providers already manage. These services will become increasingly tailored to specific organizations and threats as CaaS providers continue to drive efficiency through AI and automation.
How exactly will gen AI be used to transform service delivery in IAM systems?
Gen AI promises to enhance, and in many ways is already enhancing, the capabilities of IAM (Identity and Access Management) solutions. Advancements in gen AI solutions for text, voice, image and facial recognition will continue to enhance multimodal authentication and authorization at the initial point of access and throughout user access sessions.
Gen AI’s ability to analyze unstructured data in real time will lead to advancements in IAM capabilities for continuous authentication and authorization, reacting both to a user’s historical behavior and to behavior during the current session. Gen AI will also continue to revolutionize IAM administration and auditing by assisting with, and in some cases automating, user access lifecycle and role remediation in an ongoing, persistent manner, responding to changing user business alignments and behaviors along with the expanding and changing enterprise solution landscape.
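One way to picture continuous authentication, sketched below under purely illustrative assumptions: each behavioral signal observed during a session adjusts a risk score, and crossing a threshold triggers step-up authentication. The signal names and weights are hypothetical; a real IAM system would derive them from a trained model over actual telemetry.

```python
# Illustrative continuous-authentication scoring; weights are made up.
SIGNAL_WEIGHTS = {
    "new_device": 0.35,
    "impossible_travel": 0.50,
    "unusual_resource_access": 0.25,
    "typing_cadence_mismatch": 0.20,
}
STEP_UP_THRESHOLD = 0.6

def session_risk(signals: list[str]) -> float:
    """Combine observed session signals into a bounded risk score."""
    return min(1.0, sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals))

def enforce(signals: list[str]) -> str:
    score = session_risk(signals)
    if score >= STEP_UP_THRESHOLD:
        return f"risk={score:.2f}: require step-up authentication (e.g. re-prompt MFA)"
    return f"risk={score:.2f}: continue session"

print(enforce(["new_device"]))                        # continue session
print(enforce(["new_device", "impossible_travel"]))   # require step-up
```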
These gen AI-enhanced IAM systems will not only be used to track employee and system access but will also be required to manage and monitor access for ‘AI/model identities.’ As AI agents continue to mature and proliferate, they will take on new and expanded roles within an organization, with distinct decision-making and access rights associated with those roles.
Just as these roles change over time for human employees, the same will be true for these AI employees, although potentially at a much more rapid pace, perhaps even as fast as an organization’s agile software development processes. Having intelligent IAM systems will help with monitoring these AI agents as they integrate into enterprise technology ecosystems.
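To illustrate the AI/model identity idea, here is a small, hypothetical sketch of treating an AI agent as a first-class IAM principal with narrowly scoped, time-boxed entitlements that expire quickly and must be re-granted, mirroring the faster role churn described above. The field names and durations are assumptions made for illustration only.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    """An AI agent registered in IAM like any other principal."""
    agent_id: str
    owner_team: str                                    # accountable human owner
    entitlements: dict[str, datetime] = field(default_factory=dict)

    def grant(self, scope: str, ttl_hours: int = 24) -> None:
        # Short-lived grants force frequent re-certification as agent roles change.
        self.entitlements[scope] = datetime.now(timezone.utc) + timedelta(hours=ttl_hours)

    def is_allowed(self, scope: str) -> bool:
        expiry = self.entitlements.get(scope)
        return expiry is not None and datetime.now(timezone.utc) < expiry

triage_bot = AgentIdentity(agent_id="triage-bot-01", owner_team="sec-ops")
triage_bot.grant("read:edr_alerts", ttl_hours=8)

print(triage_bot.is_allowed("read:edr_alerts"))   # True (until the grant expires)
print(triage_bot.is_allowed("write:firewall"))    # False (never granted)
```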
What’s the most important advice you would give to organizations looking to prepare for these advancing cyber threats?
Don’t forget where you came from. While emerging technologies, including AI, present new and novel risks, organizations shouldn’t feel they need to rebuild cybersecurity from scratch. Instead, lean on the good cyber hygiene your organization has built and established over time, and identify how these new AI risks and threat vectors will impact your standing policies, controls and monitoring capabilities.
This includes standing IT enablement and governance functions, technology adoption and development lifecycles, employee technology training and literacy programs, and ongoing enterprise visibility, monitoring and incident response capabilities. Don’t build new capabilities for the sake of it, but rather take stock of what you have, and augment or extend those capabilities first.