TLDR
- The biggest LLM security risk is not the model itself but ungoverned human behavior, with “shadow AI” emerging as a major threat as employees use unauthorized tools and upload sensitive data, according to Tamara Chacon, AI security strategist with Cisco.
- Most AI-related breaches still start with familiar weaknesses such as phishing and credential theft, now amplified by generative AI’s ability to scale attacks, according to industry warnings from Gartner and Microsoft.
- Securing LLMs requires applying proven cybersecurity fundamentals to the AI supply chain – including data provenance, monitoring and baselines – an approach echoed in guidance from the National Institute of Standards and Technology, with humans remaining accountable for decisions.
As enterprises embed large language models deeper into core business systems, securing those models is becoming one of the most urgent and misunderstood challenges in artificial intelligence, according to Tamara Chacon, an AI security strategist with SURGe by Cisco Foundation AI.
Rather than focusing solely on model architectures or nation-state hacking, organizations need to start with how people use LLMs, how models are trained, and what data flows into and out of those systems, Chacon said in an interview with The AI Innovator.
“When it comes to securing LLMs, you still have to go down to the people side of things,” Chacon said. As AI popularity surges, employees are tempted to experiment with generative AI tools outside approved IT and security controls, creating what she described as one of the largest emerging threats to enterprise LLM deployments.
It’s a phenomenon Chacon calls “shadow AI,” a twist on “shadow IT” that occurs when employees unthinkingly upload proprietary data into an AI assistant, “not really realizing what they’re doing.” The assistant then holds private data it could expose to others.
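One common defense against this kind of accidental disclosure is a data-loss-prevention gate that inspects text before it leaves the company. The sketch below is illustrative only – the pattern names, regexes and the `.corp.example.com` domain are all hypothetical, and real DLP systems use far richer detectors – but it shows the basic idea of flagging sensitive material before it reaches an external AI assistant.

```python
import re

# Hypothetical patterns a DLP gate might flag before a prompt is sent
# to an external AI assistant. Illustrative only; real deployments use
# much richer detectors than a handful of regexes.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_host": re.compile(r"\b[\w.-]+\.corp\.example\.com\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of sensitive patterns found in the prompt text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

prompt = "Summarize: connect to db01.corp.example.com with key sk-abcdef1234567890XYZ"
print(flag_sensitive(prompt))  # ['api_key', 'internal_host']
```

A gate like this can block the request outright, redact the matches, or simply warn the employee – which also doubles as the kind of in-the-moment training Chacon argues is more effective than rote compliance exercises.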
Industry researchers are seeing the same trend. Gartner has warned that 40% of enterprises will see breaches caused by shadow AI by 2030, while a Microsoft study found that 71% of U.K. workers admitted using AI tools not authorized by their companies, according to IT Pro.
Ungoverned usage is one of the most serious security risks facing LLMs, along with poorly vetted training data and familiar attack techniques that AI has made faster and harder to detect, she said.
Chacon would know. She joined Cisco through its acquisition of Splunk and was one of the founding members of its SURGe research team, which focuses on cybersecurity research and blue-team defense. SURGe now operates alongside Cisco’s Foundation AI group, which builds AI models designed specifically for security use cases.
While concerns about LLM hijacking and foreign adversaries exploiting commercial models have drawn the biggest headlines, Chacon said most enterprise compromises involving AI still begin with the same weaknesses that existed before LLMs entered the workplace: stealing credentials through phishing attacks on unwary employees.
Attackers do not need to break an LLM if they can log in as a legitimate user. Generative AI, she said, is helping attackers scale those efforts. “It does maybe lower the bar for somebody who’s a little less sophisticated,” Chacon said. “It makes that entry level into doing some of that easier.”
At the same time, defenders are using AI to secure LLM environments by automating work that previously overwhelmed security teams. Chacon cited vulnerability management as a clear example.
“Sometimes there would be like 100-plus CVEs that come in a day, maybe 200,” she said, referring to Common Vulnerabilities and Exposures. These are publicly disclosed security flaws assigned unique identifiers so that teams can refer to them consistently when they address them.
“Do you have time to just go through all those by hand? That’s a task that you can offload (to AI), and then you have your human counterpart go through it” afterwards to serve as guardrails, Chacon said.
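The offload-then-review workflow Chacon describes can be sketched in a few lines. The fields and thresholds below are assumptions for illustration, not a description of any particular team’s pipeline: an automated pass discards irrelevant CVEs, ranks the rest, and hands a human-sized shortlist to an analyst.

```python
from dataclasses import dataclass

# Illustrative CVE triage sketch. The fields and thresholds are
# assumptions, not any specific team's pipeline: an automated pass
# ranks the day's CVEs so a human only reviews the shortlist.
@dataclass
class CVE:
    cve_id: str
    cvss: float        # base severity score, 0.0-10.0
    affects_us: bool   # does it touch software we actually run?
    exploited: bool    # known exploitation in the wild?

def triage(cves: list[CVE], max_for_review: int = 20) -> list[CVE]:
    """Drop irrelevant CVEs, rank the rest, return a human-sized shortlist."""
    relevant = [c for c in cves if c.affects_us]
    relevant.sort(key=lambda c: (c.exploited, c.cvss), reverse=True)
    return relevant[:max_for_review]

daily = [
    CVE("CVE-2025-0001", 9.8, affects_us=True, exploited=True),
    CVE("CVE-2025-0002", 5.3, affects_us=False, exploited=False),
    CVE("CVE-2025-0003", 7.5, affects_us=True, exploited=False),
]
print([c.cve_id for c in triage(daily)])  # ['CVE-2025-0001', 'CVE-2025-0003']
```

The human reviewer then acts as the guardrail Chacon describes, verifying the automated ranking rather than reading hundreds of advisories by hand.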
Securing the AI supply chain
Beyond user behavior, one of the most complex challenges in securing LLMs is what Chacon described as the AI supply chain – the data, libraries, pre-trained models and open-source components used to build or deploy AI systems.
“When you’re trying to feed the data to your models, it’s important to look for that kind of breakdown of what’s going into my data,” she said. “What am I ingesting in here that might be poisoned or that might be backdoored?”
Because many LLMs rely on massive volumes of open-source data and third-party components, enterprises may not fully understand the provenance or integrity of what they are training on.
“How do I know that this data I pulled up – it’s all open source – where’s it coming from?” Chacon said. “Is it going to have a little backdoor in here that’s going to throw me off?”
These concerns are increasingly reflected in formal guidance. The National Institute of Standards and Technology’s AI Risk Management Framework emphasizes documenting and tracing the provenance of training data and understanding data sources for transparency and accountability in AI systems.
To mitigate those risks, she said organizations need governance frameworks for LLMs that mirror mature software security practices. That includes vetting data sources, monitoring training pipelines, and establishing baselines for expected model behavior.
“Do you have a baseline, which is something fundamentally you do in cybersecurity? And AI is included in that environment,” she said. That way, it should be easier to spot deviations.
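The baseline idea translates directly into code. The sketch below assumes the organization logs some scalar health metric per time window – the metric name and numbers here are invented for illustration – and flags readings that drift well outside the historical norm.

```python
import statistics

# Minimal baselining sketch. Assumes one scalar metric is logged per
# window (an hourly refusal rate is just an invented example); readings
# far outside the historical distribution get flagged for review.
baseline_window = [0.02, 0.03, 0.02, 0.04, 0.03, 0.02]  # historical values

mean = statistics.mean(baseline_window)
stdev = statistics.stdev(baseline_window)

def is_anomalous(value: float, sigmas: float = 3.0) -> bool:
    """Flag a reading more than `sigmas` standard deviations from baseline."""
    return abs(value - mean) > sigmas * stdev

print(is_anomalous(0.03))  # False: within the normal range
print(is_anomalous(0.25))  # True: far outside the baseline
```

The point is not the statistics – any anomaly detector would do – but that there is a recorded notion of “normal” for the AI system, so deviations stand out the same way they would for any other monitored asset.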
However, those controls become more difficult when models are proprietary or opaque. “Not every model is open, so you’re not going to be able to fully dive in to see what’s normal, to see what’s there,” Chacon said. “There could be other issues depending on where it’s made, who made it. There’s a lot of moving parts.”
Eat your cyber vegetables
Inside Cisco, Chacon said security teams scan AI models hosted on open repositories using Cisco’s own AI capabilities to detect malicious behavior.
“We work with Hugging Face and our Cisco threat team, and we scan those models to look for anything malicious in there,” she said.
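One well-known technique behind this kind of model scanning targets pickle-serialized checkpoints, which can execute arbitrary code when loaded. The sketch below is not Cisco’s scanner – it shows only the simplest version of the idea, using Python’s standard `pickletools` to walk a file’s opcode stream without ever loading it and flag imports of modules that can run commands.

```python
import io
import pickle
import pickletools

# Sketch of one common model-scanning technique: parse a pickle file's
# opcode stream WITHOUT loading it, and flag imports of modules that
# can execute code. Production scanners do far more than this.
SUSPICIOUS_MODULES = {"os", "subprocess", "builtins", "sys", "posix", "nt"}

def scan_pickle(data: bytes) -> list[str]:
    """Return suspicious 'module name' references found in a pickle stream."""
    hits = []
    for opcode, arg, _pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name == "GLOBAL" and arg:
            module = arg.split(" ", 1)[0]
            if module in SUSPICIOUS_MODULES:
                hits.append(arg)
    return hits

benign = pickle.dumps({"weights": [0.1, 0.2]})
malicious = b"cos\nsystem\n(S'echo pwned'\ntR."  # classic os.system payload

print(scan_pickle(benign))     # []
print(scan_pickle(malicious))  # ['os system']
```

Because `pickletools.genops` only disassembles the byte stream, the scanner can inspect an untrusted checkpoint safely – the malicious payload is identified without ever being deserialized or run.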
Despite the novelty of LLMs, Chacon emphasized that securing them does not require abandoning cybersecurity fundamentals. Instead, it requires applying those fundamentals consistently while keeping humans engaged rather than disengaged by rote compliance exercises.
“You need to have your cyber vegetables eaten,” she said, borrowing a phrase coined by a colleague.
Those basics include multi-factor authentication, network segmentation and clear policies governing how employees interact with AI tools.
“You always take your grandma’s advice,” Chacon said. “There’s different locks for windows and doors, and you need to find the right locks for those doors.”
Another factor is employee motivation. She warned that training programs fail when employees do not understand why AI security matters to them. She recounted the experience of a friend who was required to complete hours of security training after she may or may not have scanned an unauthorized QR code.
“She was like, ‘Why am I sitting here and just clicking buttons? This has no stake to me,’” Chacon said.
As enterprises rely more heavily on LLMs to generate content, analyze data and support decision-making, Chacon said accountability must remain with humans.
“There’s this great quote from IBM (that says) a computer can never be held accountable, so it should not make management choices,” she said. “It’s all humans for the important things.”