
UST’s AI Chief on AI Hype, Vibe Coding and a Coming Cyber Breach

TLDR

  • AI adoption is advancing steadily in enterprises, but hype masks ongoing challenges around reliability, governance and proving ROI.
  • Most companies are still in early stages, using AI to augment workflows rather than replace them, with cultural change and integration posing the biggest barriers.
  • Fully autonomous AI agents remain limited by reliability and safety concerns, keeping human oversight essential despite rapid progress.

AI hype is obscuring the reality of enterprise AI adoption, where companies are steadily integrating the technology into workflows but still grappling with reliability, governance and return on investment, according to Adnan Masood, chief AI architect at technology services giant UST.

Masood, who works with large enterprises on deploying AI systems, said the current excitement around AI echoes earlier technology cycles such as cloud computing. Early enthusiasm and skepticism often coexist before practical use cases settle in.

“It happens with any transformational technology,” Masood said in an interview with The AI Innovator. “We have seen that with mobile, with internet, with cloud, and now we are seeing it with AI.”

AI may ultimately have a larger impact than those earlier waves because it shifts computing from automating physical tasks to augmenting human reasoning, he said.

“For the first time, it’s not just about computers, it’s not just about communication,” Masood said. “We are essentially offloading our cognitive capabilities.”

UST, the company where Masood leads AI architecture efforts, is a global technology services provider headquartered in Aliso Viejo, Calif. The firm works with large enterprises across industries including finance, health care, retail and manufacturing, helping organizations modernize technology systems and deploy emerging technologies such as AI. UST operates in more than 30 countries and serves many Fortune 500 clients.

Reshaping, not replacing entire workflows

Masood’s role focuses on translating academic advances in AI into business systems that deliver measurable outcomes.

“My job is essentially to bring all of that to our customers,” he said. The goal is to pinpoint “what the business use case is going to look like and what the customers are going to get out of it.”

That practical focus has put him at the center of enterprise debates about AI’s real-world impact. While public discourse may swing between utopian and catastrophic predictions, Masood said the reality inside companies is far more pragmatic.

“The reality is that AI is reshaping workflows,” he said. However, “it is not going to magically erase organizations or change that entire spectrum of how we work.”

Enterprise executives are focused less on speculation about artificial general intelligence and more on practical concerns such as security, governance and cost, according to Masood.

Those concerns help explain why many AI pilots fail to reach production. Deploying AI across large organizations often requires cultural change, governance frameworks and new operational processes, he said. “The biggest (challenge) is the culture shift or change management,” Masood said.

Not AGI yet

Debates about whether AI could pose an existential threat are also gaining attention as leading researchers voice different views. Some prominent scientists, including Nobel laureate Geoffrey Hinton and Turing Award winner Yoshua Bengio, warn about existential risks from advanced AI systems.

Masood said the technology is still far from the level of intelligence required for those scenarios. “The current state of the AI, the large language models, are a step towards AGI,” he said. “But it’s not AGI in its shape or form.”

Today’s models primarily generate language based on statistical prediction, he said, rather than true understanding of the world. “They are the next token prediction autoregressive probabilistic model,” Masood said.
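The "next token prediction" framing Masood uses can be made concrete with a toy sketch: an autoregressive model repeatedly assigns probabilities to candidate next tokens given the tokens so far, then samples one. The vocabulary and probability table below are invented purely for illustration; real models learn these distributions over tens of thousands of tokens.

```python
import random

# Toy autoregressive model: a hand-written table mapping a context
# (tuple of tokens) to a probability distribution over next tokens.
# Both the vocabulary and the probabilities are invented.
NEXT_TOKEN_PROBS = {
    ("the",): {"model": 0.6, "agent": 0.4},
    ("the", "model"): {"predicts": 0.7, "hallucinates": 0.3},
    ("the", "model", "predicts"): {"tokens": 1.0},
}

def generate(context, steps=3, seed=0):
    """Sample tokens one at a time, feeding each back into the context."""
    rng = random.Random(seed)
    tokens = list(context)
    for _ in range(steps):
        probs = NEXT_TOKEN_PROBS.get(tuple(tokens))
        if probs is None:  # context not in the toy table: stop generating
            break
        candidates, weights = zip(*probs.items())
        tokens.append(rng.choices(candidates, weights=weights)[0])
    return " ".join(tokens)

print(generate(["the"]))
```

The point of the sketch is Masood's: nothing in the loop "understands" the text; each step is a draw from a conditional probability distribution.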

For enterprises, the more immediate shift involves the rise of AI agents that automate tasks and workflows. Companies are experimenting with multiple levels of autonomy, ranging from basic assistants to more complex systems capable of executing tasks across business processes.

Masood said most organizations currently operate at the early stages of this spectrum, and he described three maturity levels at which they are using agents in production.

The first wave consists of assistants that handle summarization, drafting and knowledge retrieval. The second stage involves AI systems using external tools to perform tasks such as calculations, workflow triggers and data extraction.

“The second maturity level I’m seeing is the tool use,” Masood said. “Agents are now starting to use tools.”
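Tool use of this kind is typically implemented as a dispatch loop: the model emits a structured request naming a tool and its arguments, and surrounding code executes the matching function and returns the result. Below is a minimal sketch of that pattern; the tool names and the hard-coded "model output" are hypothetical, and production frameworks add schema validation, sandboxing and error handling.

```python
import json

# Registry of functions the agent is allowed to call.
# The tool names here are invented for illustration.
TOOLS = {
    "calculate": lambda expr: eval(expr, {"__builtins__": {}}),
}

def run_tool_call(model_output):
    """Parse a structured tool request emitted by a model and execute it."""
    request = json.loads(model_output)
    tool = TOOLS[request["tool"]]
    return tool(*request["args"])

# Simulated model output requesting a calculation.
print(run_tool_call('{"tool": "calculate", "args": ["2 + 3 * 4"]}'))
```

The registry is what keeps the agent's autonomy bounded: it can only invoke functions the developer explicitly exposed.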

The most advanced stage, which he expects to develop over the next several years, involves fully autonomous multi-step agents capable of executing complex business processes.

The problem with full autonomy

However, reliability remains a major obstacle to deploying fully independent agents. “One of the biggest (challenges) is reliability,” Masood said. “Safety, audit needs – these are some of the challenges we are encountering right now for autonomous multi-step operations.”

But Masood believes that these issues will be solved over time.

That was also the case with hallucinations – a persistent challenge where AI systems produce incorrect or fabricated information. It has gotten “much better” over time as new techniques help reduce those errors, he said. “The models are hallucinating less.”

Approaches such as chain-of-thought reasoning, grounding responses in trusted data and retrieval-augmented generation are helping improve accuracy. “Citation, attribution and grounding … brought the hallucinations to a minimum,” Masood said.
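The grounding pattern Masood describes follows a simple shape: retrieve trusted documents first, then instruct the model to answer only from that retrieved context and to cite it. The sketch below is schematic; the two-document corpus and the naive word-overlap scorer are stand-ins for a real document store and embedding-based retriever.

```python
# Schematic retrieval-augmented generation. The corpus is invented
# for illustration; real systems retrieve over large indexed stores.
CORPUS = {
    "doc-1": "UST is headquartered in Aliso Viejo, California.",
    "doc-2": "Chain-of-thought prompting elicits step-by-step reasoning.",
}

def retrieve(query, k=1):
    """Rank documents by naive word overlap with the query."""
    def score(text):
        return len(set(query.lower().split()) & set(text.lower().split()))
    ranked = sorted(CORPUS.items(), key=lambda kv: score(kv[1]), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query):
    """Build a prompt that grounds the answer in retrieved, cited sources."""
    docs = retrieve(query)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in docs)
    return (f"Answer using only the sources below, and cite them.\n"
            f"Sources:\n{context}\nQuestion: {query}")

print(build_grounded_prompt("Where is UST headquartered?"))
```

Because the model is steered toward the retrieved sources, a wrong answer becomes auditable: the citation either supports the claim or exposes the error.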

Still, domain-specific risks remain in fields such as law, where complex jurisdictional rules and precedent make automated retrieval difficult. “Legal retrieval is hard,” Masood said.

Human oversight remains critical, especially in sensitive industries such as finance and health care. “If there’s an interpretability issue, the human is there to interpret,” he said.

Cyber incident and vibe coding

Looking ahead, Masood said generative AI is going to change the way organizations operate. That will put pressure on vendors to justify their existence by delivering ROI. “People are not going to let up on that,” he predicted.

He also believes that a generative or agentic AI-related cybersecurity breach might occur that would “force people to think about regulations and security around AI.”

Currently, risk frameworks like the one from the National Institute of Standards and Technology (NIST) “usually take a back seat in conversations,” Masood said. “I hope it doesn’t happen (but) it may actually be one of those crises which will help in getting more regulations in place.”

As for vibe coding – the practice of prompting AI tools to generate code without understanding how the code works – the trend will continue, he said. But it’s going to create technical debt: the accumulation of bugs and other fragilities over time caused by rushed coding.

“The long-term impact of that is fragile code in production,” Masood said.

Despite those challenges, the overall trajectory of enterprise AI remains clear. “We are quickly shifting from ‘I don’t trust these models’ to ‘we have to use AI everywhere,’” he said.
