- Financial firms see responsible AI as a top driver of ROI, with 57% citing standards as key to generating business value.
- Adoption of responsible AI still lags: Only 12.7% have fully integrated responsible AI practices and just 7% monitor models in production.
- Unified platforms and stronger collaboration can accelerate ROI, while human-AI teamwork is emerging as a key innovation driver.
Financial services firms are moving beyond the hype of generative AI and into a decisive stage of operational maturity, where the focus is shifting toward responsible AI as a foundation for measurable business value.
That’s the finding of FICO’s 2025 global survey of 254 C-suite leaders and the perspective of the company’s chief analytics officer, Scott Zoldi, who has been advocating for responsible AI for nearly a decade.
“About nine years ago or so, I really started to speak about responsible AI as an important term for our industry,” Zoldi said. “We see it as tremendously important because it allows us to make sure that models are built appropriately: AI is safe, auditable, explainable and built robustly.”
The survey, conducted in partnership with Corinium Global Intelligence, underscores that message. More than half of respondents (57%) highlighted defining responsible AI standards as a leading contributor to consistent ROI over the next 18 months, outpacing generative AI.
Decision intelligence systems (platforms that integrate AI into business decisioning with explainability, monitoring and scalability) were ranked the top ROI driver by 60% of chief analytics and AI officers.
“The only way we’re going to see any value out of the golden age of AI is if we make sure we’re focused on business problems first, and then on where AI can be a tool to solve the business problem,” Zoldi said in the report, of which he is a co-author. “I’m a big driver of responsible AI standards and how it enables operationalization of AI, so I’m pleased to see it’s listed as a top ROI driver for those surveyed.”
Gap between principles and practice
Yet the research points to a stark reality: While leaders recognize responsible AI as important, adoption lags. Only 12.7% of organizations report having fully integrated responsible AI operational standards, which include bias mitigation, performance monitoring and secure data handling. Just 7% say they have model monitoring in place, meaning that once models go live, most have minimal oversight.
“It doesn’t matter if you build a model appropriately and responsibly if you can’t deploy it and monitor it in production,” he said. “If only 7% feel that they do this fully, then that’s a concern; that’s an area for growth.”
The survey said organizational silos are partly to blame. Seventy-two percent of chief analytics and AI officers cited insufficient collaboration between IT leaders, analytics teams, and executives as a major barrier.
“There’s a huge amount of disconnect there,” Zoldi said. “Too often, an AI team is treated like a bunch of propeller heads that sit in a room and then toss things over a wall. And so a lot of organizations are not prioritizing collaboration.”
Cooling on generative AI
In the survey, generative AI landed in fifth place out of seven main drivers of ROI over the next 18 months. Zoldi sees this as a sign that the industry is maturing.
“Many people are softening a little bit on generative AI, and I think our industry is simply getting back to the fact that we’re trying to make smart decisions,” he said. “I was happy to see that the industry is, en masse, recognizing that not every AI problem is a generative AI problem, and getting a little bit more serious about going after that sustained value for their businesses.”
Agentic AI, the next frontier of autonomous systems, ranked even lower: sixth place for the largest impact on ROI over the next 18 months.
For Zoldi, the connection between responsible AI and ROI is clear. His experience with a major bank revealed that without standards, 90% to 95% of models failed to make it to production due to governance or explainability hurdles. “Nine out of 10 efforts back then would just be completely wasted energy,” he said. “This concept of responsible AI instills a standard … making sure that everybody in the organization is aligned on this – it’s how we build models.”
The business case is also about risk avoidance. Without monitoring, Zoldi warned, firms may “be harming your customers. The model may become biased … because data shifts and trends shift, and that model needs to be performing every single day.”
Unified platforms as ROI accelerators
The survey found that half of all respondents believe a unified AI platform, combined with better cross-functional collaboration, could boost ROI by 50% or more. A quarter of executives think this could double returns. Zoldi agreed that unified platforms are key to scaling AI successfully.
“A unified AI platform and a platform that specializes in the operationalization of AI is crucially important,” Zoldi said in the report. “Typically, these platforms are designed to reinforce standards, enabling more efficient innovation cycles for faster, new, and, importantly, successful AI development and deployment.”
The research also found strong optimism around human-AI collaboration as a driver of innovation. Zoldi underscored that point, saying that fears of mass job replacement are misplaced.
“The best way to look at human-AI collaboration is basically to say AI is a tool that enables the human to do a superhuman job,” he said. “Instead of human in the loop, the AI is now in the loop.”
It also means AI is not a replacement for that human worker, Zoldi said. “If you replace them with AI, you’re going to get a substandard employee.” On the flip side, “if you have employees that are scared of or don’t want to use AI, then they’re not going to be the very best employee they could be in this age of AI.”
Despite the hurdles, Zoldi believes responsible AI is “finally” entering its “heyday.” “Responsible AI continues to be a key driver, something I’ve pioneered, and I’m thrilled to see it recognized,” he said in the report. “The experimentation time is over. Getting it done well: that’s what matters now.”