Press "Enter" to skip to content
Credit: Pikisuperstar on Freepik

Executive Q&A: Google Cloud’s Global Head of Regulated Industries on the ROI of AI

For all the excitement around generative AI, the one factor that has proven elusive is the return on investment. But the tide is turning. According to Google Cloud’s latest survey, “The ROI of Gen AI in Financial Services,” nearly eight out of 10 financial services firms with gen AI use cases in production are seeing ROI.

Here are the highlights of the report:

  • 63% have moved gen AI use cases into production. Of this group, 77% are seeing ROI on at least one use case.
  • 90% with gen AI in production reported revenue gains of 6% or more.
  • 78% can move a gen AI use case from idea to production in six months.
  • 50% that cited productivity improvements say employee productivity has at least doubled.
  • 61% using gen AI in production saw “meaningful” improvements in security.
  • Top use cases: Customer and field service (69%), sales and marketing (62%), individual productivity (57%), back office/business processes (57%), new products and services (56%), enhanced customer experiences (53%), and engineering/developer productivity (46%).

We recently caught up with Zac Maufe, Google Cloud’s global head of regulated industries, to discuss gen AI’s role in financial services – and why companies are now seeing real benefits.

What follows is an edited transcript of our conversation.

The AI Innovator: When generative AI first came onto the scene, financial services firms were excited about it. Two years down the road, how are these institutions viewing AI?

Zac Maufe: There was a lot of excitement when generative AI exploded on the scene. But then very quickly, there came a little bit of a sobering within financial services because of the questions about compliance, reliability, controls – those kinds of things. Last year, we went through a lot of PoC (proof of concept) madness, and everybody was coming up with use cases. Now we’ve moved from potential into real outcomes, and that’s a really exciting place to be. Gen AI captured a lot of people’s attention, but we were very thin on results. Now we’re seeing early adopters already starting to get things into production, and they’re starting to see real results.

Lots of people tried lots of things; a few of them are starting to stick, and we’re also seeing people start to scale. One of them is in the customer contact center. … (AI can) essentially help accelerate the call by finding within the knowledge system the answer to the customer’s question – using search and then using generative AI – to provide a succinct answer that’s grounded in the knowledge management system. And they’re seeing a 70% reduction in call handle and search times, which is a major productivity boost for them.

The other type of (popular) use case is the back-office transformation. How do we get expensive workers, like research analysts, … to be more productive? … Summarization is table stakes; we’re now getting into reasoning. So AI not only gives you a summary, but it is starting to make decisions or recommendations. … Now all of this is happening with humans in the loop, because of regulations. But a lot of customers are starting to see real efficiency or revenue gains.

How do you prevent incidents like the one at Air Canada, where its AI chatbot made up a bereavement policy saying travelers could get a discount after they had purchased the fare? The policy turned out to be false; the airline denied the discount and was taken to court, where it lost.

The corpus of data in the call center is actually very good for AI, because there’s almost always a knowledge management system that has been created for real-world call center agents. So if you work in a call center and you don’t know the policy on bereavement, there is a knowledge management system you can go into and find out what the rules are. Normally, searching for this takes forever, since not many people call about bereavement. The agent will have to put you on hold, find the information, read it and try to understand it, because it’s so complex.

(In our case), what we do is ground our search. We have a real differentiator here because we’re using search and LLMs, and we’re grounding that capability in the customer’s knowledge management system. It can’t say what’s not in the knowledge system. … Also, in terms of direct-to-consumer in a regulated space, most people (in the U.S.) are still using the call center agent as the decision-maker. … That’s a big difference from the Air Canada example.
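
To make the grounding pattern described above more concrete, here is a minimal, hypothetical sketch in Python: retrieve relevant passages from a knowledge management system first, then instruct the model to answer only from what was retrieved. The toy knowledge base, the keyword-overlap retriever, and the `call_llm` stub are illustrative assumptions for this article, not Google Cloud’s actual search or model implementation.

```python
# Sketch of "grounded" question answering for a contact center:
# retrieve relevant policy passages, then constrain the model to answer
# only from those passages. All names here are hypothetical stand-ins.

KNOWLEDGE_BASE = {
    "bereavement-policy": (
        "Bereavement fare reductions must be requested before travel is "
        "completed and require supporting documentation."
    ),
    "baggage-policy": "Two checked bags are included on international fares.",
}


def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Rank passages by naive keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE.values(),
        key=lambda passage: len(q_terms & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (swap in a hosted LLM client)."""
    return "[model response grounded in the passages above]"


def answer(question: str) -> str:
    """Build a prompt that restricts the model to the retrieved passages."""
    passages = retrieve(question)
    prompt = (
        "Answer the agent's question using ONLY the passages below. "
        "If the answer is not in the passages, say you do not know.\n\n"
        + "\n".join(f"- {p}" for p in passages)
        + f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    print(answer("Can a customer get a bereavement discount after buying the fare?"))
```

In production, the retriever would be a real search service over the institution’s knowledge base; the instruction to decline when the answer is not in the retrieved passages is what keeps the model from inventing policies like the one in the Air Canada example.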

When it comes to low-hanging fruit, what’s next for AI in finance?

The next step on this journey is modern technology, and that’s what’s really going to be critical in order to scale AI in financial services. Quality matters; this cannot be mostly right. This has to be 100% right. (Google Cloud and Citi recently announced a multiyear cloud and AI agreement.)

What we’re seeing is that in order to get the long-term benefits of AI, financial institutions actually have to start with modernizing their core technology. How do I focus on my data as an asset? The difficult thing is, unlike in the contact center, the data (in the enterprise) are very fragmented and siloed. And so figuring out how to bring your data together and create a unified data strategy around how you think about your data, how you manage data quality, how you handle the governance stuff – all the non-exciting stuff – is critical for how you scale and drive AI for the future in a responsible way.

The other piece of this is that you want to use the right level of control, with security and reliability. You want to make sure that you are grounding and doing it the right way. We’re also making sure it is reliable, because if the contact center is down for three hours, that doesn’t work.

The last part of that is explainability. So within the model risk governance part, … you’re holding the model to the same bar as a human. … Just like, if you’re a human and you’re responsible for saying, “This is yes or no.” You have to be able to go back and explain the logic behind these decisions.

Google just released a paper that basically says financial institutions don’t need new risk assessment frameworks for gen AI. Can you give us a summary of what you’re recommending?

Basically, what we’re saying is those same approaches that we’ve used for traditional AI, we think we can apply to this new flavor of AI. Not all of the techniques have been invented yet, but we think that’s the road we want to go down.

AI has an element of randomness that makes it so wonderfully creative, but it’s that same randomness that people in heavily regulated industries like banking and finance find difficult to handle. How do you solve that?

The technology is changing so rapidly. A lot of the things that we thought were problems are already starting to get solved, which is really exciting. Part of that is multimodality, where you have multiple modalities interacting and working together. And so if you’re looking for a statistical outcome, then you’re not going to use a large language model (LLM).

There’s also an understanding of how to ground these things in a very different way. People are starting to experiment and figure out how to basically enable the model to only look at a very specific dataset and only answer from that very specific dataset. It is not creating something new that didn’t exist in that dataset. … That’s the direction of where things are going.

Is there a place for agentic AI in finance? For example, getting a robot to trade for you. Would you trust it with your life savings?

Ultimately, over time, we will get there. There are three steps on this journey. The first is (the ability to create content) in a truly personalized, customized way. We now have the tools to start to be able to do that, which is incredible. The second phase is reasoning, which is where we are. It’s the cutting edge today. For example, based on what this customer just told me on the call, and based on our policies and procedures from our knowledge management system, what is it that we should be doing for that customer? What should I recommend that the call center agent do?

The next stage of that is the AI agent, which is the agentic future we’re all talking about. Once I have an AI that’s able to synthesize large amounts of information, it can come up with a recommendation for what the next action should be and then actually do the action. In finance, we’re not there yet.

What trends are you seeing in AI in the financial services industry?

The overarching trend is that people are really moving from PoC into production and seeing results – whether that’s in coding, customer service, or market research. Companies like Moody’s, Scotiabank, and Deutsche Bank have started to see the power (of AI).

Can you share a use case from one of your clients?

We have customers who have rolled out gen AI. I can’t name names, but they are big companies with 50,000 or more employees. They rolled out the ability to do internal search and knowledge management using AI, and that’s been a big win for them. A lot of what we see is enhanced internal productivity. Financial research is a great use case for us, where AI can handle multimodal data – it’s listening to earnings calls, viewing a webinar, and synthesizing information in real time and correlating it with other things.

In the old days, you would be taking a transcript. You can now use AI to analyze thousands of pages in real time, and find out if what they just said is related to what they had said in the past. Is this a new direction? … That’s a really exciting thing to be able to do. … We also see (the use of AI) in the sales prospecting space, using Gemini to build email generation tools to customize communication with clients. Another use case is security. Security is a massive area, using AI to handle complex threat detection.
