EY’s Schuller: Responsible AI Lets Companies Take Bigger Bets

Responsible AI is often defined in risk management terms – guardrails such as ethics and safety protocols. But that’s only half of the picture, according to Sinclair Schuller, EY Americas Responsible AI Leader.

“If you look at the market definition, it often tends to start with something like, ‘Responsible AI is the deployment and creation of AI assets that are not rooted in ethical concerns, bias or discrimination, blah, blah, blah,’” he said in an interview with The AI Innovator. “It’s a very verbose, complex definition that focuses on the constraints. I’ve tried to simplify that as much as possible to make it digestible for clients. And I’ve simplified it into the ‘Do No Harm’ form of AI.”

But Schuller argues that the conversation around AI governance needs to expand from merely avoiding harm to enabling innovation. “Every firm on the planet will say that stuff is responsible AI, which, again, that’s a requirement,” he said. “But rarely are people focused on (asking) ‘Mr. or Mrs. Client, how will I help you tackle the next $10 billion opportunity?’”

To explain how companies can balance risk and innovation, Schuller turns to a parenting metaphor. “Imagine that you have a kid who’s riding a bike. You tell them, ‘You get no helmet, you get no knee pads. You get no elbow pads.’ And then you take your other daughter or son, and say, ‘You get a helmet, you get all pads, and you get knee pads.’ Which one will take bigger risks on a bike? Probably the one wearing a helmet and knee pads and elbow pads.”

The same applies to companies adopting AI responsibly. “You have to equip them with a helmet, knee pads and elbow pads,” Schuller said. That means first, “get governance in place that allows your team members to operate in full fidelity in the context of AI, instead of being nervous that they might step on this landmine or fall off the bike and hit their head.” Once these bounds are established, tell the team to “move forward with confidence, move forward aggressively,” Schuller added.

Guardrails come first

It’s important to set up a governance board, which “helps get things kick-started,” he said, but it’s not the end-all. “That governance board, instead of focusing on establishing a set of rules, should identify technologies to adopt that in real time can help enforce responsible AI.”

For some firms, that means assembling internal AI experts for their board. For others, it requires bringing in outside help. “It’s situational,” Schuller said. “If they don’t (have AI experts), I’d highly encourage them to work with an organization that has experts that can be part of that governance board that can guide them on this path.”

Most responsible AI guardrails don’t operate directly at the model level, Schuller said. Instead, they rely on third-party tools or custom-built monitoring systems. “Most LLMs’ guardrails might not meet the requirements of responsible AI, so there are third-party technologies that one can deploy in their organization to perform that real-time monitoring,” he said. Firms that need more sophisticated frameworks can take a bespoke approach.

He described several approaches companies can use: fine-tuning smaller models within tight parameters; inserting a “critic in the loop” that screens prompts and outputs; or restricting how large language models are used so they function as back-end systems, not open-ended conversational agents.

“It’s often the case that people think generative AI is a dialogue with an agent,” he said. “That doesn’t necessarily have to be the case. You can use an LLM as an implementation detail in a more standard software package.”
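To make that pattern concrete, the sketch below shows one way a “critic in the loop” could wrap an LLM that is used as a back-end component rather than a chat agent. It is a minimal, hypothetical illustration, not an EY framework or any vendor’s API: the function names, the toy policy rule, and the stubbed model call are all assumptions.

```python
# Minimal sketch of a "critic in the loop" guardrail. All names and rules
# here are illustrative placeholders; a real deployment would swap in an
# actual LLM client and the organization's own policy checks.

BLOCKED_TERMS = {"ssn", "password"}  # toy policy: reject sensitive fields


def critic_screen(text: str) -> bool:
    """Return True if the text passes the (placeholder) policy check."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; the LLM stays a back-end detail."""
    return f"summary of: {prompt}"


def summarize_ticket(ticket_text: str) -> str:
    """LLM used as an implementation detail inside ordinary software:
    callers see a summarizer function, not an open-ended chat agent."""
    if not critic_screen(ticket_text):   # screen the prompt
        raise ValueError("prompt rejected by policy critic")
    output = call_llm(ticket_text)
    if not critic_screen(output):        # screen the output
        raise ValueError("output rejected by policy critic")
    return output


if __name__ == "__main__":
    print(summarize_ticket("Customer reports a billing error on invoice 42."))
```

Because callers only ever see summarize_ticket, the model remains an implementation detail, and every prompt and output passes through the same policy check.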

Another way to measure ROI

Asked how to measure the success of responsible AI, Schuller said the industry is still developing answers. “That’s an emerging field,” he said. What Schuller asks clients instead is, “What have been your adjacent market opportunities or market expansion that you’ve created as a result of responsible AI? … What bets have worked out, if any, and can you quantify what the magnitude of that was and what the result was?”

He likened responsible AI to a safety net that encourages boldness. “Responsible AI should embolden a company to take bigger and bigger bets,” Schuller said. “Bigger bets don’t mean spending more money. It just means doing things outside of their norm such that they can pursue new revenue opportunities.”

He doesn’t consider risk detection metrics – such as counting the number of biased responses found – to be ROI. “It’s important, but I wouldn’t call that ROI,” he said. “Imagine having an (auto) insurance policy and saying, ‘How much return did I get?’ That implies you’re having lots of accidents.”

As an example of how responsible AI can open new markets, Schuller pointed to a major automaker developing autonomous vehicles that used responsible AI principles to expand beyond robotaxis into delivery services. “A large retailer, for example, can now integrate with that automotive manufacturer and allow that retailer to have a consumer push a button and have goods picked up at their store and dropped off at their home,” Schuller said.

EY itself has applied similar thinking. Schuller cited its third-party risk management service, which traditionally reviewed some of a client’s vendors periodically – say, 5% of them once a year – and submitted a report on whether they were risky. “We’re going to be releasing an interesting technology that allows for the continuous monitoring of risk,” he said. “It’s no longer a point in time. It’s instead attached to all the data feeds it has access to, digests that information about the vendor, and in real time, updates the client on changes in the risk profile.”
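As a rough illustration of the shift Schuller describes, the hypothetical sketch below contrasts that point-in-time review with an event-driven loop in which each feed event updates a vendor’s risk profile as it arrives. The feed schema, scoring rule, and alert threshold are placeholder assumptions, not details of EY’s product.

```python
# Hypothetical sketch of continuous vendor-risk monitoring, as contrasted
# with a once-a-year review. Feed names, the scoring rule, and the alert
# channel are all placeholder assumptions.

from dataclasses import dataclass


@dataclass
class FeedEvent:
    vendor: str
    signal: str    # e.g. "data_breach", "credit_downgrade"
    severity: int  # 1 (minor) .. 5 (critical)


# Toy in-memory risk profiles, updated as events arrive rather than yearly.
risk_profiles: dict[str, int] = {}


def notify_client(vendor: str, score: int) -> None:
    """Stand-in for a real alerting channel (dashboard, email, webhook)."""
    print(f"ALERT: {vendor} risk score now {score}")


def ingest(event: FeedEvent) -> None:
    """Digest one feed event and update the vendor's risk profile."""
    risk_profiles[event.vendor] = risk_profiles.get(event.vendor, 0) + event.severity
    if risk_profiles[event.vendor] >= 5:  # placeholder threshold
        notify_client(event.vendor, risk_profiles[event.vendor])


if __name__ == "__main__":
    ingest(FeedEvent("Acme Corp", "credit_downgrade", 3))
    ingest(FeedEvent("Acme Corp", "data_breach", 4))  # crosses the threshold
```

The design point is the one in the quote: the risk profile is recomputed on every incoming signal rather than at a single point in time.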

Schuller’s advice to executives is simple: Empower your teams. “Make your team comfortable (with the idea) that using AI is not only okay, but it will advance the company,” he said. “Bold leaders will make a decision, telling their staff that they should be using AI. They should be focused on how to educate themselves, educate peers, and that’ll pay them back in spades.”

“I can’t think of a technology in history,” Schuller added, “where leaders that don’t use it ended up being successful.”
