TLDR
- There is rising demand for compute, which justifies investments in AI data centers, according to Robert Daigle, global head of AI for Lenovo’s Infrastructure Solutions Group.
- According to Lenovo’s most recent CIO survey, over 90% of organizations are planning to increase their AI spending this year.
- Lenovo says warm-water liquid cooling can reduce power consumption by up to 40%, offering a more efficient way to manage the heat density of high-performance AI workloads.
From Texas to Abu Dhabi, billions of dollars are being committed to building out AI data centers around the world. The question shadowing that spending is whether demand is deep enough to justify the build-out.
Robert Daigle, global head of AI for Lenovo’s Infrastructure Solutions Group, said it is warranted. “Infrastructure is now a priority and not a commodity when we think about AI,” Daigle said in an interview with The AI Innovator.
His case rests on two signals he says Lenovo is seeing across the enterprise market: rising budgets and constrained access to compute. According to Lenovo’s most recent CIO survey, “over 90% of organizations are planning to increase their AI spend this year, which is pretty significant,” he said. This “exponential” increase in adoption creates demand for more infrastructure.
Moreover, spending is not limited to central IT teams; he noted that roughly half of AI funding is now coming from line-of-business budgets, creating downstream pressure on infrastructure teams to support those initiatives.
At the same time, access to GPUs remains uneven. “I was working with a customer [recently], and one of their challenges is that in peak times, they can’t get access to the GPUs that they need in a public cloud today,” he said.
The solution is often a hybrid setup: Enterprises are pairing dedicated, on-premises systems with public cloud resources to guarantee access during high-demand periods. There’s also growing interest in infrastructure-as-a-service models, Daigle added.
Compute demand, however, comes with a side effect: higher energy consumption. Each AI chip draws four to five times more electricity than it did a few years ago, rising from 250 to 300 watts per GPU to around 1.2 kilowatts today, Daigle said. That’s akin to going from powering a gaming console to a large microwave.
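That multiplier is easy to verify from the figures Daigle cites. A quick back-of-the-envelope check (the wattages are his; the script itself is just illustrative):

```python
# Illustrative arithmetic based on the per-GPU wattages quoted above.
OLD_GPU_WATTS = (250, 300)  # typical per-GPU draw a few years ago, per Daigle
NEW_GPU_WATTS = 1200        # roughly 1.2 kW per GPU today, per Daigle

for old in OLD_GPU_WATTS:
    print(f"{old} W -> {NEW_GPU_WATTS} W: {NEW_GPU_WATTS / old:.1f}x")
# 250 W -> 1200 W: 4.8x
# 300 W -> 1200 W: 4.0x
```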
The result is that “an individual server is consuming exponentially more power than it ever has,” Daigle said.
That shift means the AI build-out is not only about silicon supply; it is about power density and heat extraction. Data centers can end up “consuming as much as the city that they sit beside,” he added, straining local grids.
Lenovo’s answer is to focus on energy efficiency as a systems-level problem.
Liquid vs. air cooling
One solution Lenovo has been using is liquid cooling, which removes heat from high-power chips by circulating water directly across metal plates attached to components like GPUs and CPUs. Daigle said it is more efficient than air cooling because water extracts heat more effectively and avoids wasting energy on excessive cooling.
“Every watt of power that goes in comes out in the form of heat,” he said. Air can remove that heat, but inefficiently at high densities. Lenovo’s Neptune platform instead brings water directly into the server.
“In our Lenovo Neptune portfolio, we have a couple of different instantiations, but I would say the most efficient is what we call direct to node,” Daigle said. In that design, “we actually bring water into the server in a closed loop through the server across cold plates,” he said, adding, “we’re extracting the heat instead of it being dissipated into a traditional air-cooled heat sink that sits on top of the components.”
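The physics favors water here. As a rough point of comparison (standard textbook property values, not Lenovo figures), water can carry thousands of times more heat per unit volume than air for the same temperature rise:

```python
# Volumetric heat capacity = density * specific heat: how much heat a fluid
# absorbs per cubic meter per kelvin. Textbook values, not Lenovo data.
WATER_DENSITY, WATER_CP = 1000.0, 4186.0  # kg/m^3, J/(kg*K)
AIR_DENSITY, AIR_CP = 1.2, 1005.0         # kg/m^3 at ~20 C, J/(kg*K)

water_vol_cap = WATER_DENSITY * WATER_CP  # ~4.19e6 J/(m^3*K)
air_vol_cap = AIR_DENSITY * AIR_CP        # ~1.2e3 J/(m^3*K)
print(f"water carries ~{water_vol_cap / air_vol_cap:,.0f}x more heat per unit volume")
# water carries ~3,471x more heat per unit volume
```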
Using warm, not cold, water
Lenovo’s liquid cooling uses warm water, not cold. “Typically, we see an inlet temperature of about 45 to 50 degrees Celsius, and an outlet temperature of about 60 degrees Celsius,” Daigle said.
Why warm water? The system is designed around efficiency, not extreme cooling. “We actually find it more efficient to use warm water than cold water because essentially it’s diminishing returns,” he said. “If you go with cooler water, you’re not necessarily able to extract the heat any faster because you still have to transfer that heat from the components to water. But you’re wasting energy on cooling down that water.”
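Those temperatures pin down how much water a loop has to move. A minimal sketch of the heat balance Q = ṁ · c_p · ΔT, assuming a hypothetical 15 kW server load (the temperature rise follows from the inlet and outlet figures Daigle quotes):

```python
# Heat balance for a closed liquid loop: Q = m_dot * c_p * dT.
# The 15 kW server load is an assumption for illustration; the temperature
# rise comes from Daigle's quoted inlet (~45-50 C) and outlet (~60 C).
CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def flow_lpm(heat_watts: float, delta_t_k: float) -> float:
    """Liters of water per minute needed to absorb heat_watts at a delta_t_k rise."""
    kg_per_s = heat_watts / (CP_WATER * delta_t_k)
    return kg_per_s * 60.0  # 1 kg of water is roughly 1 liter

for dt in (10.0, 15.0):  # 50->60 C and 45->60 C
    print(f"15 kW at dT = {dt:.0f} K: ~{flow_lpm(15_000, dt):.1f} L/min")
# 15 kW at dT = 10 K: ~21.5 L/min
# 15 kW at dT = 15 K: ~14.3 L/min
```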
It’s a “unique” approach that Lenovo has been deploying for the past dozen years, Daigle added.
Clients have seen measurable results. “We’ve seen some of our customers achieve up to 40% power reduction or power savings by using warm water liquid cooling versus using traditional air cooling systems,” he said.
For enterprises not ready to plumb entire facilities, hybrid cooling is an option.
“Not everyone’s into a fully plumbed data center where they have water running through all of their servers,” Daigle said. The company also employs “hybrid cooling systems that use a closed loop within the server,” sort of like a “radiator that you would have in your car,” enabling customers to gain efficiency benefits within traditional air-cooled environments.
How AI data centers differ
An AI data center may look conventional from the outside, but its interior dynamics are different. “You would still see racks of servers and storage and networking,” Daigle said. “But as you start to look under the hood, so to speak, you’ll see some unique differences.”
The defining feature is GPU density. “Typically, we’re using a lot more GPUs,” on the order of four to eight per server, he said. In air-cooled facilities, that density translates into higher noise and greater heat dissipation. In fully liquid-cooled environments, “it is significantly different than a traditional IT data center in that it’s very quiet,” he said.
Power allocation assumptions are also shifting. “A traditional enterprise data center typically would only have maybe about 15 to 40 kilowatts per rack allocated,” Daigle said. By contrast, “the eight-GPU systems, those are typically about 15 kilowatts roughly per server.” A rack (the cabinet that holds multiple servers) that once housed many machines may now fit only one or two such systems before hitting its power and cooling limits.
As a result, “IT administrators, people running data center infrastructure, and architects are going to have to rethink how they build and design data centers in the future to be able to support AI workloads, especially as we’re seeing growth in adoption,” Daigle said.
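The rack arithmetic is simple to check. A small sketch using the per-server and per-rack figures Daigle cites (the leftover-power bookkeeping is just illustrative):

```python
# Servers per rack under a fixed power budget, using the figures quoted above.
SERVER_KW = 15              # roughly 15 kW per eight-GPU server, per Daigle
RACK_BUDGETS_KW = (15, 40)  # traditional per-rack allocations Daigle cites

for budget in RACK_BUDGETS_KW:
    servers = budget // SERVER_KW
    print(f"{budget} kW rack -> {servers} AI server(s), {budget % SERVER_KW} kW spare")
# 15 kW rack -> 1 AI server(s), 0 kW spare
# 40 kW rack -> 2 AI server(s), 10 kW spare
```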
As energy becomes a limiting factor, power sourcing has risen in importance among enterprises. In prior years, companies cited data readiness and in-house AI skills as limiting their AI deployments. “Compute and power are now on that list,” Daigle said. “They’re not commoditized anymore, but they are significant constraints that organizations have to think through as they’re building out their AI strategy.”
Asked about the use of nuclear power, he acknowledged that “there’s been reports in the news around the opportunity to use nuclear power, or a mix of nuclear and renewable and traditional power delivery systems. I think organizations that are deploying at large scale are going to have to think through that constraint of power availability.”
What’s next for Lenovo ISG
Daigle described the enterprise AI market as moving from pilots to production. “What’s really exciting this past year is we’ve seen a lot of proofs-of-concept (POCs) and it’s been an exponential curve,” he said. Lenovo’s latest CIO survey showed that “half of organizations are either exploring or deploying AI at scale.”
The company’s focus, he said, is “building up enterprise-grade packaged platforms and solutions that make it easier for enterprise organizations of all sizes to start and scale their AI initiative.”
Its hybrid AI strategy is not just about infrastructure but about integrating services, capabilities, infrastructure and software so customers can “start small for their POCs and scale that into larger deployments without a lot of friction,” Daigle added.