U.K. chip designer Arm holds a whopping 99% share of the smartphone market, according to Chris Bergey, senior vice president and general manager of its client line of business. Arm is now seeking to make inroads into the Windows PC market and to make a big splash in edge computing, where AI is processed on the device.
The AI Innovator sat down with him recently to talk about where Arm is headed next. What follows is an edited transcript of that conversation.
The AI Innovator: Tell me what you do at Arm.
Chris Bergey: I run one of our business units. We have four business units, and I’m responsible for the client line of business. We’re best known for our smartphone footprint. That’s a big part of Arm and a big part of my business. But we’ve now expanded into more of the PC space, and we’re also in significant computing platforms for everything from Visual TV to emerging areas like AR/VR and the like.
What other mobile devices are you in and can you share your market share?
We think that 40% of the PCs and tablets that ship this year will be based on the Arm architecture. That’s actually a growing number. Arm has always been quite significant in the tablet space because many tablets are based on our mobile processors. Apple made a significant change over the last couple of years to go all in on the Arm architecture in its PC products as well as mobile and tablet.
Now we’ve got significant traction around Windows on Arm, and obviously that’s been the AI Copilot PCs that have launched. And then lastly, Chromebooks with AI. Google is making an additional push around Gemini integration and all those kinds of things. So that’s an area that is expanding for us right now. In Chromebooks, we have only about 25% share. The rest of that is largely Intel, so there are lots of opportunities as well.
What is your secret sauce to get such high penetration?
Arm has been around now for over 30 years. Our secret sauce has been a couple of things. One, we’ve always been focused on power performance, which is important for mobile devices. … What’s been interesting is that power performance has become important everywhere. Apple was able to show what is possible with all-day battery life, not having to worry about charging.
But the interesting thing is that we see significant interest in power performance elsewhere too, and that’s been a big part of our success in automotive, especially as cars become data centers on wheels, given the amount of computing that needs to happen.
Lastly, even in the data center, AI has made power the number one limiter. How much density can you get in a 25-kilowatt or 100-kilowatt rack? So it’s been very good for us. The other part of it is … software compatibility. You have 22 million developers developing on Arm, and that software footprint becomes so important. That’s been a huge transition for us.
You design CPUs, not GPUs?
We actually have the highest-volume GPU in the world as well. That’s something we’re not necessarily as well known for, but we have our Mali GPUs, and they are the highest-volume GPUs in the mobile segment. … There are a couple of different GPUs in the mobile handset space, but Mali is the highest volume and is right now probably the highest-performing GPU out there.
Does edge computing mostly just use CPUs?
No. It’s something called heterogeneous computing. That means it’s CPU, GPU and if there’s an accelerator element to it, like an NPU.
Let’s just say you buy a Samsung phone, and that phone has a camera app. That’s first-party software. Samsung can tune that camera app to utilize all the pieces of the hardware: they’ll use the CPU for super-low-latency work. If there’s a hardware accelerator like an NPU, they’ll use that for long-running use cases, since cost per watt is an important thing. And then they’ll use the GPU to do some graphics work on the picture you just took, or on your video.
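That routing logic can be sketched in a few lines. This is my own illustrative toy, not a real Samsung or Arm API; the task attributes and unit names are assumptions made only to mirror the split Bergey describes (CPU for low latency, NPU for sustained efficiency, GPU for graphics):

```python
# Toy sketch of heterogeneous-compute dispatch (illustrative only,
# not a real vendor API). A first-party app routes each task to the
# compute unit that suits it best.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    latency_critical: bool  # needs an answer right away?
    long_running: bool      # sustained workload where efficiency dominates?
    graphics: bool          # pixel/graphics post-processing?

def pick_unit(task: Task, has_npu: bool) -> str:
    """Mirror the dispatch described above: CPU for super-low-latency work,
    NPU (if present) for long-running efficiency-bound work, GPU for graphics."""
    if task.latency_critical:
        return "CPU"
    if task.long_running and has_npu:
        return "NPU"
    if task.graphics:
        return "GPU"
    return "CPU"  # fallback when nothing else fits

print(pick_unit(Task("autofocus", True, False, False), has_npu=True))      # CPU
print(pick_unit(Task("video-enhance", False, True, False), has_npu=True))  # NPU
print(pick_unit(Task("photo-filter", False, False, True), has_npu=True))   # GPU
```

The point of the sketch is simply that the decision is made in software, per task, which is why first-party apps that know the hardware can exploit all three units.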
What’s next for Arm?
We want to continue to drive more performance. I’ve been in the semiconductor industry for over 25 years, and it’s just amazing how insatiable people are about performance. The use cases we run today and the use cases of tomorrow will surprise us. But I think the real thing we see coming is AI inference on the edge.
Obviously, AI has really transformed the data center, and the data center is where training happens and will continue to happen. But we believe that inference will require an enormous amount of computing for all these use cases, and much of that computing is going to be done at the edge.
It’s done at the edge for a whole set of reasons. There are privacy reasons. There are power reasons, literally the difficulty of getting enough power into the data center, which pushes work out to the edge. There are connectivity and networking issues with all those sensors, and there’s latency and responsiveness.
We believe that’s going to be the next big workload, so it’s one of the things we’re focusing a lot on. Also, as models get smaller, it becomes easier to push them to the edge, because while AI is a computing problem, in many ways it can also be a memory problem, depending on how large the models are.
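The memory point can be made concrete with back-of-the-envelope arithmetic (my own illustration, not from the interview; the model sizes are hypothetical examples): the memory needed just to hold a model's weights is roughly parameter count times bytes per parameter, which is why smaller or more aggressively quantized models fit edge devices more easily.

```python
# Back-of-the-envelope sketch (illustrative, not from the interview):
# weight memory ~= parameter count * bytes per parameter.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory needed just to hold model weights, in GB."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# Hypothetical model sizes and precisions, to show the scale involved.
for params in (7.0, 1.0):
    for label, nbytes in (("fp16", 2.0), ("int4", 0.5)):
        gb = weight_memory_gb(params, nbytes)
        print(f"{params:.0f}B params @ {label}: ~{gb:.1f} GB of weights")
```

A 7B-parameter model at 16-bit precision needs roughly 14 GB just for weights, which exceeds the RAM of most phones; quantizing to 4 bits or shrinking to ~1B parameters brings that within reach of edge hardware.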