Today’s personal computers will become AI-powered PCs that deeply know us and autonomously execute mundane tasks for us, according to Tom Butler, executive director of Lenovo’s worldwide commercial portfolio and product management. He should know. Lenovo is the number one seller of personal computers worldwide. In the second quarter of 2024, Lenovo shipped 14.7 million units and led the market with a 23% share, according to a report by Canalys.
The AI Innovator recently spoke with Butler about AI PCs, the coming wave of AI agents, and why one day we’ll no longer use a keyboard to communicate with our PCs.
The AI Innovator: What is an AI PC, and how is it different from a traditional PC?
Tom Butler: What makes it different from a traditional PC is it has new capabilities built into the device. Lenovo has taken on what we believe is the industry’s most comprehensive definition – we identify five key features that are fundamental to defining what an AI PC is.
First, it can run a personal intelligent agent, with natural interaction and a personalized local language model on board. You have a local language model you can interact with naturally, and it’s personalized directly to you.
Basically, having your own ChatGPT in a way.
Exactly. Inside that – and this is the second piece – is a personal knowledge base. It’s going to take a look at your files, your content related directly to you. Each of us will have our own personal knowledge base that is unique to us. AI PCs also bring a new heterogeneous computing architecture. Traditionally, we’ve had CPU and GPU capabilities, but now we have this new NPU, or neural processing unit, which is designed to run continuous workloads at a lower power state. You now have three engines on board.
The final two components that define an AI PC are an open ecosystem of AI applications – we are very open to running these applications on these devices – and privacy and security protection. These five elements that make up an AI PC are all-encompassing. At Lenovo, we’ve built a lens through which to look at and apply the use cases or capabilities of an AI PC. We call it 3P, and it stands for Personalized, Productive and Protected experiences.
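For readers wondering how the CPU/GPU/NPU split Butler describes shows up in software, the pattern is usually an application asking its runtime which execution providers the machine exposes and preferring the low-power accelerator when it exists. The sketch below is a generic illustration, not Lenovo-specific: it assumes the onnxruntime Python package (with a vendor build that includes NPU support) and a local model file named summarizer.onnx, which is a hypothetical placeholder.

```python
# Illustrative sketch: pick the "best" execution provider available on an AI PC.
# Assumes: pip install onnxruntime (or a vendor build that ships NPU support),
# and a local ONNX model file; "summarizer.onnx" is a hypothetical placeholder.
import onnxruntime as ort

PREFERRED = [
    "QNNExecutionProvider",   # Qualcomm NPU, where the vendor build provides it
    "DmlExecutionProvider",   # DirectML GPU acceleration on Windows
    "CPUExecutionProvider",   # always present as a fallback
]

def create_session(model_path: str) -> ort.InferenceSession:
    available = ort.get_available_providers()
    # Keep only the preferred providers this machine actually exposes.
    providers = [p for p in PREFERRED if p in available] or available
    return ort.InferenceSession(model_path, providers=providers)

if __name__ == "__main__":
    session = create_session("summarizer.onnx")  # hypothetical model file
    print("Running on:", session.get_providers()[0])
```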
Sounds interesting. But are these AI PCs going to cost a lot more than regular PCs?
No, but they are going to drive a premium in the market. There are going to be higher memory requirements for these devices because they’re driving AI workloads that consume some of the memory. So naturally, you’re going to see the configurations move. Traditionally, it might have been enough to have 8 GB or 16 GB of memory. Now you’re moving to 16 GB or 32 GB. In terms of cost, AI PCs are generally slightly higher, but some of that is driven by the configurations as well.
What can you do with AI PCs that you may not be able to do with regular PCs?
If I think about that 3P framework, it’s really driving a much higher degree of output and efficiency on-device that you could not get before. When it comes to content creation or automating some basic tasks, (using AI) is effectively giving you time back so you can apply your creative knowledge and creative capabilities to other activities, versus having to do the routine or mundane tasks. That’s some of the immediate gains we’re seeing – and then also driving more personalized experiences.
At (Lenovo) Tech World, we introduced new software called AI Now. It’s effectively an (AI) agent that runs a local large language model based on Llama 3.1 and handles file management, content summarization, device management – all immediately on your own device, so you’re not having to go up into the cloud (which can take more time). You can run queries against the personal documents that you’ve put into your personal knowledge base, and the AI agent gives you those summarizations and other outcomes. So (it brings) a tremendous level of personalization, but also productivity.
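What Butler describes – a local Llama 3.1-based model answering questions over a personal knowledge base – maps onto a familiar retrieve-then-generate pattern. The sketch below is a generic illustration of that pattern, not Lenovo’s AI Now implementation: it assumes the llama-cpp-python package and a locally downloaded GGUF build of Llama 3.1 (the file name is a placeholder), and it uses naive keyword retrieval where a real product would use a proper index.

```python
# Minimal local retrieval-and-summarize sketch (not Lenovo's AI Now).
# Assumes: pip install llama-cpp-python, plus a local GGUF model file;
# "llama-3.1-8b-instruct.Q4_K_M.gguf" and "./my_documents" are placeholders.
from pathlib import Path
from llama_cpp import Llama

llm = Llama(model_path="llama-3.1-8b-instruct.Q4_K_M.gguf", n_ctx=4096, verbose=False)

def load_knowledge_base(folder: str) -> dict[str, str]:
    """Read the user's own documents into memory - the 'personal knowledge base'."""
    return {p.name: p.read_text(errors="ignore") for p in Path(folder).glob("*.txt")}

def retrieve(docs: dict[str, str], query: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; a real product would use a proper index."""
    words = set(query.lower().split())
    scored = sorted(docs.items(),
                    key=lambda kv: len(words & set(kv[1].lower().split())),
                    reverse=True)
    return [text[:2000] for _, text in scored[:k]]

def ask(docs: dict[str, str], question: str) -> str:
    context = "\n---\n".join(retrieve(docs, question))
    out = llm.create_chat_completion(messages=[
        {"role": "system", "content": "Answer only from the provided documents."},
        {"role": "user", "content": f"Documents:\n{context}\n\nQuestion: {question}"},
    ])
    return out["choices"][0]["message"]["content"]

if __name__ == "__main__":
    kb = load_knowledge_base("./my_documents")  # hypothetical folder of personal files
    print(ask(kb, "Summarize my notes about the Q3 travel budget."))
```

Because everything above runs on the device, the personal documents never leave the machine – which is the privacy argument behind keeping the knowledge base local.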
Your PC will not be a dumb brick anymore. It’s actually going to be smart.
Yes, exactly. What we’re driving toward is effectively a hybrid AI market as we go forward. Much of the AI capability has been in the cloud up to this point, but now, with these new AI PCs coming into the market, you have the ability to run a lot of these (AI activities) locally on-device. There are still going to be times when you’re going to go to the cloud, and that’s why we express it as a hybrid AI existence. You can run things locally, go up to the cloud as needed, come back and run it locally. That said, a lot of this personalization will happen locally on-device.
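In practice, the hybrid model Butler describes comes down to a per-request routing decision made by the application. The sketch below is a deliberately simplified, hypothetical illustration: run_locally and run_in_cloud are stand-in stubs, and the routing rule (a privacy flag plus request size) is an assumption, not Lenovo’s policy.

```python
# Hypothetical hybrid-AI router: keep personal or lightweight requests on-device,
# send heavy ones to the cloud. run_locally / run_in_cloud are stand-in stubs.

def run_locally(prompt: str) -> str:
    return f"[local model] {prompt[:40]}..."   # placeholder for an on-device model call

def run_in_cloud(prompt: str) -> str:
    return f"[cloud model] {prompt[:40]}..."   # placeholder for a hosted-model API call

def route(prompt: str, contains_personal_data: bool, max_local_words: int = 2000) -> str:
    # Assumed rule of thumb: personal data stays local; very large jobs go to the cloud.
    if contains_personal_data or len(prompt.split()) <= max_local_words:
        return run_locally(prompt)
    return run_in_cloud(prompt)

print(route("Summarize the meeting notes on my desktop.", contains_personal_data=True))
```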
I think you’ve started shipping these AI PCs.
We actually started in December last year.
How are sales going?
It’s going well. They’re starting to ramp into the market. What we’re seeing a lot of our customers do – from small and medium-size businesses up to large and global companies – is testing, piloting and trialing, determining how they’ll bring AI into their workplace, and then starting to activate it as they identify the different use cases that drive the outcomes they want.
Tell me about the agentic part of AI PCs.
That’s where it’s going to become very interesting. It’s a move we’ve been talking about for the last year-plus around large language models. We’re going to move into large action models in time – and also use smaller language models that are running specific tasks or driving specific actions on-device.
… For example, if you want to plan a trip and if you have the ability to run these action models, you may say, ‘I want to fly from Dallas to Seattle’ during a certain period of time. Now you’ve got several models that may go out and execute different tasks. You’ve got one that could build out your flight itinerary, one that could go out and search for hotels based on your preferences, and you may even have one that goes and starts making recommendations for restaurant reservations. You’ve got these different models that will spin up and then bring back these outcomes to you. That’s really the next phase we’re moving into.
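Butler’s trip-planning example boils down to an orchestrator fanning a single goal out to task-specific “action” agents and collecting their results. The sketch below is a generic, hypothetical illustration of that pattern; the agent functions are stubs that, in a real system, would each drive their own small model or tool.

```python
# Hypothetical agent-orchestration sketch: one request fans out to task-specific
# "action" agents that run concurrently and report back. The agents are stubs.
from concurrent.futures import ThreadPoolExecutor

def flight_agent(request: dict) -> str:
    return f"Itinerary: {request['origin']} -> {request['destination']}, {request['dates']}"

def hotel_agent(request: dict) -> str:
    return f"Hotels in {request['destination']} matching saved preferences"

def dining_agent(request: dict) -> str:
    return f"Dinner reservation suggestions in {request['destination']}"

def plan_trip(request: dict) -> dict:
    agents = {"flights": flight_agent, "hotels": hotel_agent, "dining": dining_agent}
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(agent, request) for name, agent in agents.items()}
        return {name: future.result() for name, future in futures.items()}

if __name__ == "__main__":
    trip = plan_trip({"origin": "Dallas", "destination": "Seattle", "dates": "June 3-7"})
    for task, outcome in trip.items():
        print(f"{task}: {outcome}")
```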
You’re using Llama 3.1 – that’s generative AI. What techniques are you using to limit hallucinations?
Adjacent to the Llama 3 model, we’re running a Microsoft Phi model as a safety checker that screens the output you get from AI Now. But we’re also limiting the capability of that model, so it’s not meant to be a full-fledged, open model; it’s limited to creating, indexing and querying your personal knowledge base.
Those would be the files that you have put specifically into your own personal knowledge base, as well as device management queries. It’s not going to give you general or broad answers, which allows us to eliminate or lessen any issues around hallucination. … It’s very structured to be precise and personal to you and for you, on-device and running local.
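The approach Butler outlines – a small second model screening what the primary model produces, plus a hard limit on what the primary model is allowed to do – can be expressed as a wrapper around the answer path. The sketch below is a generic illustration under those assumptions, not Lenovo’s implementation: primary_model and safety_model are hypothetical stand-ins for a local Llama-based model and a Phi-class checker, and the intent gate is deliberately simplistic.

```python
# Hypothetical guarded-query sketch: a primary local model answers only
# knowledge-base and device questions, and a second, smaller model screens
# each answer. primary_model / safety_model are stand-ins, not real APIs.

ALLOWED_INTENTS = {"kb_query", "device_management"}

def primary_model(prompt: str) -> str:
    return f"(answer drawn from the personal knowledge base for: {prompt})"

def safety_model(answer: str) -> bool:
    """Stand-in checker: reject drafts that look speculative or unsupported."""
    return "i think" not in answer.lower() and "probably" not in answer.lower()

def classify_intent(prompt: str) -> str:
    # Assumed, simplistic intent gate; a real system would use a classifier.
    return "device_management" if "battery" in prompt.lower() else "kb_query"

def guarded_answer(prompt: str) -> str:
    if classify_intent(prompt) not in ALLOWED_INTENTS:
        return "Sorry, that request is outside what this assistant is allowed to do."
    draft = primary_model(prompt)
    return draft if safety_model(draft) else "The answer did not pass the safety check."

print(guarded_answer("What did my notes say about the project deadline?"))
```

Constraining the model to a narrow set of intents, as described above, is what lets the system trade breadth for precision and keep hallucination risk low.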
All of your PCs will be AI PCs – or are they all AI PCs now?
We’re shipping a mix right now. We’re in the middle of a transition. We’re transitioning off non-AI PCs over to AI PCs, just as we see in any transition in the market. Historically, it takes some time, as we continue shipping some of the older devices as well as the new devices.
What are your plans for agentic AI in 12 to 24 months?
It’s going to be more about the software ecosystem, or the partner ecosystem, as that develops over time. The engine – the hardware – is now available, so we can take advantage of running AI locally. So the hardware’s here, and the software is being created.
This is a new frontier for PCs. How does Lenovo innovate?
We focus heavily on R&D. We’ve doubled our investment in research and development. Today, roughly 25% of our employees are working in R&D or in an innovation role. We’ve committed to adding another $1 billion in investment for AI over the next three years.
I run the commercial notebook business and portfolio for Lenovo, and we are constantly focused on the next three- to five-year horizon. We’re sitting in 2024, we’ve basically finalized 2025, we’re kicking off projects for 2026, and we’re looking at 2027 right now. We’re actively focused on the future. And that’s just the general product management landscape, independent of the R&D efforts that we’re putting in place.
How do you elicit new ideas from your team?
We start and finish with the customer. What customer outcomes are we trying to drive toward? We also hold a lot of NDA-level discussions with our customers, where we talk about features, show them concepts, and describe what we’re driving toward to test feasibility as well as acceptance in the market. That allows us to sharpen or course-correct as we’re working through different concepts. We’re constantly (collaborating) in person – also via online surveys and a variety of different tools – but a lot are in-person discussions. Internally, we meet on a regular basis to discuss innovative projects and concepts.
How do you ensure that the top leaders are receptive to new ideas? How do you handle resistance to change? You’re a big company.
The beauty of what Lenovo does is that we effectively operate in a culture with a challenger mindset. We are constantly pushing ourselves on how to bring innovation to the market. We pride ourselves on having the most comprehensive portfolio. We talk about ownership from the pocket to the cloud, because we have everything from Motorola phones to PC devices, to the edge, all the way to the cloud and into the data centers with servers and storage.
We have this end-to-end approach, and we have expertise across this full spectrum. And so we have the ability to innovate end-to-end like no other company can on the planet – having a level of expertise, and then each team along that continuum is tasked with driving innovation as well. … Even though we are the number one PC player in the market, we still are focused on innovation as part of our core DNA.
What’s your outlook for the AI PC market?
AI … is going to take over the market. All PCs in the three- to five-year time horizon will be AI PCs effectively, and we will be operating with a set of devices and solutions in the market that will give us time back from routine or mundane tasks by using agentic models. But it’s going to be very much a hybrid AI landscape, where we’re on a local device with a very personalized experience, and we can go to the cloud as needed.
The other thing to be on the lookout for is the way we interact with our devices. Whereas we have historically been very much a ‘type on the keyboard and you get an output’ crowd, I think you’re going to see a more dynamic way of interacting with devices emerge going forward, where we’re having a more natural, conversational interaction, because the level of intelligence on-device is going to be significantly higher than it is today.