Tech for tech's sake with niche uses that traditional hardware can handle
Opinion If you haven't heard of neural processing units (NPUs) by now, you must have missed a year's worth of AI marketing from Intel, AMD, and Qualcomm.
In the past 12 months, these AI-focused processors have been touted as the next essential upgrade - one that everyone apparently needs to make the most of artificial intelligence. But is this all just marketing hype, or do NPUs genuinely offer the transformative value they promise?
NPUs are specialized processors within system-on-chips (SoCs) designed to handle AI-specific tasks, such as background noise suppression, real-time video enhancement, and basic generative AI functions. Companies including Intel with its VPU in Meteor Lake, AMD with Ryzen AI, and Qualcomm with the Hexagon AI processor have all embedded NPUs into their silicon, claiming they will revolutionize the computing experience by making devices smarter and more efficient. The idea is to offload AI workloads from the CPU and GPU to save power, theoretically improving battery life and providing faster on-chip AI processing.
But are these AI-enabled processors genuinely game-changing features or are they occupying precious die space that could be better utilized to meet the real needs of users?
While NPUs do add efficiency, particularly in mobile devices where every watt saved is valuable, their impact on laptops - where battery life is already robust - is harder to justify. The tasks NPUs handle are largely niche and have a limited impact on the average user's experience. If you're someone who frequently uses voice commands or relies heavily on video call enhancements, an NPU might save some battery life. But for most users, at present it's a nice-to-have feature rather than an essential one. CPUs and GPUs have managed these functions adequately for years, and while NPUs might lower power consumption slightly, the innovation is more about incremental efficiency than offering meaningful new capabilities.
Take Intel's Meteor Lake VPU, for instance. It's marketed as a solution for on-device AI tasks like video call background blurring and noise cancellation - tasks that CPUs and GPUs have been handling effectively. The primary benefit is a marginal boost in power efficiency, which, while useful, is unremarkable when you consider the overall computing experience. AMD's Ryzen AI takes a similar approach, offering efficiency gains without groundbreaking functionality. Qualcomm's Hexagon processor, drawing from its mobile pedigree, brings similar capabilities to laptops but doesn't significantly expand the range of applications for most users.
When discussing NPUs, vendors often highlight TOPS as a metric of performance. Intel's upcoming Lunar Lake platform boasts a 48 TOPS NPU, AMD's Ryzen AI 300 series is capable of 55 TOPS, and Qualcomm's Snapdragon X Elite features a 45 TOPS NPU. These numbers are thrown around as if they have substantial meaning for real-world users.
However, TOPS is a theoretical measurement of peak performance under ideal lab conditions. It's calculated based on the number of multiply-accumulate (MAC) units and operating frequency, but it doesn't necessarily translate to actual performance gains in everyday use. For the average user, these figures are as meaningful as theoretical horsepower in a car they will never drive at top speed.
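To illustrate why TOPS is a theoretical ceiling rather than a real-world benchmark, the arithmetic behind the headline number can be sketched as below. The MAC count and clock speed here are illustrative assumptions, not actual vendor specifications - each MAC unit is simply counted as two operations (one multiply, one add) per clock cycle.

```python
def peak_tops(mac_units: int, freq_ghz: float) -> float:
    """Theoretical peak TOPS: each MAC unit contributes
    2 operations (multiply + accumulate) per clock cycle."""
    ops_per_second = mac_units * 2 * freq_ghz * 1e9
    return ops_per_second / 1e12  # tera-operations per second

# Hypothetical configuration: 12,288 MAC units at ~1.95 GHz lands
# near the 45-55 TOPS range the vendors quote.
print(round(peak_tops(12_288, 1.95), 1))  # → 47.9
```

Note that the formula assumes every MAC unit is busy on every cycle - a utilization level real workloads rarely approach, which is exactly why the marketing figure says little about everyday performance.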
Including an NPU consumes valuable die space, which could potentially be allocated to more universally beneficial features, such as additional CPU cores or GPU capabilities. Using AMD's Zen 5-based Ryzen AI 300 mobile SoC as an example, the NPU occupies about 10-15 percent of the die space - a significant portion. If that space were instead used to add more CPU cores, users could experience noticeable improvements in multi-threaded applications, benefiting developers, content creators, and power users alike.
Alternatively, expanding the integrated GPU could offer better graphics performance, a feature that would be appreciated by gamers and professionals using graphics-intensive applications. Given that GPUs have traditionally been the go-to hardware for AI workloads, enhancing GPU capabilities could serve a dual purpose.
Manufacturers promote NPUs as essential for future-proofing laptops in an AI-driven world. However, given the rapidly evolving nature of AI, it's challenging to predict which hardware features will remain relevant. While NPUs do offer some advantages for specific AI tasks, most users are unlikely to notice their absence. The majority of everyday computing tasks - like web browsing, document editing, and media consumption - do not require AI-driven optimization.
Even for users who occasionally interact with AI-powered features, CPUs and GPUs are typically sufficient for handling these workloads, albeit with slightly higher power consumption. The promise of NPUs lies more in potential future applications than in current, tangible benefits for the average consumer.
While AI has many practical applications - such as speech-to-text conversion and real-time translation tools - the inclusion of NPUs in laptops feels premature. The technology seems to be a solution in search of a problem, driven more by marketing strategies than by actual user demand. Until AI applications become truly mainstream and indispensable in daily computing, NPUs may remain an overhyped feature rather than an essential component.
In the meantime, consumers might benefit more from enhancements in processing power, graphics capabilities, and overall system performance - improvements that offer immediate and noticeable advantages. As it stands, NPUs are an interesting development, but perhaps not the game-changing innovation they're made out to be - at least not for most users.
PC makers are keen to promote hardware containing an NPU, possibly because AI PCs promise to lift average sales prices across the sector by five to ten percent. ®