
Intel's Core Ultra Processors Accelerate AI Tasks While Saving You Battery Life


Intel's AI laptop chips are officially in the race to take on AMD, Nvidia and Qualcomm, paired with new, more powerful integrated Intel Arc graphics.
Intel's Meteor Lake mobile chip architecture has been trundling down the road for so long that the announcement of the actual chips feels somewhat anticlimactic, especially since it doesn't offer much reason for most consumers to use AI. (The goal is still selling it to software makers.)
My colleague Stephen Shankland and I have already covered the new low-power E-core, which is intended to run light sustained workloads (think video streaming) without hitting the battery as much as the regular old E-cores. We understand why this AI push is so important to Intel’s business. We know how Intel rebranded the Meteor Lake chips as Core Ultra. The chips are made using the company’s latest Intel 4 process. And Intel is last to the announcement party; AMD and Qualcomm have already planted their flags. 
While the new chips have a small, two-core neural processing unit, or NPU, for AI acceleration, Intel's AI Boost uses the CPU and GPU as well, depending on the type of workload: the CPU is used when speed is needed, and the GPU helps with generative AI workloads.
That’s why you’ll hear Intel and AMD talk about AI performance metrics — TOPS, sometimes referred to as TeraOPS, or trillion operations per second — for the combined system rather than just the NPU. For the Ultra 7 165H chip, you get roughly up to 34 TOPS with 11 TOPS for the NPU, 18 TOPS for the GPU and the rest for the CPU. 
In comparison, AMD says its XDNA in the new Ryzen 8040 series performs at 39 TOPS, attributing 16 TOPS to the NPU and the remainder to the CPU and GPU. So even though NPUs are getting big buzz, they actually only carry a small part of the AI workload in these chips.
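As a back-of-the-envelope check, those vendor-quoted breakdowns can be summed to see how small the NPU's share really is. (The CPU's portion for the Core Ultra 7 165H isn't stated directly, so the ~5 TOPS figure below is inferred from the ~34 TOPS total; AMD doesn't break out its CPU/GPU split.)

```python
# Vendor-quoted AI throughput in TOPS (trillion operations per second).
# These are marketing figures, not measured benchmarks.
chips = {
    "Intel Core Ultra 7 165H": {"NPU": 11, "GPU": 18, "CPU": 5},  # CPU share inferred from ~34 total
    "AMD Ryzen 8040 (XDNA)": {"NPU": 16, "CPU+GPU": 23},          # split beyond the NPU not disclosed
}

for name, units in chips.items():
    total = sum(units.values())
    npu_share = units["NPU"] / total * 100
    print(f"{name}: {total} TOPS total, NPU carries about {npu_share:.0f}%")
```

Run it and the NPU accounts for roughly a third of Intel's total and about 40% of AMD's, which is why the combined-system number is the one both companies lead with.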
But these numbers only paint part of the performance picture for AI, because there's no "typical" AI workload and, at the moment, no consistent method of implementing one; there's still a lot of shakin' going on in the software.
