During its 2019 AI Summit this morning, Intel detailed its next-gen Movidius vision processing unit code-named Keem Bay, which is optimized for inferencing workloads at the edge. The chip boasts a new on-die memory architecture that ensures the processor is “fully utilized” and delivers roughly 10 times the raw throughput of the previous generation, according to Intel vice president of IoT Jonathan Ballon.
Keem Bay is a powerhouse, to be sure. Intel says that at a fifth of the power, it’s four times faster than Nvidia’s TX2 and 1.25 times faster than Huawei’s HiSilicon Ascend 310 AI processor. Moreover, it delivers four times the inferences per second per TOPS versus Nvidia’s Xavier, and Ballon says that customers who take “full advantage” of its deep learning architecture can get 50% additional performance.
It’ll launch in the first half of 2020 in a variety of form factors, including PCI Express and M.2.
“It’ll deliver better than GPU performance at a fraction of the power, a fraction of the size and fraction of the cost on comparable products, and it complements our full portfolio of products, tools and services purpose built,” Ballon said.
AI is an increasingly central part of Intel’s business. The company said during its most recent earnings call that annual revenue from AI reached $3.5 billion in 2019. That’s up from $1 billion in 2017, and over a third of the way toward its target of $10 billion by 2022.
“We’re one of the largest [in the market] due to our breadth and depth that allows us to go from a data center out to the edge,” said Naveen Rao, Intel corporate vice president and general manager of the AI Products Group. “And we anticipate this growing, year on year.”