A host of novel chip features, including analog computation, will deliver a more efficient, more reliable, more secure AI processor for multiple markets, CEO Venkat Mattela claims.
The battle to change the computer industry so that machines can better compute artificial intelligence tasks, especially deep learning, continues to birth new and interesting potential future stars. On Monday, Ceremorphic of San Jose, California, formally debuted chip efforts that had been kept in stealth mode for two years, discussing a chip the company claims will revolutionize the power efficiency of AI computing.

"It's counterintuitive today, but higher performance is lower power," said Venkat Mattela, founder and CEO of the company, in an interview with ZDNet via Zoom.

Mattela believes that numerous patents on low-power operation will enable his company's chip to produce the same accuracy on signature machine learning tasks with much less computing effort. "What I'm trying to do is not just build a semiconductor chip but also the math and the algorithms to reduce the workload," he said. "If a workload takes a hundred operations, I want to bring it down to fifty operations, and if fifty operations cost less energy than a hundred, I want to say mine is a higher-performance system."

Mattela is wading into a heavily contested market, one where startups such as Cerebras Systems, Graphcore, and SambaNova have received vast sums of money and where, for all their achievements, they still struggle to topple the industry heavyweight, Nvidia.

Mattela is inclined to take the long view. His last startup, Redpine Signals, was built over a period of fourteen years, starting in 2006. That company was sold to chip maker Silicon Labs in March 2020 for $314 million for its low-power Bluetooth and WiFi chip technology. (The chip is now used in the recently introduced Garmin Fenix 7 smartwatch.)

The lesson of that fourteen-year effort at Redpine, and now at Ceremorphic, is twofold. "I have a lot of patience," he observed of himself with a chuckle.
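Mattela's argument, that fewer operations at equal accuracy means less energy per task, and therefore higher effective performance, comes down to simple arithmetic. The sketch below illustrates it with made-up numbers (the per-operation energy cost is hypothetical, not a Ceremorphic figure):

```python
# Back-of-the-envelope version of Mattela's claim: halving the number of
# operations per task at equal accuracy halves the energy per task, which
# doubles tasks-per-joule. All numbers here are illustrative assumptions.

ENERGY_PER_OP_J = 1e-9  # assumed joules per operation (hypothetical)

def energy_per_task(num_ops: int) -> float:
    """Energy to finish one task, assuming a fixed energy cost per op."""
    return num_ops * ENERGY_PER_OP_J

baseline = energy_per_task(100)  # conventional workload: 100 operations
reduced = energy_per_task(50)    # reworked math/algorithms: 50 operations

# "Higher performance" measured as tasks completed per joule of energy:
perf_baseline = 1 / baseline
perf_reduced = 1 / reduced

print(f"energy saved: {1 - reduced / baseline:.0%}")             # 50%
print(f"performance gain: {perf_reduced / perf_baseline:.1f}x")  # 2.0x
```

By this metric, the same silicon running a lighter algorithm is a "higher-performance system" even if its raw clock rate and throughput are unchanged, which is exactly the framing Mattela uses.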
And, "I don't do incremental things."

Mattela contends that when he takes on a problem in an area of chip design, it is in such a way as to get meaningfully ahead of the state of the art. The Redpine wireless chip technology Silicon Labs bought, he said, went up against the offerings of giant companies, Qualcomm and Broadcom, in Bluetooth and WiFi. "I took a big challenge, I went against them, but only with one metric, ultra-low-energy wireless, twenty-six times less energy than the best in the industry," said Mattela.

Now, Mattela believes he has a similarly winning focus on power, along with three other qualities he deems both unique in the AI chip market and essential to the discipline: reliability, quantum-safe security, and an ability to function in multiple markets.

To make all that possible, Mattela held onto the microprocessor assets that had been developed at Redpine to form the foundation of Ceremorphic, retaining eighteen employees from that effort, whom he has complemented by hiring another 131 people. The company has offices in both San Jose, the official HQ, and a gleaming new office building in Hyderabad, India.

Mattela has an intriguing list of 26 U.S. patents with his name on them, and an equally intriguing list of 14 U.S. patent applications from the last few years.

What Mattela dubs a "Hierarchical Learning Processor," or HLP, consists of a computing element for machine learning running at 2 gigahertz; a custom floating-point unit at the same clock frequency; a custom-designed multi-threading workload-scheduling approach; and specially designed 16-lane PCIe gen-6 circuitry to connect the processor to a system's host processor, such as an x86 chip. The last of these, the PCIe part, could almost be its own company, claims Mattela.
"Right now, what is in production is PCIe-4, the dominant one, and PCIe-5 just started last year," explained Mattela. "And with us, PCIe-6 will be in production in 2024. I own that technology."
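For a sense of why the PCIe-6 claim matters: each PCIe generation doubles the per-lane signaling rate, and PCIe 6.0 reaches 64 GT/s per lane using PAM4 signaling. A quick sketch of the raw-bandwidth arithmetic for a 16-lane link like the one Ceremorphic describes (raw rates only; FLIT encoding and protocol overhead reduce real-world throughput somewhat):

```python
# Raw one-direction bandwidth for a 16-lane PCIe link across generations.
# GT/s is gigatransfers per second per lane; each transfer carries one
# raw bit per lane, so raw bits/s per lane = GT/s * 1e9. Encoding and
# protocol overhead (not modeled here) lower usable throughput.

GT_PER_S = {"PCIe 4.0": 16, "PCIe 5.0": 32, "PCIe 6.0": 64}

def raw_gb_per_s(gen: str, lanes: int = 16) -> float:
    """Raw one-direction bandwidth in GB/s for a link with `lanes` lanes."""
    bits_per_s = GT_PER_S[gen] * 1e9 * lanes
    return bits_per_s / 8 / 1e9  # bits -> bytes -> gigabytes

for gen in GT_PER_S:
    print(f"{gen} x16: {raw_gb_per_s(gen):.0f} GB/s per direction")
# PCIe 4.0 x16: 32 GB/s; PCIe 5.0 x16: 64 GB/s; PCIe 6.0 x16: 128 GB/s
```

So moving from the PCIe-4 links common in production systems to PCIe-6 quadruples the raw host-to-accelerator bandwidth, which is why Mattela treats the interconnect block as a valuable asset in its own right.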