
GTC 2022: NVIDIA flexes its GPU and platform muscles

NVIDIA CEO Jensen Huang’s 1-hour, 39-minute keynote covered a lot of ground, but the unifying themes across the majority of the two dozen announcements were the GPU and the company’s platform approach to everything it builds.
Nvidia packed about three years’ worth of news into its GPU Technology Conference today. Flamboyant CEO Jensen Huang’s 1-hour, 39-minute keynote covered a lot of ground, but the unifying themes across the majority of the two dozen announcements were the GPU and Nvidia’s platform approach to everything it builds.

Most people know Nvidia as the world’s largest manufacturer of graphics processing units, or GPUs, chips that were first used to accelerate graphics in gaming systems. Since then, the company has steadily found new use cases for the GPU, including autonomous vehicles, artificial intelligence (AI), 3D video rendering, genomics, digital twins and many others. The company has advanced so far beyond mere chip design and manufacturing that Huang summarized his company’s Omniverse development platform as “the new engine for the world’s AI infrastructure.”

Unlike other silicon manufacturers, Nvidia delivers its product as more than just a chip. It takes a platform approach, designing complete, optimized solutions that are packaged as reference architectures for its partners to build in volume. The 2022 GTC keynote offered many examples of this approach.

As noted earlier, the core of every Nvidia solution is the GPU, and at GTC22 the company announced its new Hopper H100 chip, built on a new architecture designed to be the engine of massively scalable AI infrastructure. The silicon features a whopping 80 billion transistors and includes a new Transformer Engine designed specifically for training and inference of transformer models.

For those with only a cursory knowledge of AI, a transformer is a neural network built around a concept called “attention.” With attention, every element in a piece of data works out how much it needs to know about every other part of the data. Traditional neural networks look only at neighboring data, whereas transformers see the entire body of information. Transformers are used extensively in natural language processing (NLP), since predicting the next word in a sentence, or resolving what a pronoun refers to, depends on understanding all the other words in use and the sentence structure the model needs to learn.
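To make the attention idea concrete, here is a minimal sketch of scaled dot-product self-attention in plain NumPy. It is purely illustrative, not Nvidia’s implementation or the H100’s Transformer Engine; the function name, array shapes and toy data are assumptions for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention.

    Q, K, V: arrays of shape (seq_len, d_model); each row is one
    element of the sequence (e.g., one token's vector).
    """
    d_k = K.shape[-1]
    # Every element scores its relevance against every other element.
    scores = Q @ K.T / np.sqrt(d_k)            # (seq_len, seq_len)
    # Softmax turns each row of scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted mix of all value vectors, so every
    # position "sees" the whole sequence, not just its neighbors.
    return weights @ V

# Toy example: a 4-token sequence with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)    # self-attention
print(out.shape)                               # (4, 8)
```

The key point the sketch shows is the (seq_len, seq_len) weight matrix: every position attends to every other position, which is why transformers capture whole-sequence context rather than only local neighborhoods.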
