Nvidia tops AI inference benchmarks, also announces 'world's smallest supercomputer' chip for AI tasks

The benchmark looks at inference tests across categories like image classification and object detection.

Nvidia has topped an independent AI inference benchmark called MLPerf. The test measures AI performance in data centres and at the edge (on your phone, in robots, etc.) across several categories, all of which Nvidia managed to top.

Much like children, AI needs to be taught to do things before it’s allowed into the wild. This is done by first training an AI model in a supercomputer, where it is brought up on a diet of data. The idea is that the AI munches through this data and figures out how to deduce information from it, in other words, the AI learns to infer.

Once trained, this AI model is installed on our phones and PCs, in our smart speakers, in our drones, and in everything else that has the word “AI” plastered across it.

Nvidia has topped a new industry standard benchmark for AI inference performance for edge computing. Image: Nvidia

Google Photos magically classifies your photos and separates dogs from cats because its AI has been trained to do so. The underlying AI model learns to identify the basic features of dogs and cats by analysing millions of images.

Edge computing is when this “inference” happens on your phone directly rather than on a server.
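
To make the idea concrete, here is a minimal sketch of what that on-device inference step might look like, assuming PyTorch and torchvision are available; the MobileNet model, the "photo.jpg" input and the preprocessing values are illustrative stand-ins, not what Google Photos or Nvidia actually run.

```python
# Minimal sketch of on-device ("edge") inference with a pretrained image
# classifier. PyTorch/torchvision and the input file are assumptions made
# for illustration only.
import torch
from torchvision import models, transforms
from PIL import Image

# Load a small pretrained classifier and put it in inference mode.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing that matches the pretrained weights.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "photo.jpg" is a placeholder for whatever image the device has captured.
image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

# Inference needs no gradients, which keeps memory and compute low enough
# to run on a phone or an embedded board.
with torch.no_grad():
    predicted_class = model(batch).argmax(dim=1).item()

print(f"Predicted ImageNet class index: {predicted_class}")
```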

The initial training is computationally intensive and is thus limited to supercomputers. You could train an AI on your PC, but be prepared to wait months, if not years, before you can produce a meaningful model.
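
Training, by contrast, is a loop of guessing, measuring the error and nudging the model’s weights, repeated over millions of examples. The toy loop below, again assuming PyTorch and using randomly generated stand-in data, only shows the shape of that process, not a realistic workload.

```python
# Toy training loop: forward pass, loss, backward pass, weight update.
# The model, data and hyperparameters are illustrative stand-ins.
import torch
from torch import nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(64, 1, 28, 28)   # a fake batch of 64 "images"
labels = torch.randint(0, 10, (64,))  # fake class labels

for epoch in range(5):
    optimizer.zero_grad()
    predictions = model(images)          # forward pass
    loss = loss_fn(predictions, labels)  # how wrong the model currently is
    loss.backward()                      # compute gradients
    optimizer.step()                     # adjust the weights
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```

Real models repeat this over far larger networks and datasets, which is why the heavy lifting happens on clusters of GPUs rather than on your PC.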

The MLPerf benchmark looks at inference tests across categories like image classification, object detection (low and high resolution) and natural language translation. In the data-centre tests, platforms from Google, Habana, Intel and Nvidia faced off, with Nvidia topping the charts.

On the edge computing side of things, Qualcomm, Intel and Nvidia faced off, with only Nvidia participating in the complete test (single-stream and multi-stream across five scenarios). Nvidia’s Xavier platform was even further ahead of its rivals here than it was in the previous test.

The Jetson NX is a credit card-sized supercomputer for AI that can be used in everything from drones to industrial robots.

Alongside these results, Nvidia announced a brand new chip called the Jetson Xavier NX, which Nvidia claims is the “world’s smallest supercomputer for AI at the edge.”

It’s quite a mouthful, but essentially, Nvidia claims that this $399 credit card-sized chip offers up to 21 TOPS (trillion operations per second) of performance while consuming only 10-15 W of power. A decent gaming rig might just manage the same, but it will consume a tonne of power, and it’s certainly not small enough to be placed on a drone.

This chip is so small and light that it can be installed in drones, commercial robots, and even factory robots, thus bringing high-performance AI computing to the “edge.” The NX is up to 15x more powerful than the one it replaces.
