There has long been a desire for a set of industry-standard machine learning benchmarks, akin to the SPEC benchmarks for CPUs, to enable direct comparisons between competing solutions. Over the past two years, MLCommons, an open engineering consortium, has been developing and publishing its MLPerf benchmarks for training and inference, with key consortium members releasing benchmark numbers as the series of tests has been refined. Today we see the full launch of MLPerf Inference v1.0, along with roughly 2,000 results submitted to the database. Alongside this launch, a new MLPerf Power Measurement technique, which provides additional power metadata for these test results, is also being disclosed.