When it comes to training and inference workloads for machine learning models, performance is king: faster systems mean faster time to results. But how do you objectively measure a system's ML training and inference performance? Look to MLPerf.
"MLPerf is a machine learning benchmark suite from the open source community that sets a new industry standard for benchmarking the performance of ML hardware, software and services. Launched in 2018 to standardize ML benchmarks, MLPerf includes suites for benchmarking both training and inference performance. The training benchmark suite measures how fast systems can train models to a target quality metric. Each training benchmark measures the time required to train a model on the specified dataset to achieve the specified quality target. Details of the datasets, benchmark quality targets and implementation models are displayed in the table below..."
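The "time required to train a model to a specified quality target" metric described above can be sketched in a few lines. This is only an illustrative mock-up, not the real MLPerf harness: `train_one_epoch` and `evaluate` are hypothetical stand-ins, and the 0.75 target is an arbitrary example value.

```python
import time

TARGET_QUALITY = 0.75  # example target; real benchmarks fix this per model/dataset

def train_one_epoch(state):
    """Stand-in for one epoch of training; nudges quality upward."""
    state["quality"] += 0.1
    return state

def evaluate(state):
    """Stand-in for evaluation on the benchmark's validation set."""
    return state["quality"]

def time_to_train(target=TARGET_QUALITY):
    """Train until the quality target is reached; report epochs and wall-clock time."""
    state = {"quality": 0.0}
    start = time.perf_counter()
    epochs = 0
    while evaluate(state) < target:
        state = train_one_epoch(state)
        epochs += 1
    elapsed = time.perf_counter() - start
    return epochs, elapsed

epochs, elapsed = time_to_train()
print(f"reached target quality in {epochs} epochs ({elapsed:.4f}s)")
```

The key point is that the benchmark's score is elapsed wall-clock time to a fixed quality bar, so any optimization (hardware, software, or hyperparameters, within the rules) that shortens that time improves the result.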
If you want to find a list of the 20 best electric toothbrushes for 2020, you have plenty of options.
"Where do you go when you want to see a list of the top 10 data warehouse technologies? Probably not your regular social media channels. For those of you who don't follow database benchmarking news, it's the Transaction Processing Performance Council (TPC) that provides the most trusted source of independently audited database performance benchmarks. Two of TPC's primary activities are creating good benchmarks and creating a process for auditing those benchmarks..."