No exascale for you - at least, not within the High-Performance Linpack (HPL) territory of the latest Top500 list, issued today from the 33rd annual Supercomputing Conference (SC21), held in-person in St. Louis, Mo., and virtually, from Nov. 14-19.
'We were hoping to have the first exascale system on this list but that didn't happen,' said Top500 co-author Jack Dongarra in a press briefing this morning.
In an alternate timeline, the United States might have stood up two exascale systems by now: Aurora at Argonne National Laboratory and Frontier at Oak Ridge National Laboratory. Installation continues on the latter, and when we talked to Intel last week, they said that Argonne was preparing for the arrival of Aurora, now slated to be a two exaflops peak machine, doubling its (most recent) previous performance target.
High-performance computing enables organizations to use parallel processing to run advanced workloads such as AI and data analytics.
"The technology combines the processing power of multiple computers to handle complex computing tasks.
When implemented correctly, high-performance computing (HPC) can help organizations handle big data. The technology requires specialized hardware, software frameworks and often large facilities to house it all. HPC also comes with a high price tag, which can act as a barrier to many organizations hoping to implement HPC in their data centers..."
Let's just cut right to the chase scene. The latest Top500 ranking of supercomputers, announced today at the SC21 supercomputing conference being held in St. Louis, needed the excitement of an actual 1 exaflops sustained-performance machine running the High Performance Linpack benchmark at 64-bit precision.
"And because the 1.5 exaflops 'Frontier' system at Oak Ridge National Laboratory is apparently not fully built and the Chinese government is not submitting formal results for two exascale systems it has long since built and had running since this spring, we're hungry.
Excitement is as vital a component in supercomputing as any compute engine, interconnect, or application framework, and excitement is what competition in the industry is supposed to deliver - aside from the ability to do more science and better science. We are emotional people, living during a global pandemic that may never go away, during what is supposed to be a shining week for supercomputing and socialization amongst its participants, and this is just not right..."
The MLCommons industry group today detailed an upgraded version of MLPerf HPC, its benchmark suite for measuring how fast a supercomputer can train artificial intelligence models.
"The group, which is backed by some of the tech industry's most prominent companies, also shared the results from its latest supercomputer performance contest. The contest was carried out using the new version of the MLPerf HPC benchmark suite that debuted today. Eight supercomputing organizations participated.
MLCommons is an AI-focused engineering consortium backed by chipmakers such as Nvidia Corp. and Advanced Micro Devices Inc., as well as a long list of other tech firms, including Google LLC. MLCommons is responsible for developing a popular set of benchmark suites used to measure how fast different types of systems can run AI models..."
This is an update release for the OpenHPC (OHPC) 2.x branch targeting support for RHEL8 variants and openSUSE Leap 15.
"In addition to a number of component version updates, this release updates previous CentOS8-based recipes to leverage Rocky8.
Note that users who previously enabled the OHPC 2.x repository via the ohpc-release package will have access to the updates available in 2.4 and no additional repository enablement should be necessary. Please see the Release Notes and documentation for more detailed information..."
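The repository enablement the release notes refer to can be sketched roughly as follows. This is a hypothetical example assuming an EL8-family (e.g. Rocky 8) system and the standard OpenHPC 2.x recipe; the exact `ohpc-release` RPM URL and version should be taken from the official OpenHPC install guide for your distribution.

```shell
# One-time enablement of the OpenHPC 2.x repository on an EL8-family system.
# The RPM URL below is illustrative; confirm the current path/version in the
# OpenHPC install guide before running.
sudo dnf install \
  http://repos.openhpc.community/OpenHPC/2/CentOS_8/x86_64/ohpc-release-2-1.el8.x86_64.rpm

# Once ohpc-release is installed, the 2.x repo definition is in place, so
# point updates such as 2.4 arrive through the normal update mechanism --
# no additional repository enablement is needed:
sudo dnf update
```

Because `ohpc-release` only drops a repo definition under `/etc/yum.repos.d/`, systems that enabled it for an earlier 2.x release pick up 2.4 automatically on the next update, which is what the note above is saying.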