
Testing Out HPC On Google's TPU Matrix Engines
The Next Platform, May 2nd, 2022
In an ideal platform cloud, you would not know or care what the underlying hardware was or how it was composed to run your HPC - and now AI - applications.

The underlying hardware in a cloud would have a mix of different kinds of compute and storage, an all-to-all network lashing it together, and whatever you needed could be composed on the fly.

This is precisely the kind of compute cloud that Google wanted to build back in April 2008 with App Engine and, as it turns out, that very few organizations wanted to buy. Companies cared - and still do - about the underlying infrastructure, but at the same time, Google still believes in its heart of hearts in the platform cloud.

And that is one reason why its Tensor Processing Unit, or TPU, compute engines are only available on Google Cloud. (Although you could argue that the GroqChip matrix math units available through Groq are as much an architectural copy of the TPU as Kubernetes is of Google's Borg container and cluster controller, Hadoop is of Google's MapReduce data analytics and storage, or CockroachDB is of Google's Spanner SQL database.)
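The TPU's defining feature is its matrix math unit, a systolic array that consumes fixed-size tiles (128x128 on recent TPU generations). A hedged sketch of the tiling a compiler performs when mapping a large matrix multiply onto such an engine - the tile size matches public TPU descriptions, but the function and constant names here are illustrative, not Google's actual API:

```python
import numpy as np

# Illustrative tile size; recent TPU MXUs operate on 128x128 tiles.
TILE = 128

def tiled_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Multiply a (m, k) by b (k, n) one TILE x TILE block at a time,
    mimicking how a systolic matrix engine streams tiles through hardware.
    Assumes all dimensions are multiples of TILE, for simplicity."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2 and m % TILE == 0 and k % TILE == 0 and n % TILE == 0
    c = np.zeros((m, n), dtype=np.float32)
    # Each (i, j) output tile accumulates a chain of TILE x TILE products -
    # the accumulation pattern a systolic array pipelines in silicon.
    for i in range(0, m, TILE):
        for j in range(0, n, TILE):
            for p in range(0, k, TILE):
                c[i:i+TILE, j:j+TILE] += a[i:i+TILE, p:p+TILE] @ b[p:p+TILE, j:j+TILE]
    return c

rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256)).astype(np.float32)
b = rng.standard_normal((256, 256)).astype(np.float32)
# The tiled result matches a plain matmul up to float32 rounding.
assert np.allclose(tiled_matmul(a, b), a @ b, atol=1e-3)
```

The point of the sketch is that both HPC dense linear algebra and AI training reduce to exactly this tile-streaming pattern, which is why one matrix engine can serve both workloads.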
