
On Mon, 27 Nov 2017 09:02:17 +1300, Peter Reutemann wrote:
'With each of the 750 chips packing four cores, it offers a 3,000-core highly parallelizable platform that emulates an ARM-based supercomputer, allowing researchers to test development code without requiring a power-hungry machine at significant cost to the taxpayer. The full 750-node cluster, at 2-3 W per processor, runs at 1000W idle, 3000W typical and 4000W at peak (with the switches) and is substantially cheaper, if also computationally a lot slower. After development using the Pi clusters, frameworks can then be ported to the larger scale supercomputers available at Los Alamos National Lab, such as Trinity and Crossroads.'
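The quoted figures hang together on a quick back-of-envelope check. A minimal sketch (all input numbers taken from the quote; the reading that the gap between CPU draw and the typical/peak figures is switch and overhead power is my own inference):

```python
# Sanity-check the cluster figures quoted above.
nodes = 750
cores_per_node = 4
total_cores = nodes * cores_per_node  # matches the quoted 3,000 cores
print(total_cores)

# At the quoted 2-3 W per processor, the bare CPUs draw:
cpu_low_w = nodes * 2   # 1500 W
cpu_high_w = nodes * 3  # 2250 W
print(cpu_low_w, cpu_high_w)

# The quoted 3000 W typical / 4000 W peak therefore implies the
# switches and other overhead account for the remainder.
overhead_typical_w = 3000 - cpu_high_w
print(overhead_typical_w)
```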
Is history repeating itself? Consider how minicomputers, and then microcomputers, elbowed aside the bigger machines just a few decades ago: it was precisely that "substantially cheaper, if also computationally a lot slower" argument all over again. And economies of scale soon put paid to the "lot slower" part of that qualification...