Is Open Source Innovation Now All About Vendor On-Ramps?

'InfoWorld published an interesting essay from Matt Asay, former COO at Canonical (and an emeritus board member of the Open Source Initiative), about innovation from the big public cloud vendors, which "even when open-sourced, doesn't really help the community at large... All this innovation is available to buy; none of it is available to build. Not for mere mortals, anyway."

Google in particular has figured out how to both open-source code in a useful way and make it pay. As Server Density CEO David Mytton has underlined, Google hopes to "standardize machine learning on a single framework and API," namely TensorFlow, then supplement it "with a service that can [manage] it all for you more efficiently and with less operational overhead," namely Google Cloud. By open-sourcing TensorFlow and backing it with machine-learning-heavy Google Cloud, Google has open-sourced a great on-ramp to future revenue.

My question: why not do this with the rest of its code? The simple answer is "Because it's a lot of work." That is, Google could open-source everything tomorrow without any damage to its revenue, but the code itself would provide other providers and enterprises only limited ability to increase their revenue unless Google did all the necessary prep work to make it useful to mere mortals not running superhuman Google infrastructure.

This is the trick that AWS, Microsoft, and Google are all racing to figure out today. Not open source, per se, because that's the easy table stakes. No, the AWS/Microsoft Azure/Google Cloud trio are figuring out how to turn their innovations into open source on-ramps to their proprietary services. Companies used to lock up their code to sell it. Today, it's the opposite: They need to open it up to make their ability to operate the code at scale more valuable. For them.'

-- source: https://news.slashdot.org/story/17/12/02/0833217

Cheers, Peter
--
Peter Reutemann
Dept. of Computer Science, University of Waikato, NZ
+64 (7) 858-5174
http://www.cms.waikato.ac.nz/~fracpete/
http://www.data-mining.co.nz/
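For what it's worth, the framework half of that on-ramp really is usable by mere mortals: the open-source TensorFlow release trains models on a single ordinary machine with no Google Cloud involvement. A minimal sketch, using the TensorFlow 1.x API current at the time of the article; the toy linear-regression data below is made up purely for illustration:

import numpy as np
import tensorflow as tf  # the open-source framework, pip-installable

# Fabricated data for the example: y = 3x + 1.
x_data = np.random.rand(100).astype(np.float32)
y_data = 3.0 * x_data + 1.0

w = tf.Variable(0.0)
b = tf.Variable(0.0)
loss = tf.reduce_mean(tf.square(w * x_data + b - y_data))
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_op)
    print(sess.run([w, b]))  # should converge towards [3.0, 1.0]

The paid part is not code like this; it is having someone else operate it for you at scale.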

On Mon, 4 Dec 2017 08:45:42 +1300, Peter Reutemann wrote:
"All this innovation is available to buy; none of it is available to build. Not for mere mortals, anyway."
Or, you could build and run it on your own modest cluster, made from something like a bunch of Raspberry πs <https://list.waikato.ac.nz/pipermail/wlug/2017-November/015481.html>. I think this is called “connecting the dots” ...
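The open-source framework even ships a distributed runtime, so spreading work across a handful of Pis is, at least in principle, a matter of configuration rather than of buying a managed service. A rough sketch, assuming TensorFlow 1.x and hypothetical hostnames pi00/pi01/pi02 for the cluster nodes (ARM builds of TensorFlow for the Pi were community-provided at the time, so mileage may vary):

import tensorflow as tf

# One process like this runs on every node; only job_name/task_index differ.
cluster = tf.train.ClusterSpec({
    "ps":     ["pi00:2222"],                  # parameter server
    "worker": ["pi01:2222", "pi02:2222"],     # workers doing the compute
})
server = tf.train.Server(cluster, job_name="worker", task_index=0)

with tf.device("/job:ps/task:0"):
    w = tf.Variable(0.0)                      # shared state lives on the PS

with tf.device("/job:worker/task:0"):
    update = tf.assign_add(w, 1.0)            # trivial stand-in for real work

with tf.Session(server.target) as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(update))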

I wrote:
Or, you could build and run it on your own modest cluster, made from something like a bunch of Raspberry πs ...
Apropos this reader comment <https://forums.theregister.co.uk/forum/containing/3363988>:

"BTW, performance of 200 Raspberry PI 3s has absolutely destroyed two full Cisco/NetApp all-SSD 40Gb/sec ACI racks on absolutely every business process by designing our software and system together. For large data processing, Map/Reduce is far better on 200 PIs each with an mSATA SSD than on any NetApp product we've seen."
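The Map/Reduce point is easy to demonstrate in miniature: the pattern is just "map a function over chunks of data in parallel, then merge the partial results", which is what lets it spread over many small nodes. A toy word-count sketch in plain Python, where multiprocessing on one box stands in for the cluster and the filename and worker count are assumptions for illustration:

from collections import Counter
from multiprocessing import Pool

def map_chunk(lines):
    # Map step: word counts for one chunk of the input.
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def reduce_counts(partials):
    # Reduce step: merge the per-chunk counts.
    total = Counter()
    for counts in partials:
        total.update(counts)
    return total

if __name__ == "__main__":
    with open("input.txt") as f:          # hypothetical input file
        lines = f.readlines()
    workers = 4                           # one Pi (or core) per chunk
    chunks = [lines[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(map_chunk, chunks)
    print(reduce_counts(partials).most_common(10))

On a real Pi cluster the Pool would be replaced by something like Hadoop Streaming dispatching chunks to each node's local SSD, but the map and reduce functions keep the same shape.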
participants (2)
- Lawrence D'Oliveiro
- Peter Reutemann