Comments On “Junk your IT. Now. Before it drags you under”

<http://www.theregister.co.uk/2015/10/15/junk_your_it_now_before_it_drags_you_under/> says: “Our computers, he argued, have become more complex and less reliable.” Yes, they have become more complex, but no, they haven’t become less reliable. Consider this paper <https://www.bell-labs.com/usr/dmr/www/retro.ps> from 1977 by Dennis Ritchie, entitled “The UNIX Time-sharing System—A Retrospective”. It says “the typical period between software crashes (depending somewhat on how much tinkering with the system has been going on recently) is well over a fortnight of continuous operation”. Today, our Linux servers can have uptimes measured in months or even years.

I remember that in the 1990s there was a big gulf between the hardware reliability of commodity PCs and that of more expensive “workstations” or “servers”. Those were the days when powerful, multitasking, multiuser OSes with proper memory protection, like Linux, the BSDs, the ones carrying the “Unix” brand, and Windows NT, were just starting to be able to run on commodity PC hardware, and the result was a lot of hardware crashes. Nowadays that gulf in hardware reliability is gone. The main difference with servers is the inclusion of management features (e.g. hot-swap bays, remote consoles), not the quality of the hardware.
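For what it’s worth, those uptimes are easy to check for yourself. A minimal sketch, assuming a Linux box, reading the kernel’s /proc/uptime (the first field is seconds since boot; the day conversion is just for readability):

    #!/usr/bin/env python3
    # Sketch: report how long a Linux machine has been up,
    # by reading /proc/uptime (first field = seconds since boot).

    def uptime_days():
        with open("/proc/uptime") as f:
            seconds = float(f.read().split()[0])
        return seconds / 86400  # 86400 seconds per day

    if __name__ == "__main__":
        print(f"up {uptime_days():.1f} days")

The stock uptime(1) command reads the same file; the point is simply that month- and year-long figures are routinely observable, not anecdotal.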
Lawrence D'Oliveiro