
At the recent CES exhibition in Las Vegas, a lot of new hardware taking advantage of the new USB specs was on show <http://arstechnica.com/gadgets/2015/01/usb-3-1-and-type-c-the-only-stuff-at-ces-that-everyone-is-going-to-use/>. There are four quite separate parts to these specs:

* the USB 3.1 spec, supporting theoretical data rates up to 10Gb/s
* the USB Power Delivery spec, which can supply up to 100W to connected devices
* the new “Type C” connector, which has 4 separate physical data channels on it, including one for backward compatibility with USB 2.0
* USB Alternate Mode, which can repurpose some or all of the Type C channels to carry non-USB data (e.g. DisplayPort)

In principle, it is neat to be able to replace all the different kinds of ports on a machine with ports of a single kind. However, the article suggests that Alternate Mode could be a problem: it seems inevitable that different ports on a machine will be restricted to different Alternate Mode uses. This means that, instead of the current situation where a monitor connector is different from a USB connector so you cannot plug a cable for one into the other, you will have a situation where all the connectors and cables _are_ the same, but if you plug the wrong device into the wrong port, it just won’t work.

I remember an e-mail exchange with Stuart Cheshire a few years ago (he was the mastermind behind the Zeroconf initiative, which Apple implemented as “Rendezvous”—later renamed “Bonjour”—and which Linux users know as the “Avahi” packages), in which he said that there should indeed be just a single type of port on a computer, all speaking the same protocol. And that protocol should be TCP/IP. After all, why shouldn’t you run full TCP/IP between your keyboard, mouse and PC? The silicon it takes to implement the network stack shouldn’t amount to much of a cost these days. Of course, it should be IPv6, not IPv4...