
pretty much identical. A DEC Alpha, x86-32, x86-64, or Sun SPARC: they are all the same OS with slight hardware differences. I can push out a config to all of these machines and be reasonably confident that it will work on every one of them.
How many non-x86-32 machines do you admin? For interest's sake, does anyone here actually have a "production Linux server" (other than "look, our flat has an SGI coffee table") running something other than x86?
At the moment, none. However, in the past I have had to support Linux on non-x86 hardware, and I've had to help people with FreeBSD/Solaris/OpenBSD on Intel and other hardware. Given the option, having the same OS/distro running on all of them is a great benefit. (Ever tried killall -9 broken_program on Solaris? It's amusing.)
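For anyone who hasn't hit it: Solaris's killall(1M) is what shutdown uses to terminate every active process, so running it as root does exactly that. A minimal sketch of the portable alternative (pkill has shipped with Solaris since Solaris 7 and comes with procps on Linux):

    # On Linux (psmisc), this signals every process named broken_program:
    killall -9 broken_program

    # On Solaris, killall signals *everything* it can reach -- don't.
    # The spelling that behaves the same on both systems:
    pkill -9 -x broken_program    # -x: match the exact process name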
I'd like to see the BSD tiers applied to Debian, so that it goes stable on i386 first, and then the developers for the 1% of platforms can sort it out without holding back the other 99%. By admining a non-x86 machine you've already demonstrated what hopefully equates to "more than a clue"...
Then the other archs never release. What has held Debian releases back (from what I've seen) has usually been things like the install floppies, which haven't been ready on any platform.
I once saw the comment that Debian should rename their "versions":
Stable   -> Enterprise
Testing  -> Desktop
Unstable -> Developer
Enterprise doesn't just imply "stability". (More often than not, it doesn't imply that at all! Enterprises rarely have a single machine doing anything; the failure of any one part is fine as long as the cluster stays intact. How else do you think you get Windows uptimes longer than the interval between security patch releases?)
OK, "Enterprise" perhaps isn't the best term. What I'm describing is a platform that is static and unchanging: I can rely on it being the same tomorrow as it was yesterday. If I'm administering something, that is exactly what I want.
As for Testing, we go back to what Daniel said in a recent email: packages automatically move from unstable to testing after 10 days with no release-critical bugs. That means testing isn't much more than an old unstable, with just as many chances of a random bug that your desktop user might find after a month.
But if there is a release-critical bug, the package isn't pushed through to testing. Testing and unstable are usually VERY different; otherwise people would just run testing instead of unstable (so they don't get the packages that have bugs in them). The reason is that for something to move into testing it has to wait some number of days (usually 10, though it varies), everything it depends on must already be in testing, and everything required to build it must be in testing as well.
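If you're curious how far the two suites have drifted apart for a particular package, here's a quick sketch, assuming a box that lists both suites in its sources.list (the mirror and the package are just examples):

    # /etc/apt/sources.list carrying both suites:
    #   deb http://ftp.debian.org/debian testing  main
    #   deb http://ftp.debian.org/debian unstable main

    apt-get update
    apt-cache policy libc6    # prints one version entry per suite, so the
                              # gap between testing and unstable is obvious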
I feel bad about trusting servers to Fedora; we all moved away from Red Hat to Debian once upon a time anyway (hey, it was sold to our resident RHCE!), but the dissent is getting louder. I'd love to use White Box Enterprise Linux, the SRPM rebuild of RHEL3, but then you get the same "it's as unsupported as Debian is" issue.
I've been burnt one too many times by Red Hat's release cycle. Discovering you have to rewrite large applications because PHP now ships with register_globals off. Discovering that they've changed the C++ compiler to a version that only exists in Red Hat, so you need to recompile all your binary packages. Discovering that they've moved the directory something lives in, so your files are no longer being referenced. Discovering that they've deprecated something altogether, so it's gone. I've ridden the waves when Red Hat moves first. I remember the move to glibc (oops! nothing compiles any more!), and to PAM (oops, forgot to clean up a lock file), or .... I eventually got irritated enough to move. I'm always keen to hear of alternatives, or whether they've changed their ways enough to make me want to switch back.
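A rough pre-upgrade checklist for the sort of surprises above, nothing more (the paths and package names assumed here are the stock Red Hat ones):

    # What is register_globals actually set to on this box?
    grep -i register_globals /etc/php.ini

    # Which compiler, C++ runtime and libc am I about to be moved off?
    g++ --version
    rpm -q gcc gcc-c++ libstdc++ glibc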