[Fwd: End of Life for Red Hat Linux 9]

NOOOOOOOOOO this means another server rebuild... which is never a lot of fun. I'd love to use Debian if it wasn't so last year (if not even older); maybe this is my chance to find something else. Debian seems to be the shnizzle for keeping a box going through major system upgrades. I wonder how badly my Red Hat 9 machine will break doing a Fedora upgrade. At least I don't have to set up PPPoE again... -- Kyle Carter

* Kyle Carter <kyle(a)feet.net.nz> [2004-04-03 15:16]:
I'd love to use Debian if it wasn't so last year (if not even older); maybe this is my chance to find something else.
On a server, this matters.. why? The security updates are very current, even if the version numbers are not. I wouldn't run Debian on a desktop, but for servers it's my #1 choice by far. -- Regards, Aristotle "If you can't laugh at yourself, you don't take life seriously enough."

I used to agree, but after the couple of weeks I've had trying to get certain things working (maia mailguard, web stuff using PEAR, web stuff using freetype, etc.) I have decided that Debian needs to get with the program a bit more as far as up-to-date releases are concerned. A server is only as good as the applications you need to serve, and if they won't work due to out-of-date libraries, ancient versions of PHP, poorly packaged PEAR, whatever... then it's not a very good server platform. I was discussing this with Craig at work, and his idea is that perhaps Debian should stop trying to run on so many platforms, and instead focus on what Linux was originally intended to be - a Free Unix-alike on x86. x86 is what the majority of Linux users run, after all. Alternatively, perhaps they could take a leaf out of the BSDers' book, and have platform tiers, with certain platforms being more up-to-date than others. Any alternative suggestions for fixing this problem? G. ----- Original Message ----- From: "A. Pagaltzis" <pagaltzis(a)gmx.de> To: <wlug(a)list.waikato.ac.nz> Sent: Sunday, April 04, 2004 1:41 AM Subject: Re: [wlug] [Fwd: End of Life for Red Hat Linux 9]
* Kyle Carter <kyle(a)feet.net.nz> [2004-04-03 15:16]:
I'd love to use Debian if it wasn't so last year (if not even older); maybe this is my chance to find something else.
On a server, this matters.. why? The security updates are very current, even if the version numbers are not. I wouldn't run Debian on a desktop, but for servers it's my #1 choice by far.
-- Regards, Aristotle
"If you can't laugh at yourself, you don't take life seriously enough." _______________________________________________ wlug mailing list | wlug(a)list.waikato.ac.nz Unsubscribe: http://list.waikato.ac.nz/mailman/listinfo/wlug

I think the problem is probably contributed to by a few things. One thing that sticks out is the size of Debian. Isn't the next release coming on something like 13 CDs? That is just too damn large. I like the structure of Fedora more: having a "Core" which keeps moving forward at a good pace and is quite small, and having "Extras" that plug into the Core with little hassle. Hell, even FC could be smaller. It could drop all the desktop/X packages and have a Fedora Desktop set etc. That way you have sections of the whole OS advance at the pace that best suits that section. For example, Gnome and KDE have different release schedules, so why force the users of a distro to upgrade their desktop in lock step with the rest of the OS? Split the two out into separate releases/package sets and allow users to track them with yum/apt or whatever. Of course it's pointless discussing this in too much detail, as we don't have much (read: any) influence over distro packaging unless we get involved with those efforts. Regards

On Sun, 2004-04-04 at 06:50, Greig McGill wrote:
I used to agree, but after the couple of weeks I've had trying to get certain things working (maia mailguard, web stuff using PEAR, web stuff using freetype, etc.) I have decided that Debian needs to get with the program a bit more as far as up-to-date releases are concerned.
A server is only as good as the applications you need to serve, and if they won't work due to out of date libraries, ancient versions of php, poorly packaged PEAR, whatever...then it's not a very good server platform.
I was discussing this with Craig at work, and his idea is that perhaps Debian should stop trying to run on so many platforms, and instead focus on what Linux was originally intended to be - a Free Unix-alike on x86. x86 is what the majority of Linux users run, after all. Alternatively, perhaps they could take a leaf out of the BSDers' book, and have platform tiers, with certain platforms being more up-to-date than others.
Any alternative suggestions for fixing this problem?
G. ----- Original Message ----- From: "A. Pagaltzis" <pagaltzis(a)gmx.de> To: <wlug(a)list.waikato.ac.nz> Sent: Sunday, April 04, 2004 1:41 AM Subject: Re: [wlug] [Fwd: End of Life for Red Hat Linux 9]
* Kyle Carter <kyle(a)feet.net.nz> [2004-04-03 15:16]:
I'd love to use Debian if it wasn't so last year (if not even older); maybe this is my chance to find something else.
On a server, this matters.. why? The security updates are very current, even if the version numbers are not. I wouldn't run Debian on a desktop, but for servers it's my #1 choice by far.
-- Regards, Aristotle
"If you can't laugh at yourself, you don't take life seriously enough."
-- Oliver Jones » Director » oliver.jones(a)deeperdesign.com » +64 (21) 41 2238 Deeper Design Limited » +64 (7) 377 3328 » www.deeperdesign.com

Oliver Jones wrote:
I think the problem is probably contributed to by a few things. One thing that sticks out is the size of Debian. Isn't the next release coming on something like 13 CDs? That is just too damn large. I like the structure of Fedora more: having a "Core" which keeps moving forward at a good pace and is quite small, and having "Extras" that plug into the Core with little hassle. Hell, even FC could be smaller. It could drop all the desktop/X packages and have a Fedora Desktop set etc. That way you have sections of the whole OS advance at the pace that best suits that section. For example, Gnome and KDE have different release schedules, so why force the users of a distro to upgrade their desktop in lock step with the rest of the OS? Split the two out into separate releases/package sets and allow users to track them with yum/apt or whatever.
Of course its pointless discussing this in too much detail as we don't have much (read any) influence over distro packaging unless we get involved with those efforts.
However, on the flip side, with Debian I know that almost any software I have ever heard of is in Debian. I can "apt-get install whatever" and a few minutes later it's installed. When I ran Redhat, I was constantly irritated by having to find packages and install them manually. Debian was great because I no longer had to go and find all the stuff I needed, and keep it up to date with respect to security fixes.

People say that Linux has a "dependency hell" of trying to find the dependencies/versions for software that you want to install. These people must be running Slackware or Fedora, where you don't have the huge package base to install from. In Debian, if I needed to install the Redland RDF Parser, apt-get would install it and all of its dependencies, and then if a security flaw is found, it will be updated. Under Redhat (and now Fedora), anything extra I have to find and manage myself.

Debian is the only distribution that spans multiple architectures with almost every conceivable piece of software. There are almost no other distributions that deal with non-x86.

With my systems administrator's hat on, Debian stable is precisely what I want. It's stable and doesn't change. I'm not going to discover tomorrow that the machines I administer now run Exim 4 instead of Exim 3 and I have to relearn everything, and then the day after that everything is now compiled with a new version of g++, and that I now have to recompile any local software with the new compiler to talk to the new versions of the libraries. The systems I run are all pretty much identical. A DEC Alpha, x86-32, x86-64, Sun Sparc: they are all identical with slight hardware changes. I can push out a config to all of these machines and be reasonably confident that it will work on all of them.

With my programmer's hat on, Debian stable is precisely what I don't want. It's long out of date. So I run Debian unstable. It's pretty close to up to date. Once a week I do a dist-upgrade to see what the latest goodies are.
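That "install it and all of its dependencies" behaviour boils down to computing a transitive closure over the package metadata. Here is a toy sketch; the package names and dependency lists are invented for illustration, and real apt of course also handles versions, conflicts, and alternatives:

```python
# Toy sketch of recursive dependency resolution, roughly what
# "apt-get install" computes before fetching anything.
# All package names and dependency lists below are invented.
DEPENDS = {
    "redland-utils": ["librdf0", "libraptor1"],
    "librdf0": ["libraptor1", "libdb4.2"],
    "libraptor1": ["libxml2"],
    "libdb4.2": [],
    "libxml2": ["zlib1g"],
    "zlib1g": [],
}

def closure(pkg, seen=None):
    """Collect pkg plus everything it transitively depends on."""
    if seen is None:
        seen = set()
    if pkg not in seen:
        seen.add(pkg)
        for dep in DEPENDS.get(pkg, []):
            closure(dep, seen)
    return seen

# Ask for one package, get the whole chain automatically:
print(sorted(closure("redland-utils")))
```

The point of the message above is that in Debian this closure is nearly always satisfiable from the one huge archive, so the user never has to chase the chain by hand.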
Debian unstable (once I had fully upgraded) has never caused me any hassle. The upgrade from stable was pretty painful, but I suspect that was more my fault than anyone else's. I once saw the comment that Debian should rename their "versions" from:

Stable -> Enterprise
Testing -> Desktop
Unstable -> Developer

to better reflect the kinds of people that each type is targeted towards. I personally think this is very true, and quite nicely sums up how Debian works. Changing the way Debian works by making it release more often destroys it for the administrators who want things to Not Change. While it is excruciatingly frustrating having to deal with machines that are 2 years out of date, it's less frustrating than a system that has changed again by the time you've got used to the last set of changes.

When I ran Redhat, I was constantly irritated by having to find packages and install them manually. Debian was great because I no longer had to go and find all the stuff I needed, and keep it up to date with respect to security fixes.
To a certain extent Fedora is making this better. With a select few yum/apt repositories you get the same effect. Obviously, if there is no repository for an app then it can be more involved to install. Fedora certainly doesn't have the depth of apps that Debian enjoys yet, but this is purely a community-size thing. The more people who join the Fedora (and related) packaging efforts, the more packages there will be.
People say that Linux has a "dependency hell" of trying to find the dependencies/versions for software that you want to install. These people must be running Slackware or Fedora, where you don't have the huge package base to install from. In Debian, if I needed to install the Redland RDF Parser, apt-get would install it and all of its dependencies, and then if a security flaw is found, it will be updated. Under Redhat (and now Fedora), anything extra I have to find and manage myself.
Not entirely true anymore, though the further off the beaten track you go, the more this still tends to be the case. It's pretty rare nowadays that I go looking too far afield for apps I want on my desktop. Fedora is pretty young, and I hope that as time goes by more people will create more domain-specific yum/apt repositories for Fedora that plug in to the main Fedora Core and Fedora Extras repositories and don't overlap with other 3rd-party repositories. A good example of this is the one at rpm.livna.org, which provides most of your DVD/media-watching package requirements. One of the more irritating things I find is the general lack of willingness from app developers to plug into common distributions in a sane fashion. The quality of rpm packaging by software developers leaves something to be desired in many cases.
Debian is the only distribution that spans multiple architectures with almost every conceivable piece of software. There are almost no distributions that deal with non x86.
While this is great and commendable, 99% of people who run Linux use x86. The only real alternative to x86 for servers is PPC (Mac/IBM pSeries) or IBM System/390s. And if you're gonna run a Mac then just run Mac OS X; it's BSD, it can do most things your Linux box can. And if you're going to spend all that money to buy an UltraSparc you should be forced to run Solaris as penance. ;) The only other reason to run on some other esoteric platform is that you're collecting old and slow computers out of interest, or you're trying to give a new lease of life to your dilapidated SGI O2 that's now just running the printers. ;) Regards -- Oliver Jones » Director » oliver.jones(a)deeperdesign.com » +64 (21) 41 2238 Deeper Design Limited » +64 (7) 377 3328 » www.deeperdesign.com

Oliver Jones wrote:
/When I ran Redhat, I was constantly irritated by having to find packages and install them manually. Debian was great because I no longer had to go and find all the stuff I needed, and keep it up to date with respect to security fixes./
To a certain extent Fedora is making this better. With a select few yum/apt repositories you get the same effect. Obviously, if there is no repository for an app then it can be more involved to install. Fedora certainly doesn't have the depth of apps that Debian enjoys yet, but this is purely a community-size thing. The more people who join the Fedora (and related) packaging efforts, the more packages there will be.
Having a distribution that has a "base", and several add-on "repositories" that you could mix and match from, would be very nice, although I'm not entirely sure how practical it would be, given that you'd end up back in dependency hell, except instead of having dependency problems with individual packages, I've now got them between repositories (if I have say "base" + "Gnome" + "KDE", then if Gnome wants a newer version of some X library and KDE doesn't, then I'm hosed.)
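The conflict described here can be sketched as a toy model: two add-on repositories pin different versions of a library that the shared base only ships in one version. Repository names, package names, and version numbers below are all invented for illustration:

```python
# Toy model of the inter-repository conflict described above.
# All names and version numbers are invented.
base = {"libx-common": 1}            # the one version the base ships

wants = {
    "gnome": {"libx-common": 2},     # Gnome repo needs a newer libx
    "kde":   {"libx-common": 1},     # KDE repo matches what base has
}

def unsatisfiable(base, wants):
    """Repos whose pinned version isn't what the shared base provides."""
    return sorted(repo for repo, reqs in wants.items()
                  if any(base.get(pkg) != ver for pkg, ver in reqs.items()))

print(unsatisfiable(base, wants))    # ['gnome'] -- exactly the "hosed" case
```

With individual packages apt can often find a version that satisfies everyone; with whole repositories pinned against one shared base, there may be no such version at all.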
/People say that Linux has a "dependency hell" of trying to find the dependencies/versions for software that you want to install. These people must be running Slackware or Fedora, where you don't have the huge package base to install from. In Debian, if I needed to install the Redland RDF Parser, apt-get would install it and all of its dependencies, and then if a security flaw is found, it will be updated. Under Redhat (and now Fedora), anything extra I have to find and manage myself./
Not entirely true anymore, though the further off the beaten track you go, the more this still tends to be the case. It's pretty rare nowadays that I go looking too far afield for apps I want on my desktop.
Ahh yes. Now, I don't care much about my desktop. I want xterms, mozilla, xchat, tkabber and something to play music (rhythmbox/xmms/whatever). What I do care about is libraries for things I want to code for, and (mostly networking) programs. I've never really found Redhat to be anywhere near useful for this.
One of the more irritating things I find is the general lack of willingness from app developers to plug into common distributions in a sane fashion. The quality of rpm packaging by software developers leaves something to be desired in many cases.
I've only once or twice had people ask if I've got an "rpm" for a program I wrote. Almost always they want it for Mandrake (?), and for software I've got .spec files for, no one has ever commented on their quality (or at least, hasn't mentioned it to me). I'm also not particularly amused with the idea of having to package the same software for Debian/Redhat/Mandrake/Slackware/SUSE/etc. I package for what I use; if people want to package for other distros and send me a diff, I'm more than happy to include it. No one has seriously complained about this approach (other than people saying "You should package for other distros!" on principle). Mostly this is because, I suspect, I write software that people tend to want to compile by hand anyway.

Having a distribution that has a "base", and several add-on "repositories" that you could mix and match from, would be very nice, although I'm not entirely sure how practical it would be, given that you'd end up back in dependency hell, except instead of having dependency problems with individual packages, I've now got them between repositories (if I have say "base" + "Gnome" + "KDE", then if Gnome wants a newer version of some X library and KDE doesn't, then I'm hosed.)
Not true. The dependency graph should be like a tree, with dependencies going down and sideways only. I.e., KDE can only depend on a package either A) supplied by a sibling repository or B) supplied by the base repository.
Ahh yes. Now, I don't care much about my desktop. I want xterms, mozilla, xchat, tkabber and something to play music (rhythmbox/xmms/whatever). What I do care about is libraries for things I want to code for, and (mostly networking) programs. I've never really found Redhat to be anywhere near useful for this.
Well development is an entirely different kettle of fish. When you're developing an application you need to know what your target platform is. If your target platform doesn't provide dependencies directly then you need to either provide the dependencies statically or provide convenient ways of satisfying those dependencies to your target users. If your target users are Debian users then this isn't a problem. If you want your app to run on more than just Debian you have more work to do.
I've only once or twice had people ask if I've got an "rpm" for a program I wrote. Almost always they want it for mandrake (?) and for software I've got .spec files for, noone has ever commented on their quality (or at least, hasn't mentioned it to me).
Most people don't understand binary packaging issues well and as long as it just installs they tend to be satisfied.
I'm also not particularly amused with the idea of having to package the same software for Debian/Redhat/Mandrake/Slackware/SUSE/etc. I package for what I use; if people want to package for other distros and send me a diff, I'm more than happy to include it. No one has seriously complained about this approach (other than people saying "You should package for other distros!" on principle). Mostly this is because, I suspect, I write software that people tend to want to compile by hand anyway.
It is all about your target audience and how niche your app is. The more relevant the app is to a broad user base, and the better the user support a developer provides, the more likely the app will get used. OSS is no different from commercial software in this respect. Open source lets you pool effort, though. You're only interested in packaging for Debian because that is what you use. I, on the other hand, am more interested in RH (and derivatives). I generally can't be bothered with apps that I can't at least get a half-decent src RPM for. If you wanted to increase the ease of install for RedHat users you'd have to get someone on board the development team to help create good RPMs for your package and its dependencies. Regards -- Oliver Jones » Director » oliver.jones(a)deeperdesign.com » +64 (21) 41 2238 Deeper Design Limited » +64 (7) 377 3328 » www.deeperdesign.com

* Oliver Jones <oliver(a)deeper.co.nz> [2004-04-05 04:13]:
Having a distribution that has a "base", and several add-on "repositories" that you could mix and match from, would be very nice, although I'm not entirely sure how practical it would be, given that you'd end up back in dependency hell, except instead of having dependency problems with individual packages, I've now got them between repositories (if I have say "base" + "Gnome" + "KDE", then if Gnome wants a newer version of some X library and KDE doesn't, then I'm hosed.)
Not true. The dependency graph should be like a tree, with dependencies going down and sideways only. I.e., KDE can only depend on a package either A) supplied by a sibling repository or B) supplied by the base repository.
That's what he said. The twist he added is that different repositories may demand different versions of their dependencies in a shared parent repository. And then, indeed, he's hosed.
It is all about your target audience and how niche your app is. The more relevant the app is to a broad user base and the better the user support a developer provides is the more likely the app will get used.
That can't be argued about. See also <http://freshmeat.net/articles/view/992/>. -- Regards, Aristotle "If you can't laugh at yourself, you don't take life seriously enough."

Not true. The dependency graph should be like a tree, with dependencies going down and sideways only. I.e., KDE can only depend on a package either A) supplied by a sibling repository or B) supplied by the base repository.
That's what he said. The twist he added is that different repositories may demand different versions of their dependencies in a shared parent repository. And then, indeed, he's hosed.
Ah, but the point is that if they both use the same base repository they wouldn't have version conflicts, as there should only be one version of a dependency in the base repository. If the app required a different version of a dependent package than was in the base repository, then the child repository would supply it in a way suitable for installation alongside the other version. Or the base repository could also supply this dependency. This technique is used quite a bit these days, e.g.:

[oliver] luna:~$ rpm -qa |sort|grep autoconf
autoconf213-2.13-6
autoconf-2.57-3
[oliver] luna:~$ rpm -qa |grep libgal |sort
libgal21-0.23-1
libgal2-1.99.10-2
libgal23-0.24-2

His example wasn't a problem that couldn't be addressed easily given the appropriate packaging structure. Though I do think that giving packages names with version numbers is a little naff. I would prefer for the same-named rpms to just be installed side by side, which you can do fine as long as there are no file conflicts in the package. Which is why you can easily install multiple kernel and kernel-source rpms on a box, e.g.:

[oliver] luna:~$ rpm -qa |grep kernel|sort
kernel-2.4.20-27.9
kernel-2.4.20-28.9
kernel-2.4.20-30.9
kernel-2.4.22-1.2115.nptl
kernel-2.4.22-1.2174.nptl

Regards -- Oliver Jones » Director » oliver.jones(a)deeperdesign.com » +64 (21) 41 2238 Deeper Design Limited » +64 (7) 377 3328 » www.deeperdesign.com
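The "no file conflicts" rule mentioned above is why several kernel rpms coexist happily (every path is versioned) while autoconf needed a renamed autoconf213 package. A toy check, with invented file lists, makes the distinction concrete:

```python
# Toy version of the side-by-side install rule discussed above:
# two rpms can coexist only if their file lists don't collide.
# File paths here are invented for illustration.
kernel_a = {"/boot/vmlinuz-2.4.20-27.9", "/lib/modules/2.4.20-27.9/"}
kernel_b = {"/boot/vmlinuz-2.4.22-1.2115.nptl", "/lib/modules/2.4.22-1.2115.nptl/"}

autoconf_new = {"/usr/bin/autoconf", "/usr/share/autoconf/"}
autoconf_old = {"/usr/bin/autoconf", "/usr/share/autoconf-2.13/"}

def can_coexist(a, b):
    """Side-by-side install is safe iff no path is owned by both packages."""
    return not (a & b)

print(can_coexist(kernel_a, kernel_b))         # True: every path is versioned
print(can_coexist(autoconf_new, autoconf_old)) # False: both own /usr/bin/autoconf
```

When the check fails, the packager's options are exactly the ones Oliver describes: rename the package (autoconf213) or move the colliding files to versioned paths.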

Something that I forgot to address earlier...
pretty much identical. A DEC Alpha, x86-32, x86-64, Sun Sparc, they are all identical with slight hardware changes. I can push out a config to all of these machines and be reasonably confident that it will work on all of them.
How many non-x86-32 machines do you admin? For interest's sake, does anyone here actually have a "production Linux server" (other than "Look, our flat has an SGI coffee table") running something other than x86? I'd like to see the BSD tiers applied to Debian, so it goes stable on i386, and then the developers for the 1% of platforms can sort it out without holding back the other 99%. By admining a non-x86 machine you've already demonstrated what hopefully equates to "more than a clue"...
I once saw the comment that Debian should rename their "versions" from:

Stable -> Enterprise
Testing -> Desktop
Unstable -> Developer
Enterprise doesn't just imply "stability". (More often than not, it doesn't imply that at all! Enterprises rarely ever have one machine that does anything; failure of any part is fine as long as the cluster stays intact. How else do you think you get Windows uptimes longer than security patch releases?) Enterprise implies, to me, support for enterprise-class hardware and situations, working with enterprise vendors. It would mean you need a relative amount of commerciality about your distribution, or at the very least, people who are working with companies that sell hardware and software. While as a sysadmin who would rather run Debian, it bugs me that commercial Linux-friendly hardware vendors say "SuSE & Red Hat" these days, I see the point. For a Linux vendor to certify their product on Debian, even on Stable, means 13 different platforms, for 1% of the admins... etc. I'm told that commercial products on Red Hat don't even get support if you've recompiled the kernel; that doesn't mean you can't do it, but if you want to go back to the company with an issue, they expect you to at least make sure that the issue isn't caused by your kernel changes.

Testing: we go back to what Daniel said in a recent email; packages automatically go from unstable to testing in 10 days with no bugs. This means testing isn't much more than an old unstable, with just as many chances for there to be a random bug which your desktop user might find after a month.

I feel bad about trusting servers to Fedora; we all moved away from Red Hat to Debian once upon a time anyway (hey, it was sold to our resident RHCE!), but the dissent is getting louder. I'd love to use White Box Enterprise, the SRPM recompile of RHEL3, but then you get the same "it's as unsupported as Debian is" issue. Craig

I once saw the comment that Debian should rename their "versions" from:

Stable -> Enterprise
Testing -> Desktop
Unstable -> Developer
Enterprise doesn't just imply "stability" (More often than not, it doesn't imply that at all! Enterprises rarely ever have one machine that does anything; failure of any part is fine as long as the cluster stays intact, how else do you think you get Windows uptimes longer than security patch releases?).
Call it 'production' then, if enterprise isn't the right bit of jargon for the job. The point of the above comment is that debian stable is suitable for your internet facing systems. What it doesn't say is that it's a slow moving beast, and in some cases that's great, and in others it's not.
I feel bad about trusting servers to Fedora; we all moved away from Red Hat to Debian once upon a time anyway (hey, it was sold to our resident RHCE!), but the dissent is getting louder. I'd love to use White Box Enterprise, the SRPM-recompile of RHEL3, but then you get the same "it's as unsupported as Debian is" issue.
A lot of the shift from Redhat to Debian was because of apt. I wouldn't have moved away if apt-rpm as it is with FC1 had existed then. The shift was also at a time when up2date was in its infancy, and most of us didn't give it a fair chance. I put up with a stable potato release, but upgraded to testing (the nascent 3.0 woody release) when I needed newer packages on my home machine, and on my hosted server when it was "getting close" to completion (when you no longer had to upgrade glibc every week...)

What's this 'as unsupported as Debian is' issue you speak of? If it's the standard "who do you sue?" FUD perpetuated by the anti-FOSS types, then it's all a load of crap anyway, because for most software you couldn't "sue" the manufacturer anyway. The click-through license you ignore when you install most packages means you consent to giving up most of your rights with respect to damages incurred from use or misuse of their software, in whatever circumstances they arise.

If you're referring to a paid support network, who uses them anyway? Has anyone on this list made use of RH's pay-per-view support network, and have they been satisfied with the response they got? Was it worth what you paid? (I'm honestly interested, as I've never paid for support in this fashion. Any problems I've come across have been solvable with judicious use of google, friends, and sometimes a small amount of magic.) I guess I'm asking 'Is "it's unsupported" actually that much of a problem?'. And I don't want to hear "yes, because my boss won't like it"; I'd like to hear some *actual* examples of where it falls through.

A lot of the shift from Redhat to Debian was because of apt. I wouldn't have moved away if apt-rpm as it is with FC1 had existed then. The shift was also at a time when up2date was in its infancy, and most of us didn't give it a fair chance. I put up with a stable potato release, but upgraded to testing (the nascent 3.0 woody release) when I needed newer packages on my home machine, and on my hosted server when it was "getting close" to completion (when you no longer had to upgrade glibc every week...)
I must admit that I've become very fond of up2date and its ilk. I use yum now on my Fedora machines. Little point in using up2date, as it's kinda RHN-centric. Though it does provide a GUI for the neophytes.
If you're referring to a paid support network, who uses them anyway? Has anyone on this list made use of RH's pay-per-view support network, and have they been satisfied with the response they got? Was it worth what you paid? (I'm honestly interested, as I've never paid for support in this fashion. Any problems I've come across have been solvable with judicious use of google, friends, and sometimes a small amount of magic.)
Similarly I've never paid for support like this. What I don't mind paying for with RHEL is the (hopefully) sound knowledge that RH isn't about to just vanish and that I'll continue to get errata packages to fix security flaws and bugs in the release of RHEL I'm using for the next five years. That is pretty much all the support I need. Regards -- Oliver Jones » Director » oliver.jones(a)deeperdesign.com » +64 (21) 41 2238 Deeper Design Limited » +64 (7) 377 3328 » www.deeperdesign.com

* Oliver Jones <oliver(a)deeper.co.nz> [2004-04-05 04:23]:
What I don't mind paying for with RHEL is the (hopefully) sound knowledge that RH isn't about to just vanish and that I'll continue to get errata packages to fix security flaws and bugs in the release of RHEL I'm using for the next five years.
Of course you'd get that with Debian too, without a price tag attached. -- Regards, Aristotle "If you can't laugh at yourself, you don't take life seriously enough."

pretty much identical. A DEC Alpha, x86-32, x86-64, Sun Sparc, they are all identical with slight hardware changes. I can push out a config to all of these machines and be reasonably confident that it will work on all of them.
How many non-x86-32 machines do you admin? For interest's sake, does anyone here actually have a "production Linux server" (other than "Look, our flat has an SGI coffee table") running something other than x86?
At the moment, none. However, I have in the past had to support Linux on non-x86 hardware, and I've had to help people with FreeBSD/Solaris/OpenBSD on Intel/other hardware. And given the option, having the same OS/distro running on them all is a great benefit. (Ever tried killall -9 broken_program on Solaris? It's amusing.)
I'd like to see the BSD tiers applied to Debian, so it goes stable on i386, and then the developers for the 1% of platforms can sort it out without holding back the other 99%. By admining a non-x86 machine you've already demonstrated what hopefully equates to "more than a clue"...
Then the other archs never release. What holds Debian releases back (from what I've seen) has usually been things like the Install floppies, which haven't been ready on any platform.
I once saw the comment that Debian should rename their "versions" from:

Stable -> Enterprise
Testing -> Desktop
Unstable -> Developer
Enterprise doesn't just imply "stability" (More often than not, it doesn't imply that at all! Enterprises rarely ever have one machine that does anything; failure of any part is fine as long as the cluster stays intact, how else do you think you get Windows uptimes longer than security patch releases?).
Ok, Enterprise perhaps isn't the best term. What I see is a platform that is "static", "unchanging". I can rely on it being the same tomorrow as it was yesterday. If I'm administering something this is what I want.
Testing, we go back to what Daniel said in a recent email; packages automatically go from unstable to testing in 10 days with no bugs. This means testing isn't much more than an old unstable, with just as many chances for there to be a random bug which your desktop user might find after a month.
But, if there is a bug, then it's not pushed through to testing. Testing and unstable are usually VERY different; otherwise people would just run testing instead of unstable (so they don't get the packages that have bugs in them). The reason for this is that for something to move into testing it has to wait some number of days (usually 10, but it does vary), everything it depends on must be in testing, and everything required to build it must be in testing.
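Those migration rules can be sketched as a toy check. The package data here is invented, and Debian's real migration machinery considers much more (architectures, build status, release-critical bug counts per version, manual hints), but the gating logic is the same shape:

```python
# Toy version of the unstable-to-testing migration rule described above.
# All package names and numbers are invented for illustration.
testing = {"libfoo"}   # what's already in testing

candidates = {
    # name: (days in unstable, release-critical bugs, dependencies)
    "app-ready":   (12, 0, ["libfoo"]),
    "app-buggy":   (15, 2, ["libfoo"]),   # has RC bugs -> blocked
    "app-waiting": (3,  0, ["libfoo"]),   # too young -> blocked
    "app-blocked": (20, 0, ["libbar"]),   # dep not in testing -> blocked
}

def migrates(age, rc_bugs, deps, wait=10):
    """A package migrates only when aged, bug-free, and fully satisfiable."""
    return age >= wait and rc_bugs == 0 and all(d in testing for d in deps)

print(sorted(n for n, spec in candidates.items() if migrates(*spec)))
# -> ['app-ready']
```

The dependency condition is what keeps testing internally consistent, and also what makes it lag unstable by much more than the nominal 10 days.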
I feel bad about trusting servers to Fedora; we all moved away from Red Hat to Debian once upon a time anyway (hey, it was sold to our resident RHCE!), but the dissent is getting louder. I'd love to use White Box Enterprise, the SRPM-recompile of RHEL3, but then you get the same "it's as unsupported as Debian is" issue.
I've been burnt one too many times with Redhat's release cycle. Discovering you have to rewrite large applications because PHP now comes with register_globals off. Discovering that they've changed C++ compiler versions to a version that only exists in Redhat, so you need to recompile all your binary packages. Discovering that they've moved the directory of something, so that your files are no longer being referenced. Discovering that they've deprecated something altogether, so it's gone. I've ridden the waves when Redhat moves first. I remember moving to glibc (oops! Nothing compiles anymore!), and PAM (oops, forgot to clean up a lock file), or.... I eventually got irritated enough to move. I'm always keen to hear of alternatives, or whether they have changed their ways enough to make me want to switch back.

I'd just like to say thank you. This discussion is actually one of the best I've seen comparing distros, and this is something I may be spending a lot more time dealing with in the future. I like parts of Debian: the dist-upgrade, so you all of a sudden have a new distro, but with all your old stuff still there (if it keeps working). But as mentioned, stuff seems older than in something like Redhat. I've always used, and liked, Redhat; it's worked. There are the odd issues where they decide to do things differently to everyone else, but dealing with those situations is few and far between. I'm thinking I'll try a Fedora upgrade on my server (I'm using the term "server" fairly broadly, as it's not mission critical other than it being my border router / firewall / webserver for my home connection) and hopefully it won't break too much. Maybe Easter some stage; that gives me some time to fix the broken bits.

I'm thinking I'll try a Fedora upgrade on my server (I'm using the term "server" fairly broadly, as it's not mission critical other than it being my border router / firewall / webserver for my home connection)
On the "server" side, Fedora isn't really very different from RH9. FC2 will be a much more significant update with the inclusion of the 2.6 kernel and SELinux etc. The difference between RH9 and FC1 is like the difference between RH7.2 and RH7.3: just a refinement. On the desktop, though, it is more different. The inclusion of GNOME 2.4 over 2.2 for a start. And I'm waiting with bated breath for GNOME 2.6 in FC2, and even more interested in seeing GNOME 2.8 in six months with Evolution 2, further SVG integration and other goodies. Regards -- Oliver Jones » Director » oliver(a)deeper.co.nz » +64 (21) 41 2238 Deeper Design Limited » +64 (7) 377 3328 » www.deeperdesign.com

* Perry Lorier <perry(a)coders.net> [2004-04-04 12:12]:
(Ever tried killall -9 broken_program on solaris? It's amusing.)
Hee. Does Solaris have pkill? That's what I've been using on Linux for the longest time now -- no killall for me, thanks.
What I see is a platform that is "static", "unchanging". I can rely on it being the same tomorrow as it was yesterday. If I'm administering something this is what I want.
Yeah, exactly. A platform which is bug-/security-fixed forever, without actual software updates beyond what is backwards compatible with the status quo.
and everything it depends on must be in testing, and everything required to build it must be in testing.
Yep. Often overlooked in its impact. Things don't just drop down into testing all on their own.
Discovering you have to rewrite large applications because PHP now comes with register_globals off.
Which, in all fairness, is something that needed doing. But the fault is really PHP's more so than RedHat's, and that's a rant for another time. In all fairness, again, RedHat have made some brazen moves that I believe needed doing (switching to UTF-8 entirely was the latest one -- unfortunately it broke a lot of stuff); much like I applauded Apple for making the first computer entirely without legacy hardware ("no serial ports and no floppy disk?!?"). It's inconvenient at the time, but it has to be done at some point for the whole of the community/ business/ whateveryouwanttocallit to move forward. Of course, on a server, that's exactly what you *don't* want to happen. Which was my point, that Debian is top choice for a server, but not the desktop and not for developers' machines (which is where those brazen changes should happen first and frequently). -- Regards, Aristotle "If you can't laugh at yourself, you don't take life seriously enough."

A. Pagaltzis wrote: Before I reply, I'd like to thank you personally for all the time and effort you've put into our Wiki. We're in your debt :)
What I see is a platform that is "static", "unchanging". I can rely on it being the same tomorrow as it was yesterday. If I'm administering something this is what I want.
Yeah, exactly. A platform which is bug-/security-fixed forever, without actual software updates beyond what is backwards compatible with the status quo.
Yeah. I really like that there is still active development on the 2.0 tree; it means I can feel assured that if I develop anything for Linux, it will be supported practically forever. I wish distros would do this and fix security flaws in all previous versions of their software, not that I ever expect anyone to do so, for practical reasons.
Discovering you have to rewrite large applications because PHP now comes with register_globals off.
Which, in all fairness, is something that needed done. But the fault is really PHP's more so than RedHat's, and that's a rant for another time.
Yeah. However, it was right when Redhat were saying that they weren't going to support the old version anymore. "Upgrade or die!" which means that you suddenly have to rewrite a LOT of code.
In all fairness, again, RedHat have made some brazen moves that I believe needed doing (switching to UTF-8 entirely was the latest one -- unfortunately it broke a lot of stuff); much like I applaued Apple for making the first computer entirely without legacy hardware ("no serial ports and no floppy disk?!?"). It's inconvenient at the time, but it has to be done at some point for the whole of the community/ business/ whateveryouwanttocallit to move forward.
I'm really happy that someone is doing this. I love glibc, I love PAM, and I would love to see UTF-8 apps. John McPherson and I spent quite a while getting the wiki UTF-8 clean.
Of course, on a server, that's exactly what you *don't* want to happen. Which was my point, that Debian is top choice for a server, but not the desktop and not for developers' machines (which is where those brazen changes should happen first and frequently).
Exactly. I've not found a nice distro I like to code on yet, other than Debian Unstable. I keep hearing good things about Gentoo. Next time I have to reinstall my machine, I'll probably install Gentoo to give it a whirl.

Which, in all fairness, is something that needed doing. But the fault is really PHP's more so than RedHat's, and that's a rant for another time.
Yeah. However, it was right when Redhat were saying that they weren't going to support the old version anymore. "Upgrade or die!" which means that you suddenly have to rewrite a LOT of code.
That isn't entirely true. While I've never used register_globals in my own code, for 3rd party code that does use it you can always use "php_flag register_globals on" in .htaccess, or in httpd.conf for that virtual host or directory. PHP has always been very good at retaining backwards compatibility. As an aside, it's easy to spot shit 3rd party PHP: configure your server with register_globals off and safe mode on, and most PHP apps die in a hideous way. For a long time now I've coded my PHP sites with both of those settings in force. There are a number of options you should set up as par for the course with PHP, but that's something for a different discussion. Regards -- Oliver Jones » Director » oliver.jones(a)deeperdesign.com » +64 (21) 41 2238 Deeper Design Limited » +64 (7) 377 3328 » www.deeperdesign.com
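A hedged illustration of the per-directory override Oliver describes (the directory path and application are hypothetical; in PHP 4, register_globals could be changed per-directory, provided Apache's AllowOverride settings permit it):

```apacheconf
# Hypothetical httpd.conf fragment: re-enable register_globals for one
# legacy application only, leaving the server-wide default (off) alone.
<Directory "/var/www/legacy-app">
    php_flag register_globals on
</Directory>
```

The same php_flag line can go in that directory's .htaccess instead, which keeps the workaround next to the legacy code it exists for.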

* Perry Lorier <perry(a)coders.net> [2004-04-04 13:34]:
Before I reply, I'd like to thank you personally for all the time and effort you've put into our Wiki. We're in your debt :)
Thanks. :) And thanks to you guys for your work on the wiki -- it wouldn't have been fun to do all that work if there hadn't been a well filled knowledge base to begin with. -- Regards, Aristotle "If you can't laugh at yourself, you don't take life seriously enough."

Hee. Does Solaris have pkill? That's what I've been using on Linux for the longest time now -- no killall for me, thanks.
That's a really good point actually, and more people should be made aware of pkill.

pkill -u username

will kill all processes owned by that user.

pkill -u username programname

will kill all processes called programname owned by that user. So killing all of perry's bash processes is easy: pkill -u perry bash. If you just typed 'killall bash' you'd be likely to kill your own bash process as well!
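A self-contained sketch of the above, using a throwaway sleep process as the target (assumes procps-style pgrep/pkill; the -P $$ restriction is my own addition, so unrelated processes with the same name aren't matched):

```shell
# Start a dummy process to practise on.
sleep 300 &
target=$!

# pgrep lists matching PIDs without signalling anything;
# -x demands an exact command-name match.
pgrep -x sleep | grep -q "^${target}$" && echo "found it"

# pkill sends a signal (SIGTERM by default) to every match;
# -P $$ limits matches to children of this shell.
pkill -P $$ -x sleep

# Confirm the dummy process is gone.
wait "$target" 2>/dev/null
kill -0 "$target" 2>/dev/null || echo "gone"
```

The -x flag is worth a habit: without it, pkill matches on substrings, so "pkill sh" can hit far more than you intended.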

You learn something new every day. I don't think I've ever used pkill before. I've done annoying ps aux | grep user | cut ... | kill hacks to do the same thing. I'll keep that one in mind for when I next need to clean up after Evolution 1.4 on NFS shares before rebooting... :) Regards On Mon, 2004-04-05 at 08:48, Daniel Lawson wrote:
Hee. Does Solaris have pkill? That's what I've been using on Linux for the longest time now -- no killall for me, thanks.
That's a really good point actually, and more people should be made aware of pkill.
pkill -u username
will kill all processes owned by that user.
pkill -u username programname
will kill all processes called programname owned by that user.
So killing all of perry's bash processes is easy: pkill -u perry bash.
If you just typed 'killall bash' you'd be likely to kill your own bash process as well! _______________________________________________ wlug mailing list | wlug(a)list.waikato.ac.nz Unsubscribe: http://list.waikato.ac.nz/mailman/listinfo/wlug
-- Oliver Jones » Director » oliver.jones(a)deeperdesign.com » +64 (21) 41 2238 Deeper Design Limited » +64 (7) 377 3328 » www.deeperdesign.com

On 5/04/2004, at 2:37 PM, Oliver Jones wrote:
You learn something new every day. I don't think I've ever used pkill before. I've done annoying ps aux | grep user | cut ... | kill hacks to do the same thing. I'll keep that one in mind for when I next need to clean up after Evolution 1.4 on NFS shares before rebooting... :)
pgrep is another useful one ... Cheers, James

Oliver Jones wrote:
You learn something new every day. I don't think I've ever used pkill before. I've done annoying ps aux | grep user | cut ... | kill hacks to do the same thing. I'll keep that one in mind for when I next need to clean up after Evolution 1.4 on NFS shares before rebooting... :)
fuser -9 -k -v -m /mnt/nfs will send everything that is using /mnt/nfs deadly sig 9 :)

* Perry Lorier <perry(a)coders.net> [2004-04-05 05:09]:
fuser -9 -k -v -m /mnt/nfs
will send everything that is using /mnt/nfs deadly sig 9 :)
ObCaveat: you should not send SIGKILL unless the process refuses to react to friendlier signals. To quote the venerable Randal Schwartz: | No no no. Don't use kill -9. | | It doesn't give the process a chance to cleanly: | | 1) shut down socket connections | | 2) clean up temp files | | 3) inform its children that it is going away | | 4) reset its terminal characteristics | | and so on and so on and so on. | | Generally, send 15, and wait a second or two, and if that | doesn't work, send 2, and if that doesn't work, send 1. If | that doesn't, REMOVE THE BINARY because the program is badly | behaved! | | Don't use kill -9. Don't bring out the combine harvester just | to tidy up the flower pot. -- Regards, Aristotle "If you can't laugh at yourself, you don't take life seriously enough."
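The escalation Randal describes is easy to wrap in a shell function. A minimal sketch (the function name, signal order and two-second grace period are my own choices, assuming a POSIX shell):

```shell
# graceful_kill PID: send progressively ruder signals, giving the
# process time to clean up after each one. Returns 0 once the process
# is gone, 1 if it survived everything short of SIGKILL.
graceful_kill() {
    _pid=$1
    for _sig in TERM INT HUP; do
        kill -s "$_sig" "$_pid" 2>/dev/null || return 0  # already gone
        sleep 2                                          # grace period
        kill -0 "$_pid" 2>/dev/null || return 0          # it exited
    done
    return 1  # still alive; only now even think about kill -9
}
```

Something like graceful_kill 1234 || echo "stubborn process" keeps the combine harvester in the shed until it is genuinely needed.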

A. Pagaltzis wrote:
* Perry Lorier <perry(a)coders.net> [2004-04-05 05:09]:
fuser -9 -k -v -m /mnt/nfs
will send everything that is using /mnt/nfs deadly sig 9 :)
ObCaveat: you should not send SIGKILL unless the process refuses to react to friendlier signals. To quote the venerable Randal Schwartz:
| No no no. Don't use kill -9. | | It doesn't give the process a chance to cleanly: | | 1) shut down socket connections | | 2) clean up temp files | | 3) inform its children that it is going away | | 4) reset its terminal characteristics | | and so on and so on and so on. | | Generally, send 15, and wait a second or two, and if that | doesn't work, send 2, and if that doesn't work, send 1. If | that doesn't, REMOVE THE BINARY because the program is badly | behaved! | | Don't use kill -9. Don't bring out the combine harvester just | to tidy up the flower pot.
I was about to ask if there was a command that does all that automatically, for users who don't care about (or wouldn't understand) all that but just need to stop an errant program, but then I remembered ksysguard. I wonder how the kill button in ksysguard does it? g -- Glenn Ramsey <glenn(a)componic.co.nz> 07 8627077 http://www.componic.co.nz

Daniel Lawson wrote:
That's a really good point actually, and more people should be made aware of pkill.
pkill -u username
will kill all processes owned by that user.
pkill -u username programname
will kill all processes called programname owned by that user.
So killing all of perry's bash processes is easy: pkill -u perry bash.
If you just typed 'killall bash' you'd be likely to kill your own bash process as well!
/me deletes his trusty old killuser.pl script You learn something new every day... -- Matthias

I've been burnt one too many times with Redhat's release cycle. Discovering you have to rewrite large applications because PHP now comes with register_globals off. Discovering that they've changed C++
register_globals off.... thank fucking god. Though this was an upstream (PHP) change, I think, not RH.
compiler versions to a version that only exists in Redhat, so you need to recompile all your binary packages. Discovering that they've moved the directory of something, so that your files are no longer being referenced. Discovering that they've deprecated something all together so it's gone. I've ridden the waves when Redhat moves first. I remember moving to Glibc (oops! Nothing compiles anymore!), and PAM (Oops, forgot to clean up a lock file), or .... I eventually got irritated enough to move. I'm always keen to hear of alternatives, or if they have changed their ways enough to make me want to switch back.
I say stay with what you're happiest or most familiar with. I'm most familiar with RH (now Fedora), so I'm sticking with it, and in my opinion it has only gotten better with time. True, there have been a number of times when they have done things that break away from old behaviour, but in general it has always been for the betterment of the OS. I can live with change. It makes for a more interesting ride. Regards -- Oliver Jones » Director » oliver.jones(a)deeperdesign.com » +64 (21) 41 2238 Deeper Design Limited » +64 (7) 377 3328 » www.deeperdesign.com

* Oliver Jones <oliver(a)deeper.co.nz> [2004-04-05 04:27]:
True, there have been a number of times when they have done things that break away from old behaviour, but in general it has always been for the betterment of the OS. I can live with change. It makes for a more interesting ride.
As a developer, that's a most laudable stance. As an admin (whose boxen are mission critical), it's almost irresponsible. -- Regards, Aristotle "If you can't laugh at yourself, you don't take life seriously enough."

On Tue, 2004-04-06 at 01:33, A. Pagaltzis wrote:
* Oliver Jones <oliver(a)deeper.co.nz> [2004-04-05 04:27]:
True, there have been a number of times when they have done things that break away from old behaviour, but in general it has always been for the betterment of the OS. I can live with change. It makes for a more interesting ride.
As a developer, that's a most laudable stance. As an admin (whose boxen are mission critical), it's almost irresponsible.
Well, most of the boxes I admin either predate RHEL and have been running what they were installed with for a long, long time (e.g. RH7.1), or they are servers running RHEL, or desktops running FC1 or RH9. As a developer I also prefer a stable platform on which to deploy an application. Hence our deployment platform for the apps we're currently writing (in Java) is RHEL 2.1, as that was what was available at the time of the decision. Regards -- Oliver Jones » Director » oliver.jones(a)deeperdesign.com » +64 (21) 41 2238 Deeper Design Limited » +64 (7) 377 3328 » www.deeperdesign.com

My 0.000001 cents worth. /begin rant/ I think somewhere along the way the whole Linux community lost the plot. Back in the good old days you could get a decent Linux system running on a 386 with 8 megs of RAM and a 250MB HDD. Now it seems you need the latest and greatest Windows-type hardware specs just to install!!!! Now onto Debian. I like the fact that Debian is holding back on the idea of jumping forward and not trying to be one step ahead of mickysoft. What this does is allow you to make a fairly good system with whatever you have lying around, i.e. 286, 386 & 486 type systems. Why does a simple file server / gateway / mail / nntp / print server need to have high hardware specs, when you can put an OS on that does not need it? People need to drop the "update and upgrade for the sake of it" attitude and, like me, break away from the GUI dependency. In fact, I would like to challenge the Fedora team to make a distro that would work on the specs I have mentioned above, as THAT WOULD be a kick in the pants for mickysoft <grin>. /end rant/

People need to drop the "update and upgrade for the sake of it" attitude and, like me, break away from the GUI dependency.
If you want an old computer, run an old OS. For example, there's nothing that says you can't get a Pentium 100 (about the lowest common denominator of 'computer' these days, as opposed to 'historical novelty') and run an old version of Debian or RH on it, or Windows NT4 if you're that way inclined.
In fact, I would like to challenge the Fedora team to make a distro that would work on the specs I have mentioned above as THAT WOULD be a kick in the pants for mickysoft <grin>. /end rant/
Actually I think you'll find that 'Mickysoft' don't give a flying f__k about your old Pentium 100/486. They're out to sell software, and what sells software is people wanting something new and flash and useful which they didn't have before. People buy new PCs because they want something which their old PC didn't give them: the ability to do some new task (like play games, or support larger hard drives/more drives/more RAM, etc), or some existing task faster than what they had. Or just because their old PC is broken. They throw old PCs away, or off the roof at parties, or deal to them with a sledgehammer. The Linux distro community is also out to sell software. They don't charge for it, but the end goal is the same: to get their particular product deployed on some end user's computer. Aiming your target market at things which people are putting in the bin is - no matter which way you look at it - stupid. There, I feel satisfied that I have fed the troll. </endrant> :-) -- Orion ---------------------------------------------------------------- This message was sent using IMP, the Internet Messaging Program.

At 12:41 6/04/2004, you wrote:
People need to drop the "update and upgrade for the sake of it" attitude and, like me, break away from the GUI dependency.
If you want an old computer, run an old OS. For example, there's nothing that says you can't get a pentium 100 (about the lowest common denominator of 'computer' these days - as opposed to 'historical novelty') and run an old version of debian/RH/ or windows NT 4 if you're that way inclined on it.
It is all about putting something to good use. The "historical novelties" you are referring to can perform the same tasks as the "latest and greatest" system, but with about 1/10th of everything. It is also interesting to note that the latest RH and Debian do all the same things as the older versions, but need twice or more the resources to do it...
In fact, I would like to challenge the Fedora team to make a distro that would work on the specs I have mentioned above as THAT WOULD be a kick in the pants for mickysoft <grin>. /end rant/
Actually I think you'll find that 'Mickysoft' don't give a flying f__k about your old pentium 100/486. They're out to sell software, and what sells software is people wanting something new and flash and useful which they didn't have before.
Once again "updating for the sake of updating" applies; sure, there is some genuine new stuff that offers something more than just a fresh paint job. But can the same be said for the SERVER environment? Since you mentioned mickysoft, a lot of people are still running NT3.5 & NT4 on their OLD Pentium platforms as servers, and in some cases as workstations, and shock horror, mickysoft still support them.
People buy new PC's because they want something which their old PC didn't give them - the ability to do some new task (like play games, or support larger hard drives/more drives/more ram, etc), or some existing task faster than what they had. Or just because their old PC is broken. They throw old PC's away, or off the roof at parties, or deal to them with a sledgehammer.
True, I have tossed several Compaq workstations from the top of a tall building.
The linux distro community is also out to sell software. They don't charge for it, but the end goal is the same - to get their particular product deployed on some end user's computer. Aiming your target market at things which people are putting in the bin is - no matter which way you look at it - stupid.
Well, it is true for the end user, but I was talking about servers... And yes, putting a server you paid $8,000+ for 3-4 years ago in the bin is a bit stupid, just because the "latest and greatest" OS that does the same as what you have already got will not run on it...
There, I feel satisfied that I have fed the troll. </endrant>
Burp...
:-)
-- Orion

Once again "updating for the sake of updating" applies; sure, there is some genuine new stuff that offers something more than just a fresh paint job. But can the same be said for the SERVER environment?
Since you mentioned mickysoft, a lot of people are still running NT3.5 & NT4 on their OLD Pentium platforms as servers, and in some cases as workstations, and shock horror, mickysoft still support them.
Oh really? http://www.microsoft.com/windows/lifecycle/default.mspx And read the "see note" bit. Please stop with the unfounded guesswork.
Well, it is true for the end user, but I was talking about servers...
OK, if you want to take that approach... Name brand server hardware is supplied by most vendors with an expected lifetime of 3 years. Maybe 5 tops. After that, you expect a high level of hardware failures. From that angle alone, I'd not be bothering with old hardware in a server environment. Are you for real, or are you trolling? If the latter, please stop, or feel free to join #wlug, where we are eager and waiting to tear you a new one! :) -- Greig McGill

/begin rant/ Back in the good old days you could get a decent linux system running on a 386 with 8megs of RAM and 250MB of HDD.
Show me a 386 with 8 megs of RAM and a 250MB HDD. In the year and a half I've been at my job, the lowest-specced machine I have seen was a Pentium 75. I have thrown out/found new homes for about two dozen Pentium 2/Cyrix class machines. Hell, if you're that desperate for computing power, I have a P2-400 machine I can donate to you, or a Celeron 500 motherboard/PSU combo. Hell, there's someone on Trade Me right now with a 486SX33 for $0.50. Buy it and move on. Just because Linux can run on a 386 doesn't mean it -should-. Now, let's say for a second that you absolutely have to run Linux on a 386. Start at http://dilbert.physast.uga.edu/~andy/minilinux.html. You can run it off a floppy disc if you like. Just because you expect a modern OS to run on 15 year old hardware doesn't mean you have to hold everyone back!
I like the fact that debian is holding back on the idea of jumping forward and not trying to be one step ahead of mickysoft. What this does is allows you make a fairly good system with what ever you have lying around, i.e. 286, 386 & 486 type systems.
Linux does not, and never has, worked on a 286. I would expect Woody to install on a 486 (remember that it still uses the 2.2 kernel by default). Do a base install and nothing else.
People need to drop the "update and upgrade for the sake of it" attitude, and like me, brake away from the GUI dependency.
There is no gui dependency. You can install as little or as much as you need; if you don't need X on your gateway router, don't put it there. Snapgear routers run Linux on what is probably an embedded 486/K5 processor; you can run that on
In fact, I would like to challenge the Fedora team to make a distro that would work on the specs I have mentioned above as THAT WOULD be a kick in the pants for mickysoft <grin>.
The name of the company is "Microsoft". If you want people to have any respect for Linux, people like you need to make a conscious effort to drop the little slights against them. There are plenty of OSes that will work on the specs you mention above. Before Windows 3.0 there was a product called GeoWorks Ensemble. Look at the gallery at http://www.aci.com.pl/mwichary/guidebook/interfaces/geos/geoworks/gwe2. It ran on a 286, is still commercially available, and will probably still run on a 286. The Fedora team are not targeting throwaway hardware; they are targeting a usable desktop environment for modern tasks on modern hardware. If you don't have much RAM but still want modern performance, get a smaller X server, use Fluxbox or XFCE instead of GNOME or KDE etc, and you're there. Don't complain about progress, just ignore it. Linux is all about that choice. You have the source; go rip out all the stuff you don't want. However, it doesn't do a lot of things you might like to do these days; you know, web, email etc. If you don't care, keep running it. If you do care, you might just have to upgrade. What, no Doom 3 on my XT? Craig

I had several early kernels from the 1.x range that ran on an XT. I also had an early 2.x kernel that would run on a 286. I was running a full NAT gateway with SMTP, NNTP and various other mail-type services on such a system back when Linux was still an "educational development" OS. Once again I get the hint that you are talking about a desktop environment, not a server environment? Also, I am not trying to compare Linux with Windows, as there is no point; they both do different things for different people, and it is only in the eyes of others that they are compared, because they can do the same basic tasks. Also, why should it not run on a 386, when in some situations, as you mention, a 386 is just spot on for the job? Even if you ignore the GUI, Fedora still needs 16M of RAM minimum to support a command prompt. As in most cases, people are in too much of a hurry to charge forward, and as a result things get broken. In the original thread that was the gist of what was being discussed, and I am pointing out what I think is one of the major driving forces behind the charge forward. And from what I have seen in replies it has confirmed my assumption. At 13:23 6/04/2004, you wrote:
/begin rant/ Back in the good old days you could get a decent linux system running on a 386 with 8megs of RAM and 250MB of HDD.
Show me a 386 with 8megs of RAM and a 250mb HDD. In the year and a half I've been at my job, the lowest specced machine I have seen was a Pentium 75. I have thrown out/found new homes for about two dozen Pentium 2/Cyrix class machines. Hell, if you're that desperate for computing power, I have a P2-400 machine I can donate to you, or a Celeron 500 motherboard/PSU combo.
Hell, there's someone on Trade Me right now with a 486SX33 for $0.50. Buy it and move on. Just because Linux can run on a 386 doesn't mean it -should-.
Now, let's say for a second that you absolutely have to run Linux on a 386. Start at http://dilbert.physast.uga.edu/~andy/minilinux.html. You can run it off a floppy disc if you like. Just because you expect a modern OS to run on 15 year old hardware doesn't mean you have to hold everyone back!
I like the fact that debian is holding back on the idea of jumping forward and not trying to be one step ahead of mickysoft. What this does is allows you make a fairly good system with what ever you have lying around, i.e. 286, 386 & 486 type systems.
Linux does not and has never worked on a 286.
I would expect Woody to install on a 486 (remember that it still uses the 2.2 kernel by default). Do a base install and nothing else.
People need to drop the "update and upgrade for the sake of it" attitude, and like me, brake away from the GUI dependency.
There is no gui dependency. You can install as little or as much as you need; if you don't need X on your gateway router, don't put it there. Snapgear routers run Linux on what is probably an embedded 486/K5 processor; you can run that on
In fact, I would like to challenge the Fedora team to make a distro that would work on the specs I have mentioned above as THAT WOULD be a kick in the pants for mickysoft <grin>.
The name of the company is "Microsoft". If you want people to have any respect for Linux, people like you need to make a conscious effort to drop the little slights against them.
There are plenty of OS's that will work on the specs you mention above. Before Windows 3.0 there was a product called GeoWorks Ensemble. Look at the gallery at http://www.aci.com.pl/mwichary/guidebook/interfaces/geos/geoworks/gwe2. It ran on a 286 and is still commercially available, and will probably still run on a 286.
The Fedora team are not targeting throwaway hardware, they are targeting a usable desktop environment for modern tasks on modern hardware. If you don't have much RAM but still want modern performance, get a smaller X server, use FluxBox or XFCE instead of GNOME or KDE etc, and you're there. Don't complain about progress, just ignore it. Linux is all about that choice. You have the source, go rip out all the stuff you don't want.
However, it doesn't do a lot of things you might like to do these days; you know, web, email etc. If you don't care, keep running it. If you do care, you might just have to upgrade. What, no Doom 3 on my XT?
Craig

DrWho? wrote:
I had several early kernels from the 1.x range that ran on an XT.
I also had an early 2.x kernel that would run on a 286. I was running a full NAT gateway with SMTP, NNTP and various other mail type services on such a system back when Linux was still an "educational development" OS.
Linux kernels? The Linux kernel (right back to 0.1) is totally incapable of running on anything less than a 386. Perhaps you're thinking of Minix?

Have a look at KA9Q NOS by Phil Karn. He did one of the early 1.x kernels for an XT; in fact, I think his memory management was put into the kernel tree at the time. Not long after that, a fork from the KA9Q NOS was done for a 286 using the 2.0 kernel tree. A single-floppy type job, but a pain to put on HDD. Since this was all done in the really early days, not much of it has survived, except on the old HDD tucked away in the back shed or in old ham radio publications of the time. I believe a few are still running on hilltops in the US, with the original hardware acting as gateway nodes. The NAT was very good; pity no one really saw the benefit of it at the time, and it was very tricky to set up with the AX25 bridging. Later forks, i.e. JNOS and TNOS etc., became part of the ham radio features of the kernel. At 14:21 6/04/2004, you wrote:
DrWho? wrote:
I had several early kernels from the 1.x range that ran on an XT.
I also had an early 2.x kernel that would run on a 286. I was running a full NAT gateway with SMTP, NNTP and various other mail type services on such a system back when Linux was still an "educational development" OS.
Linux kernels?
The Linux kernel (right back to 0.1) is totally incapable of running on anything less than a 386. Perhaps you're thinking of Minix?

Have a look at KA9Q NOS by Phil Karn.
For those that don't know, KA9Q was a TCP/IP stack for DOS (and apparently Amiga too, thanks Mandrake). I looked into it for BBS usage at one point, back in the when. Check out http://www.ka9q.net/code/ka9qnos/ for a bit of information.
Since this was all done in the real early days, not much of it has survived, except on the old HDD tucked away in the back shed or in old ham radio publications of the time.
How convenient :)

It was done for a few more platforms than just Amiga. Yes, it ran on DOS; in fact that was where it started out, with bits lifted from the Linux kernel of the time. The XT Linux version I was looking at was based on the project mentioned here: http://groups.google.co.nz/groups?hl=en&lr=&ie=UTF-8&threadm=4ngsi3INNd15%40seurat.syd.dit.csiro.au&rnum=4&prev=/groups%3Fq%3D%252Blinux%2B%252Bxt%2B%252Bka9q%26hl%3Den%26lr%3D%26ie%3DUTF-8%26selm%3D4ngsi3INNd15%2540seurat.syd.dit.csiro.au%26rnum%3D4 It was a bugger getting the network card going!!! And no, it was not a Minix clone... Someone in Finland took over the ham radio port and integrated the KA9Q package into it, but all the URLs that point to it are dead. The 286 version was based on the ELKS project, found here: ftp://ftp.ecs.soton.ac.uk/pub/elks/ It was done by a German chap... and the docs were very hard to understand. At 14:50 6/04/2004, you wrote:
Have a look at KA9Q NOS by Phil Karn.
For those that don't know, KA9Q was a TCP/IP stack for DOS (and apparently Amiga too, thanks Mandrake). I looked into it for BBS usage at one point, back in the when. Check out http://www.ka9q.net/code/ka9qnos/ for a bit of information.
Since this was all done in the real early days, not much of it has survived, except on the old HDD tucked away in the back shed or in old ham radio publications of the time.
How convenient :)

DrWho? wrote:
It was done for a few more platforms than just Amiga.
Yes it ran on DOS, in fact that was where it started out, with bits lifted from the Linux Kernel of the time.
The XT linux version I was looking at was based on the project mentioned here: http://groups.google.co.nz/groups?hl=en&lr=&ie=UTF-8&threadm=4ngsi3INNd15%40seurat.syd.dit.csiro.au&rnum=4&prev=/groups%3Fq%3D%252Blinux%2B%252Bxt%2B%252Bka9q%26hl%3Den%26lr%3D%26ie%3DUTF-8%26selm%3D4ngsi3INNd15%2540seurat.syd.dit.csiro.au%26rnum%3D4
That's not the 'standard' Linux kernel. It's only fair that you compare it against a similarly cut-down Microsoft product, WinCE or perhaps MSDOS, which is capable of running on much lower-spec hardware than the standard XP desktop, obviously.

I don't think this discussion is actually heading towards any useful outcome, other than perhaps that of nostalgia.

We are in a situation where we have an otherwise good distribution (Debian) that is not keeping up with current releases in a stable and reliable fashion, simply because the depth/breadth of the distribution is so great that fixing all the release-critical bugs for all architectures is such a massive task that releases can only (practically) happen every couple of years. We have a second distribution, possibly at the other end of (at least one of) the scale(s), which, due to its lack of depth/breadth, is able to stick to a much tighter release cycle.

But it doesn't stop here. Both distributions have a 'length', if you will: the length of time for which they are practically runnable on a system. The greater this 'length' parameter, the bigger the range of specs the distribution will support. Now, the crux of the matter is that if we take these three parameters, possibly more, and work out the net volume of the distribution, we can approximate the amount of work required to get the distribution to that state. You can imagine that Debian requires a huge amount of work, as it has a greater length, depth and breadth than Fedora; however, this does not mean that it is a better distribution.

In order to work out the most suitable distribution to use, you must consider what parameters you require from that distribution. Most users, who have reasonably up-to-date PCs, require a reasonably up-to-date distribution, with moderate application depth and very little need to run on antiquated hardware. Now, I concede there are several obvious flaws in my metrics, but I maintain they describe the problem well enough to explain the major constraints affecting operating platforms (OS + software). Let's cut the crap: Fedora is not going to run on a 386; it operates in a completely different part of the usage space.
The majority of Linux users would consider it an utter travesty if Fedora were to make the colossal effort required to have the OS run on hardware that is equally well served by other distributions, rather than spend the time keeping up with current hardware releases and enhancing the performance of existing drivers, etc. You must realise that advancing is where the operating system will generate the greater value, not only to the new users who have a nice stable system, but to the existing userbase, who are able to reap the benefits of a broader userbase. Hell, you might even get ncurses/readline versions of the new and improved applications written for the expanded userbase!

In short, if Debian wants to keep up to date, they need to either:
a) reduce the package count
b) reduce the architecture count
c) increase work input (more developers)
d) reduce support for older hardware; however, this is somewhat tied in with architecture support, and is less of a burden on its own.

Pretty simple stuff, and already suggested iirc. Regards James

Also, why should it not run on a 386? As you mention, in some situations a 386 is just spot on for the job.
There are plenty of Linux distributions that will do this. It's neither here nor there if Debian doesn't (I have no idea whether it will or not either). Linux is Linux. It doesn't matter how it's branded; it's still the same thing. Linux *will* run on an embedded 386 processor, and it *will* run on a quad 3 GHz processor or an IBM mainframe. What you can do with Linux running on those different platforms is another issue entirely.

If you want to run a caching proxy server on a 386 with 8 MB of RAM and a 300 MB 3200 rpm HDD, go ahead, but it'll be faster to use no proxy at all unless your Internet connection is a 14k4 dialup. If you want to run a web, email, domain and file sharing server for a network of 50 machines, you can use a 386 with 8 MB if you like, although I think you'll find it dies very shortly after the third or so client connects. If you want to run an environmental monitor that sends notifications when the temperature or humidity goes past a certain threshold, on a 386 with 8 MB of RAM, then go ahead, it'll do fine. If you want to run a text login terminal off the same 386, then go ahead. It'll do fine.
Even if you ignore the GUI on Fedora it still needs 16M of RAM min. to support a command prompt..
So don't use Fedora. What's the problem here? FC1 is a recent distro aimed at recent hardware. There are plenty of other distributions that cater to older hardware. Go use them. It's still Linux.
In the original thread that was the gist of what was being discussed, and I am pointing out what I think is one of the major driving forces behind the charge forward. And from what I have seen in replies it has confirmed my assumption.
Certainly the support for newer hardware is a driving force; however, it doesn't in any way limit the lower end of usable computers. What *does* limit the lower end of usable computers is what you want to do with them.

The original discussion wasn't actually anything to do with hardware, or about resource use. It was about versions of software. Resource use is incidental, and for the most part completely ignored. Why is that? Because most machines will run everything anyway, just with different performance. Computers are ridiculously powerful these days, and for most applications they are far more than is needed.

To go back to your discussion, you keep referring to servers, and yet bring up this 386 as an example. There are a couple of very good reasons I wouldn't use a 386 as a server unless I couldn't avoid it: reliability and performance. I'm more than willing to put in old hardware where appropriate -- and I'll run the latest Debian release on it as well, and it'll run just fine -- but I'm realistic about expectations. I don't want to keep replacing someone's 486 firewall every time it dies. This doesn't mean there aren't uses for old hardware, and it also doesn't mean that you can't use the old hardware as you like. So you can't install FC1 on a 386. Big deal. Use another distribution. Linux is Linux. It doesn't matter what name it comes under.

On Tue, 2004-04-06 at 14:02, DrWho? wrote:
I had several early kernels from the 1.x range that ran on an XT.
Bullshit. Until uCLinux, Linux _required_ an MMU. An XT does not have an MMU. Get your facts straight.
I also had an early 2.x kernel that would run on a 286. I was running a
Utter twaddle. See previous rebuttal. Regards -- Oliver Jones » Director » oliver.jones(a)deeperdesign.com » +64 (21) 41 2238 Deeper Design Limited » +64 (7) 377 3328 » www.deeperdesign.com

DrWho? wrote:
I had several early kernels from the 1.x range that ran on an XT.
I'm curious about this. The mainstream Linux kernel has always required an MMU, which means it has always required a 386 or better. ELKS and uCLinux run on machines that don't have an MMU; however, they are severely crippled and didn't appear until much later. The original post announcing Linux's availability (http://groups.google.com/groups?selm=1991Oct5.054106.4647%40klaava.Helsinki....) mentions that it requires a 386. The 1.0 CHANGES file (http://www.kernel.org/pub/linux/kernel/v1.0/CHANGES) doesn't mention support for any other processors. I believe Linux 1.2 was the first ("stable") version to offer support for non-386-class machines (it supported the DEC Alpha).
I also had an early 2.x kernel that would run on a 286. I was running a full NAT gateway with SMTP, NNTP and various other mail type services on such a system back when Linux was still an "educational development" OS.
The official 2.x kernels still required a 386.

I'm sure Soekris PCs aren't that much more powerful than 486s .. and they have how many MB of RAM ? .. oh and they run on how many MB flash cards ? If you want to see minimal Linux installs, ask any Soekris biscuit board owner about theirs..
-----Original Message----- From: DrWho? [mailto:x_files_(a)ihug.co.nz] Sent: Tuesday, April 06, 2004 12:24 PM To: Waikato Linux Users Group Subject: Re: [wlug] Renaming Debian platforms
My 0.000001 cents worth.
/begin rant/ I think somewhere along the way the whole linux community lost the plot.
Back in the good old days you could get a decent linux system running on a 386 with 8megs of RAM and 250MB of HDD.
Now it seems you need the latest and greatest windows type hardware specs to just install!!!!
Now onto debian.
I like the fact that debian is holding back on the idea of jumping forward and not trying to be one step ahead of mickysoft. What this does is allow you to make a fairly good system with whatever you have lying around, i.e. 286, 386 & 486 type systems.
Why does a simple file server / gateway / mail / nntp / print server need to have high hardware specs, when you can put on an OS that does not need them?
People need to drop the "update and upgrade for the sake of it" attitude and, like me, break away from the GUI dependency.
In fact, I would like to challenge the Fedora team to make a distro that would work on the specs I have mentioned above as THAT WOULD be a kick in the pants for mickysoft <grin>. /end rant/

On Tue, 6 Apr 2004, Drew Broadley wrote:
I'm sure Soekris PCs aren't that much more powerful than 486s
486-class 100MHz embedded AMD processor
.. and they have how many Mb RAM ?
64Mb
.. oh and they run on how many Mb FlashCards ?
We run 64Mb (using ~16 - 20mb of it atm)
If you want to see minimal linux installs, ask any soekris biscuit board owner about theirs..
We run a very minimal install, but what software we do run is very close to the latest version. I agree with Drew on this: you can still run the latest versions of many services and applications even on machines as minimal as these; you just pick appropriate things to do on a machine this small. Jamie

Drew Broadley wrote:
I'm sure Soekris PCs aren't that much more powerful than 486s .. and they have how many MB of RAM ? .. oh and they run on how many MB flash cards ?
If you want to see minimal linux installs, ask any soekris biscuit board owner about theirs..
The Soekris machines we use here have 64MB of memory, and we use 64MB flash cards. They are AMD Elans (486-ish) running at 133MHz. They run a custom distribution which we developed, based very loosely on Debian. The flash cards (IIRC) are only about 50% full. Most of the slimming came from using busybox for almost everything, and stripping the 90MB glibc down (glibc itself is ~1MB, but the locale/zoneinfo/etc files total about 90MB). While it took quite some effort to strip down Linux for these machines, I wouldn't expect a standard distribution to run on them. A Soekris doesn't have a VGA display, nor a keyboard/mouse (it's serial console only). Because they are running off flash, we make considerable effort to keep the file system mounted read-only at all times. You can see a Soekris here: http://www.crc.net.nz/gallery/rch_install/DSC02841
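To make the glibc numbers above concrete, the trimming involved looks roughly like the sketch below. These commands are illustrative only, not the actual CRCnet build scripts, and the timezone chosen is just an example.

```sh
# Most of a "90MB glibc install" is support data, not the library itself:
du -sh /lib/libc-*.so /usr/share/locale /usr/share/zoneinfo

# On an appliance image, keep only what you need, e.g.:
rm -rf /usr/share/locale/*          # the C/POSIX locale still works
cp /usr/share/zoneinfo/Pacific/Auckland /etc/localtime
rm -rf /usr/share/zoneinfo

# And run the flash root read-only, remounting rw only for upgrades:
mount -o remount,ro /
```

Keeping the root read-only is what protects the flash from both wear and corruption on power loss; anything that must be writable (logs, /tmp) goes to a ramdisk instead.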

My 0.000001 cents worth. /begin rant/ I think somewhere along the way the whole Unix community lost the plot. Back in the good old days you could get a decent Unix system running on a PDP-7 with 128k of RAM and an 8" floppy disk. Now it seems you need the latest and greatest Sun Microsystems type hardware specs just to install!!!! Now onto System III. I like the fact that DEC is holding back on the idea of jumping forward and not trying to be one step ahead of Sun. What this does is allow you to make fairly good systems with whatever you have lying around, i.e. PDP-7, PDP-10, and PDP-11 type systems. Why does a simple TTY server / uucp gateway / print server need to have high hardware specs, when you can put on an OS that does not need them? People need to drop the "update and upgrade for the sake of it" attitude and, like me, break away from the Teletype dependency. In fact, I would like to challenge the System V team to make a distro that would work on the specs I have mentioned above as THAT WOULD be a kick in the pants for Sun <grin>. /end rant/ On Tue, 2004-04-06 at 12:23, DrWho? wrote:
My 0.000001 cents worth.
/begin rant/ I think somewhere along the way the whole linux community lost the plot.
Back in the good old days you could get a decent linux system running on a 386 with 8megs of RAM and 250MB of HDD.
Now it seems you need the latest and greatest windows type hardware specs to just install!!!!
Now onto debian.
I like the fact that debian is holding back on the idea of jumping forward and not trying to be one step ahead of mickysoft. What this does is allow you to make a fairly good system with whatever you have lying around, i.e. 286, 386 & 486 type systems.
Why does a simple file server / gateway / mail / nntp / print server need to have high hardware specs, when you can put on an OS that does not need them?
People need to drop the "update and upgrade for the sake of it" attitude and, like me, break away from the GUI dependency.
In fact, I would like to challenge the Fedora team to make a distro that would work on the specs I have mentioned above as THAT WOULD be a kick in the pants for mickysoft <grin>. /end rant/
______________________________________________________________________
-- Oliver Jones » Director » oliver.jones(a)deeperdesign.com » +64 (21) 41 2238 Deeper Design Limited » +64 (7) 377 3328 » www.deeperdesign.com

I feel bad about trusting servers to Fedora; we all moved away from Red Hat to Debian once upon a time anyway (hey, it was sold to us by our resident RHCE!), but the dissent is getting louder. I'd love to use White Box Enterprise, the SRPM recompile of RHEL3, but then you get the same "it's as unsupported as Debian is" issue.
I say if your customers or budget can support it use RHEL. -- Oliver Jones » Director » oliver.jones(a)deeperdesign.com » +64 (21) 41 2238 Deeper Design Limited » +64 (7) 377 3328 » www.deeperdesign.com

Don't panic too much about RH9 being "End of Lifed" by Red Hat. The Fedora Legacy project will pick up security fix packaging duties; they do the same for RH 7.2, 7.3 & 8.0. Check out www.fedoralegacy.org. Or alternatively, switch to Fedora Core 1. The upgrade process from RH9 is pretty painless. Regards A. Pagaltzis wrote:
* Kyle Carter <kyle(a)feet.net.nz> [2004-04-03 15:16]:
id love to use debian if it wasnt so last year (if not even older) maybe this is my chance to find something else.
On a server, this matters.. why? The security updates are very current, even if the version numbers are not. I wouldn't run Debian on a desktop, but for servers it's my #1 choice by far.

If your RH9 box doesn't use non-Red Hat packages it'll be a pretty simple exercise. I did one by hand using yum, and I posted about the experience on the list. I fucked it up though, and it took me ages to recover, but I've been running it since with little trouble. Regards On Sat, 2004-04-03 at 21:38, Kyle Carter wrote:
NOOOOOOOOOO
this means another server rebuild... which is never a lot of fun.
id love to use debian if it wasnt so last year (if not even older) maybe this is my chance to find something else.
debian seems to be the shnizzle for keeping a box going through major system upgrades..
i wonder how badly my redhat 9 machine will break doing a fedora upgrade
at least i dont have to setup pppoe again...
-- Oliver Jones » Director » oliver.jones(a)deeperdesign.com » +64 (21) 41 2238 Deeper Design Limited » +64 (7) 377 3328 » www.deeperdesign.com
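For anyone attempting the same jump, the by-hand yum upgrade described above went roughly like the sketch below. This is a reconstruction of the general procedure, not Oliver's actual commands; the package file names are placeholders for whatever the mirror carries.

```sh
# Sketch of an in-place RH9 -> Fedora Core 1 upgrade via yum.
# Back up first, and do this from a text console, not from inside X.

# Replace the RH9 release and yum packages with the FC1 ones
# (fetched from your nearest Fedora mirror):
rpm -Uvh fedora-release-*.noarch.rpm yum-*.noarch.rpm

# yum now reads the FC1 repository config; pull the whole system forward:
yum upgrade

# Afterwards: confirm a new kernel was installed, check that grub/lilo
# points at it, reboot, and weed out orphaned RH9 packages by hand.
```

The failure Oliver mentions is exactly why the backup matters: if yum dies mid-transaction you can be left with a half-upgraded rpm database to untangle.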

Kyle Carter wrote:
id love to use debian if it wasnt so last year (if not even older) maybe this is my chance to find something else.
Now, hang on. When we talk about "debian", do we mean stable, testing or unstable? Other distros seem comparable to unstable, as opposed to stable, where packages are tested for months (or years) before inclusion.

Well, I assumed it went without saying that no one would run a production server on anything other than stable... Clearly not. Assume any ref. to "Debian" from me is actually referring to "Debian stable (woody)". G. ----- Original Message ----- From: "Jason Le Vaillant" <jfl2(a)myrealbox.com> To: "Waikato Linux Users Group" <wlug(a)list.waikato.ac.nz> Sent: Sunday, April 04, 2004 1:33 PM Subject: Re: [wlug] [Fwd: End of Life for Red Hat Linux 9]
Kyle Carter wrote:
id love to use debian if it wasnt so last year (if not even older) maybe this is my chance to find something else.
Now, hang on. When we talk about "debian", do we mean stable, testing or unstable?
Other distros seem comparable to unstable, as opposed to stable, where packages are tested for months (or years) before inclusion.

Greig McGill wrote:
Well, I assumed it went without saying that no one would run a production server on anything other than stable...
Well, that is puzzling, because people are talking about upgrading to the latest Fedora, in preference to Debian. But I understand Fedora uses the latest versions of apps? This would seem to make it equivalent to Debian unstable (and just as unsuitable for production servers). I've never actually run Fedora though (and I haven't run Redhat for years), so perhaps there's some flexibility I don't know about.

Well, that is puzzling, because people are talking about upgrading to the latest Fedora, in preference to Debian. But I understand Fedora uses the latest versions of apps? This would seem to make it equivalent to Debian unstable (and just as unsuitable for production servers). I've never actually run Fedora though (and I haven't run Redhat for years), so perhaps there's some flexibility I don't know about.
Red Hat still run a tree called "Rawhide", which is "whatever we're up to at the moment". That is analogous to Debian unstable. With both distributions, every now and then the tree is stabilised, tested and released. The difference is that for Fedora Core it's every 6 months, and for Debian it is every few years.

The problem being pointed out is that people want to run newer things than existed when Woody (the current stable Debian release) was released. If Perry wants to install his RDF library, he either gets the feature set from 3 years ago with security fixes, or, in the case that the package is new, it simply does not exist in stable. The other option is backports of testing/unstable software to Woody, and we all have horror stories we can tell you about that. This is, of course, if you want to deal only with the software that is packaged by your distribution.

I saw someone a couple of days ago announce that a backport of Gnome 2.2 was now available for Woody. Just after the release of Gnome 2.6. People who package software aren't targeting Debian any more, and it's arguable that Debian's relevance is only decreasing. Maybe if someone made New Whizz-Bang Up To Date Thing with a Woody target, we'd be OK; however, free software builds on other free software, and whizz-bang things tend to rely on newer versions of software than what Woody provides. The latest version of apps isn't a bad thing; things move on, features get added. (Obligatory "My Servers Run Woody" disclaimer) Craig
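For context, using a Woody backport typically meant adding an unofficial apt source and installing from it explicitly, roughly as below. The hostname and package name are placeholders; real Woody-era backport sources were found via sites like apt-get.org.

```sh
# /etc/apt/sources.list -- add an unofficial backports line
# (hostname illustrative):
#   deb http://backports.example.org/debian woody-backports main

apt-get update
# Install just the backported package, rather than tracking
# the whole third-party archive:
apt-get -t woody-backports install somepackage   # placeholder name
```

(The -t selector assumes the archive publishes a proper Release file naming the suite; many ad-hoc backport sources didn't, in which case a plain apt-get install after the update did the job.)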

Personally, I find people harping on about running Debian on "production servers" fairly laughable. Nothing about old code makes it better to run on "production servers" than newer code. The only thing one could perhaps point out is that older code has been running longer, and hopefully is better tested and has fewer bugs. But this isn't actually always true.

If you have a server that you want to set up and forget about, then yes, you want a stable (as in non-changing) platform to run it on. Now, this could be any distribution you like, just as long as for the lifetime of the server's usefulness you have a source for bug & security fixes. No code is perfect; there will always be flaws, and what you want is for those flaws to be identified and fixed over this lifetime. This support can come from a company, the community, or in-house developers. Obviously in-house security support is more expensive in time/money.

The reason RH has done away with its stock Red Hat Linux product and gone for the Enterprise line is because of this cost. If you have lots of releases often, it gets more and more difficult to support them for long periods. So now they have ~1.5-yearly releases of RHEL and support them for 5 years, and this support costs you a couple of hundred USD a year. Money companies should be happy to pay in lieu of paying people in house to provide that security/bug support.

I guess you could treat Debian stable like RHEL. It's slow moving and provides a static system spec on which to build an app or service. But just don't expect it to be whizz-bang and modern. If your server needs modern features and you want long-term package support, then you're pretty much stuck with RHEL 3 until a new version of Debian stable appears. Another way of lowering the cost of security updates et al. and yet retaining modern features is to get a modern distro like Fedora Core and cut it down to the absolute minimum packages necessary to run your app or service.
And then harden the network and OS like crazy. The less there is running on a box the less there is to go wrong. If you want a mail server then just make it run mail and nothing else. This makes it less expensive to track a moving target like FC. Regards On Sun, 2004-04-04 at 16:54, Jason Le Vaillant wrote:
Greig McGill wrote:
Well, I assumed it went without saying that no one would run a production server on anything other than stable...
Well, that is puzzling, because people are talking about upgrading to the latest Fedora, in preference to Debian. But I understand Fedora uses the latest versions of apps? This would seem to make it equivalent to Debian unstable (and just as unsuitable for production servers). I've never actually run Fedora though (and I haven't run Redhat for years), so perhaps there's some flexibility I don't know about.
-- Oliver Jones » Director » oliver.jones(a)deeperdesign.com » +64 (21) 41 2238 Deeper Design Limited » +64 (7) 377 3328 » www.deeperdesign.com

Well, that is puzzling, because people are talking about upgrading to the latest Fedora, in preference to Debian. But I understand Fedora uses the latest versions of apps? This would seem to make it equivalent to Debian unstable (and just as unsuitable for production servers). I've never actually run Fedora though (and I haven't run Redhat for years), so perhaps there's some flexibility I don't know about.
Part of the puzzle here is that different distributions have different release schedules. Debian Woody is a stable release, but it's nearly two years old. Gentoo x86 2004.1 is a stable release, and it's about a month old. Fedora Core 1 is a stable release, and it's not very old either.

Personally, I run Gentoo ~x86 (the "unstable" branch) on my desktop and I've had some real show-stoppers sneak in. Things like the shadow package *overwriting* my /etc/pam.d/system-auth with what it considers to be safe defaults, and in the process completely clobbering my changes to do LDAP auth. So, no users can log in. That's not a *huge* problem until you realise that SSH doesn't accept root logins any more. Suddenly, your box is inaccessible. Another bug I had with Gentoo ~x86 was an init script change for the network interface. It tried to detect link state on the ethernet device before running DHCP; if it couldn't detect link, it would not bring the interface up. It was an attempt to make things quicker for laptop users who didn't like waiting 3 minutes or whatever for DHCP to time out. However, it meant I had no networking on boot up. The problem? A completely broken init script which wouldn't in any way work, but wasn't tested at all and was just committed into CVS and then into the initscripts package. Smart. Easy fixes, once you know about them. Show-stoppers otherwise.

I can't speak for Debian unstable, as I've not run it for ages, but I really don't want *any* chance of something that is even remotely untested going onto a production server. That said, Gentoo x86 and FC1, both stable releases, are more 'up to date' than Debian. Without getting into why this is[1], for a server or critical situation I'd prefer to put my trust in something that's been through the testing process a lot more than the unstable branch of a distribution. I guess the 'stable' bit refers not so much to how old the application or the package is, but how well it has been tested.
That's probably the best way of describing this. Certainly some bugs get through, even in Debian Woody, but you don't have a trivial package update leaving your system unbootable as a general rule. [1] or even if FC1 etc are actually more suitable for servers or critical situations.

I think actually the "stable" refers more to the lack of change rather than the "stability" of the codebase. The platform is stable: it doesn't change often, and when it does, it is just to backport a security fix from a newer version of package XYZ, etc. Perfect for long-lived systems.
I guess, the 'stable' bit refers not so much to how old the application or the package is, but how well it has been tested. That's probably the best way of describing this. Certainly some bugs get through, even in Debian Woody, but you don't have a trivial package update leaving your system unbootable as a general rule.
Regards -- Oliver Jones » Director » oliver.jones(a)deeperdesign.com » +64 (21) 41 2238 Deeper Design Limited » +64 (7) 377 3328 » www.deeperdesign.com

You can have both. And you can have a fairly fast-moving "stable" environment, IMO, if you can throw enough testers at it. Any community-based OS/distribution will have to rely on community testing, and might take a while to find and fix esoteric bugs.
I think actually the "stable" refers more to the lack of change rather than the "stability" of the codebase. The platform is stable: it doesn't change often, and when it does, it is just to backport a security fix from a newer version of package XYZ, etc. Perfect for long-lived systems.
/I guess, the 'stable' bit refers not so much to how old the application or the package is, but how well it has been tested. That's probably the best way of describing this. Certainly some bugs get through, even in Debian Woody, but you don't have a trivial package update leaving your system unbootable as a general rule. /

While on the subject of complete rebuilds, the BSD family is up for adoption :)
-----Original Message----- From: Kyle Carter [mailto:kyle(a)feet.net.nz] Sent: Saturday, April 03, 2004 9:38 PM To: wlug(a)list.waikato.ac.nz Subject: Re: [wlug] [Fwd: End of Life for Red Hat Linux 9]
NOOOOOOOOOO
this means another server rebuild... which is never a lot of fun.
id love to use debian if it wasnt so last year (if not even older) maybe this is my chance to find something else.
debian seems to be the shnizzle for keeping a box going through major system upgrades..
i wonder how badly my redhat 9 machine will break doing a fedora upgrade
at least i dont have to setup pppoe again...
-- Kyle Carter
participants (17)
- A. Pagaltzis
- Craig Box
- Daniel Lawson
- Drew Broadley
- DrWho?
- Glenn Ramsey
- Greig McGill
- James Braid
- James Spooner
- Jamie Curtis
- Jason Le Vaillant
- Kyle Carter
- Matthias Dallmeier
- Oliver Jones
- Orion Edwards
- Perry Lorier
- zcat