Re: [wlug] "Too many open files" error


Roger wrote:
John R. McPherson wrote:
Roger wrote:
I am running an application server on Red Hat 8.0 and every so often the machine generates an error message "Too many files open" and then locks up. I have to restart the machine from the console to rectify the situation. Can anyone please tell me if there is some configuration setting that I can change to increase the number of permitted files? Is there some command that allows me to monitor the number of open files in the system?
The other reply was for kernel version 2.6-something, but as far as I know, under kernel 2.4.x and earlier, there is a hard-coded limit in the kernel. /usr/include/linux/limits.h has:

#define OPEN_MAX 256 /* # open files a process may have */

(but that is per-process, not in total).
You could install and try the lsof(8) command. I haven't used it much, but it stands for "LiSt Open Files", so it sounds like what you want :p
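For instance, a couple of quick checks with it (a sketch; assumes lsof is installed):

```shell
# Total number of open files across the system:
# one line of lsof output per open file, minus the header line
lsof | tail -n +2 | wc -l

# Files held open by a single process, e.g. the current shell
lsof -p $$
```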
Er, no. Under 2.4 a process can have a stupidly large number of fds open. You need to modify /proc/sys/fs/file-max (and perhaps /proc/sys/fs/inode-max), which is the system-wide limit; to raise the per-process limit, you can use ulimit to go above the default of 256 you mentioned above.
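In sketch form (paths as on a 2.4-era system; the numbers are examples only, not recommendations):

```shell
# Show the current system-wide limits
# (inode-max existed on 2.4.x but was dropped in later kernels)
cat /proc/sys/fs/file-max
cat /proc/sys/fs/file-nr      # allocated / free / max file handles

# Raise the system-wide limit (must be root); example value only
echo 65536 > /proc/sys/fs/file-max

# Raise the per-process limit for this shell and its children
ulimit -n          # show the current soft limit
ulimit -n 4096     # raise it (up to the hard limit; root can go higher)
```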
Running out of fds is a pretty nasty business; what on earth are you doing to that poor machine? Are you running lots of programs (like hundreds of copies of something)?
That is part of the problem: I am not precisely sure why there are so many open files. I am running a fairly large J2EE application on JBoss and it seems to load heaps of files into the JVM. I may need to change some settings in JBoss, but that of course is the subject for another forum.
Thanks for all the help. Roger
lsof will show you all files that all programs have open. Kinda a long list.

ps ax | wc -l

will give you an estimate of how many processes you have running (as will "top").

ls /proc/*/fd/* 2>/dev/null | cut -d / -f 3 | uniq -c | sort -n | tail

That will give you a top-10 listing of processes using the most file descriptors. :)
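A slightly more robust variant of that last pipeline counts the entries in each /proc/&lt;pid&gt;/fd directory directly, which avoids ls following the fd symlinks (a sketch; you need enough privilege to read other users' fd directories, so run it as root to see everything):

```shell
# Count open fds per process and show the top 10 consumers.
# Output: "<fd count> <pid>", sorted so the biggest users are last.
for d in /proc/[0-9]*/fd; do
    pid=${d#/proc/}          # strip leading "/proc/"
    pid=${pid%/fd}           # strip trailing "/fd", leaving the PID
    printf '%d %s\n' "$(ls "$d" 2>/dev/null | wc -l)" "$pid"
done | sort -n | tail
```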
participants (2):
- Perry Lorier
- Roger