"Too many open files" error

I am running an application server on Red Hat 8.0, and every so often the machine generates the error message "Too many open files" and then locks up. I have to restart the machine from the console to rectify the situation. Can anyone tell me whether there is a configuration setting I can change to increase the number of permitted open files? Is there a command that lets me monitor the number of open files on the system?
Thanks in anticipation,
Roger

On Tue, 14 Oct 2003, Roger wrote:
I am running an application server on Red Hat 8.0, and every so often the machine generates the error message "Too many open files" and then locks up. I have to restart the machine from the console to rectify the situation. Can anyone tell me whether there is a configuration setting I can change to increase the number of permitted open files? Is there a command that lets me monitor the number of open files on the system?
On my system (kernel 2.6.0-testsomething) there is a file, /proc/sys/fs/file-max, that defines the maximum number of file handles that can be open on the system at any one time. Mine is set to 52420; to change it, run (as root, of course):
  echo 131072 > /proc/sys/fs/file-max
Congratulate yourself on not running Solaris, where changing this requires a reboot.
Regards,
Richard
--
Richard Stevenson
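A minimal sketch of the relevant commands on a box like Roger's (the 131072 value is only an example; on 2.4 kernels file-nr's three fields are allocated, free, and maximum handles):

  # see the current system-wide ceiling on open file handles
  cat /proc/sys/fs/file-max
  # see how many handles are in use right now (allocated / free / max)
  cat /proc/sys/fs/file-nr
  # raise the ceiling on the running system (as root)
  echo 131072 > /proc/sys/fs/file-max
  # the same thing via sysctl; putting "fs.file-max = 131072" in
  # /etc/sysctl.conf makes the change survive a reboot
  sysctl -w fs.file-max=131072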

Roger wrote:
I am running an application server on Red Hat 8.0, and every so often the machine generates the error message "Too many open files" and then locks up. I have to restart the machine from the console to rectify the situation. Can anyone tell me whether there is a configuration setting I can change to increase the number of permitted open files? Is there a command that lets me monitor the number of open files on the system?
The other reply was for kernel 2.6-something, but as far as I know, under kernel 2.4.x and earlier there is a hard-coded limit in the kernel: /usr/include/linux/limits.h has #define OPEN_MAX 256 /* # open files a process may have */ (but that is per-process, not in total).
You could install and try the lsof(8) command. I haven't used it much, but it stands for "LiSt Open Files", so it sounds like what you want :p
John McPherson
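For the monitoring part, a rough sketch of how lsof might be used (the PID 1234 is just a placeholder; run as root to see every process):

  # count all open files on the system
  lsof | wc -l
  # list the files held open by a single process, e.g. PID 1234
  lsof -p 1234
  # rank processes by how many descriptors they hold (worst offenders last)
  lsof | awk '{print $2, $1}' | sort | uniq -c | sort -n | tail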

John R. McPherson wrote:
Roger wrote:
I am running an application server on Red Hat 8.0, and every so often the machine generates the error message "Too many open files" and then locks up. I have to restart the machine from the console to rectify the situation. Can anyone tell me whether there is a configuration setting I can change to increase the number of permitted open files? Is there a command that lets me monitor the number of open files on the system?
The other reply was for kernel 2.6-something, but as far as I know, under kernel 2.4.x and earlier there is a hard-coded limit in the kernel: /usr/include/linux/limits.h has #define OPEN_MAX 256 /* # open files a process may have */ (but that is per-process, not in total).
You could install and try the lsof(8) command. I haven't used it much, but it stands for "LiSt Open Files", so it sounds like what you want :p
Er, no. Under 2.4 a process can have a stupidly large number of fds open. To raise the system-wide limit you need to modify /proc/sys/fs/file-max (and perhaps /proc/sys/fs/inode-max); to raise the per-process limit you can use ulimit to go above the 256 default you mentioned above. Running out of fds is a pretty nasty business. What on earth are you doing to that poor machine? Are you running lots of programs (like hundreds of copies of something)?
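To make the per-process side concrete, a sketch assuming a bash shell (the 4096 value and the user name "roger" are only placeholders):

  # show the current per-process limit on open files
  ulimit -n
  # raise it for this shell and everything started from it; an ordinary user
  # can only raise the soft limit up to the hard limit, root can go further
  ulimit -n 4096
  # for a persistent per-user limit, a line like the following can go in
  # /etc/security/limits.conf (read by pam_limits at login, if it is enabled):
  #   roger  soft  nofile  4096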
participants (4)
- John R. McPherson
- Perry Lorier
- Richard Stevenson
- Roger