Re: [wlug] "Too many open files" error

Thanks for the helpful suggestions. Using the commands you gave me highlights that it is JBoss and the JVM that appear to be using up the resources. However, the "Too many open files" error has now transformed itself into a JVM crashing problem. I have had this problem off and on for a number of months, but it has now reached the point where I can't work around it. I am fairly convinced that it is a resource limitation problem somewhere, but I'm no expert in Linux and am running out of time. The server is on a small network, so I was wondering if someone would be prepared to come around and look at it, for an appropriate fee, to see if there was any obvious (to them) problem. Cheers Roger

Roger wrote:
Have you tried (as root):

    echo 65536 > /proc/sys/fs/file-max

That will raise the system-wide limit on open files; the default IIRC is about 1,000. It will be reset to its default on reboot. On most distros you can make it take effect on every boot by adding the line

    fs.file-max = 65536

to /etc/sysctl.conf and then running /sbin/sysctl -p to re-read that file. The /proc/sys filesystem contains configuration variables that can be used to change kernel settings on the fly. Lots of fun stuff to tinker with in there ;)

If it's only the JVM that's crashing now, have a look at:

    ulimit -a

This will show you how many file descriptors each process is limited to (1024 by default IIRC?). It can be changed with ulimit -n 32000. ulimits are only valid for the shell they are set in, and for all shells created from that one. I.e., opening an xterm and running a program, then opening another xterm and changing the ulimit there, will not fix the ulimit on the program you just started.

It should take significant effort to manage to use 32,000 file handles in one process, and 65,000 system-wide :)

If you're still having trouble, perhaps try asking on IRC; there are several knowledgeable people around who may be able to help. (Don't be put off by the banter that goes on :) We hang out on Undernet in #wlug.
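To pull the above together, here is a rough sketch of the checks and changes in one place. The read-only checks are safe to run anywhere; the lines that actually raise limits need root, so they are left commented out. The 65536 and 32000 values are just the examples from above, not recommendations tuned for your box.

```shell
# Read-only checks: see where you stand before changing anything.
echo "System-wide max open files:   $(cat /proc/sys/fs/file-max)"
echo "Per-process descriptor limit: $(ulimit -n)"

# Counting entries in /proc/<pid>/fd shows how many descriptors a process
# actually holds open -- useful for watching whether JBoss is leaking them.
echo "Open descriptors in this shell: $(ls /proc/$$/fd | wc -l)"

# As root, raise the system-wide limit immediately (lost on reboot):
#   echo 65536 > /proc/sys/fs/file-max

# To make it persist, add the setting to /etc/sysctl.conf and reload:
#   echo "fs.file-max = 65536" >> /etc/sysctl.conf
#   /sbin/sysctl -p

# Raise the per-process limit for this shell and its children, then
# start the JVM from the same shell so it inherits the new limit:
#   ulimit -n 32000
```

Replace `$$` with the JBoss process's PID to watch its descriptor count over time; a number that climbs steadily and never falls usually points at a descriptor leak rather than a limit that is simply too low.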
participants (2)
-
Perry Lorier
-
Roger