I'd recommend against options IPFIREWALL_DEFAULT_TO_ACCEPT
. The default is to deny: the firewall comes up with just one rule, deny ip from any to any
, and stays that way until a script configures exactly what traffic should get through.
Follow-up note: RSA (one of the world's leading security technology companies) was recently hacked when part of their firewall was disabled during a maintenance window. This really underscores how quickly a system can be compromised given the right conditions.
If you insist on having the firewall accept everything until you explicitly block unwanted traffic, please consider using the tunable instead: add net.inet.ip.fw.default_to_accept=1
to loader.conf
. This has the added benefit of being easily changed (no kernel recompile) if you change your mind at some point in the future.
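A minimal sketch of that approach (the tunable name is taken from the text above; verify it exists on your release with `sysctl net.inet.ip.fw.default_to_accept`):

```
# /boot/loader.conf
net.inet.ip.fw.default_to_accept=1
```

Because this is a loader tunable rather than a compile-time option, reverting it is just a matter of removing the line and rebooting.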
I have never encountered this issue myself. However, you should probably increase your hash table width in order to reduce its depth. Using "dmesg", you'll see how many entries you currently have:
$ dmesg | grep '^IP route'
IP route cache hash table entries: 32768 (order: 5, 131072 bytes)
You can change this value with the kernel boot command line parameter rhash_entries
. First try it by hand, then add it to your lilo.conf
or grub.conf
.
For example: kernel vmlinux rhash_entries=131072
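A sketch of what the grub.conf entry might look like (the title, root device, and kernel image name are placeholders; only the `rhash_entries=131072` parameter comes from the text above):

```
# /boot/grub/grub.conf -- hypothetical entry
title HAProxy LB
    root (hd0,0)
    kernel /vmlinuz ro root=/dev/sda1 rhash_entries=131072
```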
It is possible that you have a very small hash table because you assigned little memory to your HAProxy VM (the route cache hash size is scaled according to total RAM).
Concerning tcp_mem
, be careful. Your initial settings suggest you were running with 1 GB of RAM, up to a third of which could be allocated to TCP sockets. Now you've allowed 367872 * 4096 bytes = 1.5 GB of RAM for TCP sockets. You should be very careful not to run out of memory. A rule of thumb is to give one third of the memory to HAProxy, another third to the TCP stack, and the last third to the rest of the system.
I suspect that your "out of socket memory" message comes from the default settings in tcp_rmem
and tcp_wmem
. By default, each socket gets 64 kB allocated on output and 87 kB on input. Since a proxied connection uses two sockets (client side and server side), that means roughly 300 kB per connection just for socket buffers. Add to that 16 or 32 kB for HAProxy itself, and you can see that with 1 GB of RAM you'll only support about 3000 connections.
By changing the default settings (the middle parameter) of tcp_rmem
and tcp_wmem
, you can get memory usage a lot lower. I get good results with values as low as 4096 for the write buffer, and 7300 or 16060 in tcp_rmem
(5 or 11 TCP segments). You can change those settings without restarting; however, they will only apply to new connections.
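A sketch of the corresponding sysctl changes (the middle value is the per-socket default discussed above and comes from the text; the min and max values here are assumed placeholders you should adapt, and the commands need root):

```shell
# Lower the default per-socket buffers: "min default max".
# Only new connections pick these values up; no restart required.
sysctl -w net.ipv4.tcp_wmem="4096 4096 262144"
sysctl -w net.ipv4.tcp_rmem="4096 16060 262144"
```

Persist them in /etc/sysctl.conf if the results look good.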
If you prefer not to touch your sysctls too much, the latest HAProxy, 1.4-dev8, allows you to tweak those parameters from the global configuration, and per side (client or server).
Hope this helps!
Best Answer
Take a look at the DTrace Toolkit for starters. Also look at "netstat -na" output: do you see a lot of connections in TIME_WAIT?
In the DTrace Toolkit (google it), the "connections" script may be of particular interest to you. This assumes you are running Solaris 10, of course.
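A quick, hypothetical one-liner for eyeballing that netstat output (the state is the last whitespace-separated column on both Solaris and Linux, though the exact header lines differ):

```shell
# Count sockets per TCP state, most common first.
netstat -na | awk '{print $NF}' | sort | uniq -c | sort -rn | head
```

A large count next to TIME_WAIT would point at the tuning discussed below.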
600 concurrent TCP connections is not a particularly large number.
If you do need tuning, you'll be using ndd to set kernel parameters. See: Internet Protocol Suite Tunable Parameters
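For illustration, a sketch of ndd usage (tcp_time_wait_interval is a real Solaris tunable governing how long sockets linger in TIME_WAIT; the value shown is an example only, and the set operation requires root):

```shell
# Read the current TIME_WAIT interval, in milliseconds.
ndd -get /dev/tcp tcp_time_wait_interval

# Example: lower it to 30 seconds.
ndd -set /dev/tcp tcp_time_wait_interval 30000
```

Note that ndd changes do not survive a reboot; persist anything you keep via an init script or /etc/system as appropriate.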