You can get a pretty good measure of this using the iostat
tool.
% iostat -dx /dev/sda 5
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 0.78 11.03 1.19 2.82 72.98 111.07 45.80 0.13 32.78 1.60 0.64
The disk utilisation is listed in the last column (%util), which the iostat man page defines as:
"Percentage of CPU time during which I/O requests were issued to the device (bandwidth utilization for the device). Device saturation occurs when this value is close to 100%."
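If you want to flag saturation automatically, a small wrapper around iostat will do. This is just a sketch: the sda device name, the 90% threshold and the 5-second interval are arbitrary choices, and column layouts differ between sysstat versions, but %util stays in the last column:
% iostat -dx /dev/sda 5 | awk '$1 == "sda" && $NF+0 > 90 { print "WARNING: %util at " $NF }'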
Short answer: you can't. Ports below 1024 can be opened only by root. As noted in the comments, you can work around this with CAP_NET_BIND_SERVICE, but applying that capability to the java binary would let every Java program on the system bind privileged ports, which is undesirable, if not a security risk.
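For completeness, granting that capability would look like the following; this is a sketch only, and /usr/bin/java is a placeholder path (as said above, doing this to a shared interpreter binary lets every program run through it bind privileged ports):
# setcap 'cap_net_bind_service=+ep' /usr/bin/java
# getcap /usr/bin/java
The second command just verifies that the capability was applied.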
Long answer: you can redirect connections on port 80 to some other port that you can open as a normal user.
Run as root:
# iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
Since connections over loopback devices (e.g. to localhost) do not traverse the PREROUTING chain, add this rule as well if you need the redirect to work for localhost (thanks @Francesco):
# iptables -t nat -I OUTPUT -p tcp -d 127.0.0.1 --dport 80 -j REDIRECT --to-ports 8080
NOTE: The above solution is not well suited for multi-user systems, as any user can open port 8080 (or any other high port you decide to use), thus intercepting the traffic. (Credits to CesarB).
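To check that the redirect actually works, you can put a throwaway listener on the high port and request the privileged one; python3's built-in http.server here is just a convenient stand-in for your real service:
% python3 -m http.server 8080 &
% curl http://localhost/
The localhost test only succeeds if the OUTPUT rule above is in place; from another machine, the PREROUTING rule alone is enough.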
EDIT: as asked in the comments, to delete the above rule, first list the NAT rules with their line numbers:
# iptables -t nat --line-numbers -n -L
This will output something like:
Chain PREROUTING (policy ACCEPT)
num target prot opt source destination
1 REDIRECT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 redir ports 8088
2 REDIRECT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 redir ports 8080
The rule you are interested in is number 2, so to delete it:
# iptables -t nat -D PREROUTING 2
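Alternatively, you can delete the rule by repeating its exact specification with -D instead of -A, which avoids depending on line numbers that shift as rules are added and removed:
# iptables -t nat -D PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080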
After lots of benchmarking with sysbench, I came to this conclusion:
To survive (performance-wise) a situation where a heavy sequential writer (such as dd) floods the page cache and starves small synchronous writes, just dump all elevators, queues and dirty page caches. The correct place for dirty pages is in the RAM of the hardware write cache.
Adjust dirty_ratio (or the newer dirty_bytes) as low as possible, but keep an eye on sequential throughput. In my particular case, 15 MB was optimal:
# echo 15000000 > /proc/sys/vm/dirty_bytes
This is more a hack than a solution, because gigabytes of RAM are now used for read caching only instead of for dirty cache. For dirty cache to work out well in this situation, the Linux kernel background flusher would need to average the speed at which the underlying device accepts requests and adjust background flushing accordingly. Not easy.
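To make the value stick across reboots, the usual route is sysctl; a minimal sketch, using the 15 MB value found optimal above (note that vm.dirty_bytes and vm.dirty_ratio are mutually exclusive: setting one zeroes the other):
# sysctl -w vm.dirty_bytes=15000000
# echo 'vm.dirty_bytes = 15000000' >> /etc/sysctl.conf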
Specifications and benchmarks for comparison:
Tested while dd'ing zeros to disk, sysbench showed huge success, boosting 10-thread fsync writes at 16 kB from 33 to 700 IOPS (idle limit: 1500 IOPS) and a single thread from 8 to 400 IOPS. Without load, IOPS were unaffected (~1500) and throughput was slightly reduced (from 251 MB/s to 216 MB/s).
The dd call, the sysbench preparation of test_file.0 (made unsparse so writes hit allocated blocks), and the sysbench calls for 10 threads and for one thread are sketched together below.
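The exact command lines are not reproduced here, so treat these as plausible reconstructions: file names, sizes and run times are assumptions, and the 16 kB block size is taken from the figures quoted above (sysbench 0.4 fileio syntax):
% dd if=/dev/zero of=dumpfile bs=1M
% dd if=/dev/zero of=test_file.0 bs=16384 count=65536
% sysbench --test=fileio --file-num=1 --file-total-size=1G --file-test-mode=rndwr --file-block-size=16384 --file-fsync-all=on --max-time=30 --num-threads=10 run
% sysbench --test=fileio --file-num=1 --file-total-size=1G --file-test-mode=rndwr --file-block-size=16384 --file-fsync-all=on --max-time=30 --num-threads=1 run
The first dd provides the sequential background load, the second writes real data into test_file.0 so it is not sparse, and the last two are the 10-thread and single-thread random-write runs.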
Smaller block sizes showed even more drastic numbers.
--file-block-size=4096 with 1 GB dirty_bytes:
--file-block-size=4096 with 15 MB dirty_bytes:
--file-block-size=4096 with 15 MB dirty_bytes on idle system:
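Only the block size changes between those runs; under the same assumptions as the sketch above, the 4 kB call would be:
% sysbench --test=fileio --file-num=1 --file-total-size=1G --file-test-mode=rndwr --file-block-size=4096 --file-fsync-all=on --max-time=30 --num-threads=10 run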
All benchmarks were run with sysbench 0.4.12 (a multi-threaded system evaluation benchmark).
Test system:
In summary, I am now sure this configuration will perform well in idle, high-load and even full-load situations for database traffic that would otherwise be starved by sequential traffic. Sequential throughput is higher than two gigabit links can deliver anyway, so there is no problem in reducing it a bit.