Short answer: you can't. Ports below 1024 can be opened only by root. As noted in the comments, you can work around this with the CAP_NET_BIND_SERVICE capability, but applying it to the java binary means every Java program on the system runs with that capability, which is undesirable, if not a security risk.
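If you do want the capability route anyway, a minimal sketch could look like the following (the paths and yourserver.jar are placeholders; using a dedicated copy of the JRE avoids granting the capability to every Java program on the system):
sudo cp -a /usr/lib/jvm/default-java /opt/port80-jre    # dedicated JRE copy; paths are examples
sudo setcap 'cap_net_bind_service=+ep' /opt/port80-jre/bin/java
/opt/port80-jre/bin/java -jar yourserver.jar            # yourserver.jar is a placeholder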
The long answer: you can redirect connections on port 80 to some other port you can open as normal user.
Run as root:
# iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
As connections over the loopback interface (e.g. to localhost) do not go through the PREROUTING chain, if you need to use localhost, etc., add this rule as well (thanks @Francesco):
# iptables -t nat -I OUTPUT -p tcp -d 127.0.0.1 --dport 80 -j REDIRECT --to-ports 8080
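With both rules in place, you can sanity-check the redirect; assuming your application is already listening on 8080, a request to port 80 should now reach it (the first form, run from another machine, exercises the PREROUTING rule, the second the OUTPUT rule; your-server is a placeholder):
$ curl -sI http://your-server/
$ curl -sI http://127.0.0.1/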
NOTE: The above solution is not well suited for multi-user systems, as any user can open port 8080 (or any other high port you decide to use), thus intercepting the traffic. (Credits to CesarB).
EDIT: as asked in the comments, to delete the above rule:
# iptables -t nat --line-numbers -n -L
This will output something like:
Chain PREROUTING (policy ACCEPT)
num target prot opt source destination
1 REDIRECT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 redir ports 8088
2 REDIRECT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 redir ports 8080
The rule you are interested in is number 2, so to delete it:
# iptables -t nat -D PREROUTING 2
I've done a bit of benchmarking with write operations in a loopback device. Here's the conclusion:
- If you sync after every write, then a loopback device performs significantly worse (almost twice as slow).
- If you allow the disk cache and the IO scheduler to do their job, then there is hardly any difference between using a loopback device and direct disk access (the two write modes compared are sketched right after this list).
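For reference, the two write modes boil down to these dd invocations (file names are placeholders; the exact commands used are listed in the setup section below):
dd if=/dev/zero bs=1M count=1000 of=testfile oflag=sync      # sync after every 1 MiB write
time (dd if=/dev/zero bs=1M count=1000 of=testfile; sync)    # let the cache and IO scheduler batch writes, sync once at the end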
Benchmark results
First, I ran a benchmark on an 8 GB loopback device in tmpfs, and on a loopback device inside that loopback device (with sync after every write operation):
ext4 in tmpfs:
Measured speed: 557, 567, 563, 558, 560, 559, 556, 556, 554, 557
Average speed : 558.7 MB/s (min 554 max 567)
ext4 in ext4 in tmpfs:
Measured speed: 296, 298, 295, 295, 299, 297, 294, 295, 296, 296
Average speed : 296.1 MB/s (min 294 max 299)
Clearly, there is some difference in performance when using loopback devices with sync-on-write.
Then I repeated the same test on my HDD.
ext4 (HDD, 1000 MB, 3 times):
Measured speed: 24.1, 23.6, 23.0
Average speed : 23.5 MB/s (min 23.0 max 24.1)
ext4 in ext4 (HDD, 945MB):
Measured speed: 12.9, 13.0, 12.7
Average speed : 12.8 MB/s (min 12.7 max 13.0)
Same benchmark on HDD, now without syncing after every write (time (dd if=/dev/zero bs=1M count=1000 of=file; sync), measured as <size>/<time in seconds>).
ext4 (HDD, 1000 MB):
Measured speed: 84.3, 86.1, 83.9, 86.1, 87.7
Average speed : 85.6 MB/s (min 83.9 max 87.7)
ext4 in ext4 (HDD, 945MB):
Measured speed: 89.9, 97.2, 82.9, 84.0, 82.7
Average speed : 87.3 MB/s (min 82.7 max 97.2)
(surprisingly, the loopback benchmark looks better than the raw-disk benchmark, presumably because of the smaller size of the loopback device, thus less time is spent on the actual sync-to-disk)
Benchmark setup
First, I created a loopback filesystem of 8G in my /tmp (tmpfs):
truncate /tmp/file -s 8G
mkfs.ext4 /tmp/file
sudo mount /tmp/file /mnt/
sudo chown $USER /mnt/
Then I established a baseline by filling the mounted loopback file with data:
$ dd if=/dev/zero bs=1M of=/mnt/bigfile oflag=sync
dd: error writing '/mnt/bigfile': No space left on device
7492+0 records in
7491+0 records out
7855763456 bytes (7.9 GB) copied, 14.0959 s, 557 MB/s
After doing that, I created another loopback device in the previous loopback device:
mkdir /tmp/mountpoint
mkfs.ext4 /mnt/bigfile
sudo mount /mnt/bigfile /tmp/mountpoint
sudo chown $USER /tmp/mountpoint
And ran the benchmark again, ten times:
$ dd if=/dev/zero bs=1M of=/tmp/mountpoint/file oflag=sync
...
7171379200 bytes (7.2 GB) copied, 27.0111 s, 265 MB/s
and then I unmounted the test file and removed it:
sudo umount /tmp/mountpoint
sudo umount /mnt
(similarly for the test on the HDD, except I also added count=1000 to prevent the test from filling my whole disk)
(and for the test without syncing after every write, I timed the dd and sync operations together)
Best Answer
Splitting the buffer cache is detrimental, but the effect it has is minimal. I'd guess that it's so small that it is basically impossible to measure.
You have to remember that data is not shared between different mount points, either.
While different file systems use different allocation buffers, it's not like the memory is allocated just to sit there and look pretty. Data from slabtop for a system running 3 different file systems (XFS, ext4, btrfs) shows that any really sizeable cache has a utilisation level of over 90%. As such, if you're using multiple file systems in parallel, the cost is about equal to losing 5% of system memory, less if the computer is not a dedicated file server.
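For reference, a snapshot like the one referred to above can be obtained with something along these lines (slabtop is part of procps; -o prints the output once instead of refreshing, -s c sorts by cache size):
$ sudo slabtop -o -s c | head -n 15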