I'm having problems with a (usually) latency-bound desktop Java application connecting to a custom database server.
When it runs on a remote host (Windows XP) it is fast: a big form opens in under 2 seconds. When it runs on the same host the database is on (accessed via X11vnc and NX) it is very slow: the same form opens in around 20 seconds. The server is running SuSE Linux Enterprise Server 10.
What I checked (commands sketched below):
- iptables are clean (no rules in filter, raw, mangle or nat; all chains on ACCEPT)
- routing is normal (just the default route and the local network)
- ebtables is not even installed
- tc is clean
- ping latency to localhost is around 0.007 ms for both 64-byte and 1500-byte packets; latency to the remote host is around 0.8 ms
- loopback throughput is around 500 MiB/s (tested with netcat)
- different Java VMs (both 1.5 and 1.6)
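For completeness, a minimal sketch of the commands behind these checks. The port and transfer sizes are arbitrary, and traditional netcat syntax is assumed (the OpenBSD variant drops the -p):

    # Firewall, routing and traffic-control state
    iptables -L -n; iptables -t nat -L -n; iptables -t mangle -L -n; iptables -t raw -L -n
    ip route show
    tc qdisc show

    # Latency to localhost with small and full-size packets
    # (-s sets the ICMP payload; 1472 + 28 bytes of headers = a 1500-byte packet)
    ping -c 10 -s 56 localhost
    ping -c 10 -s 1472 localhost

    # Loopback throughput: start a sink, then push 1 GiB through it
    nc -l -p 9999 > /dev/null &
    dd if=/dev/zero bs=1M count=1024 | nc localhost 9999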
While looking at atop there don't seem to be any bottlenecks:
- CPU utilisation of the java and database processes is very low (java ~20%, database <5%); with remote access CPU utilisation is moderate (around 30% for the database). The server is a quad-core 2.66 GHz, the client a Core 2 Duo 2.33 GHz, and the system is otherwise idle.
- there are hardly any disk reads/writes during the long query (about 5-10 reads in total)
The only thing that differs between the remote and local runs is network utilisation: the local process pulls data at about 1200 kbps, while the remote one does about 15 Mbps.
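One way to watch those rates and the per-process activity side by side while the form loads, assuming the sysstat package is available:

    # Per-interface throughput in one-second samples (watch the lo line)
    sar -n DEV 1

    # Per-process CPU and disk activity for the Java side
    pidstat -u -d -p $(pgrep -d, java) 1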
I'm currently working on duplicating the problem on my own hardware, so any tips along those lines are welcome too.
EDIT: Changing the lo interface MTU from the default 16k down to 1500 fixes the issue. The problem has since been duplicated on 64-bit Debian Lenny.
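For reference, the change amounts to the following (via iproute2; ifconfig lo mtu 1500 does the same on older setups). Note that it is not persistent across reboots, so the equivalent setting belongs in the distribution's network configuration files:

    # Check the current loopback MTU
    ip link show lo

    # Drop it to the standard Ethernet MTU; takes effect immediately
    ip link set dev lo mtu 1500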
Best Answer
All I've got for you are more debugging ideas in no particular order...
- tcpdump -i lo, it might help figure out whether there's an obvious pattern to the packets being transmitted
- echo 1 > /proc/sys/net/ipv4/tcp_low_latency
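A slightly fleshed-out version of both ideas; the port filter is a placeholder, so substitute whatever port the custom database actually listens on:

    # Capture full packets on loopback for later inspection, e.g. in Wireshark
    # (replace 5432 with the database's real port)
    tcpdump -i lo -n -s 0 -w lo-capture.pcap port 5432

    # Tell the TCP stack to favour low latency over throughput
    echo 1 > /proc/sys/net/ipv4/tcp_low_latency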