Network Throughput – Measuring with Netcat vs. CIFS/SMB Transfer Rates

measurement, netcat, networking, server-message-block

I have been attempting to measure and benchmark our LAN's throughput as part of a larger project. Our LAN is built with Cat5e cabling and HP ProCurve 1800-24G switches, which support 10/100/1000 Mbps auto-sensing. The physical topology is rather simple: there is a ProCurve in our server rack that all of the servers connect to (I refer to it, somewhat incorrectly, as the "backbone" switch), and each of the three switches that the client machines connect to is linked to that "backbone" switch over its own cable and port. It's basically a hub-and-spoke design. The workstation I am testing from is two switches away from the "backbone" switch and has an old IDE drive in it; I used HDTune to measure the drive's speed at approximately 60 MB/s. Our servers are HP DL380 G5 machines with a RAID 6 array of 72 GB single-port 15K SAS drives and two dual-core Intel Xeon CPUs at 3.0 GHz.

I have read a few of the other questions about this topic (here and here) as well as the Tom's Hardware article, so I am aware that my actual throughput will fall far short of the theoretical maximum of a 1 Gbit network (roughly 125 MB/s). What really puzzles me, though, is the discrepancy between the numbers I get using netcat and the numbers I get by timing a file transfer over CIFS/SMB.

I am using the Cygwin version of netcat like so:

On the "server":

nc -vv -l -p 1234 > /dev/null

On the "client":

time yes|nc -vv -n 192.168.1.10 1234
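
To make the arithmetic simpler, I could also send a fixed amount of data instead of an open-ended yes stream. A sketch of that (assuming dd is available under Cygwin, with an arbitrary 1 GiB payload) would be:

time dd if=/dev/zero bs=1M count=1024 | nc -vv -n 192.168.1.10 1234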

For testing the file transfer rate over CIFS/SMB, I just do something like this, also from Cygwin:

time cp local_file.iso /remote_dir/

Now, unless I have done my math completely wrong (divide bytes transferred by seconds to get bytes per second, and convert from there), transferring with netcat is really, really slow: on the order of 4-8 MB/s. On the other hand, using a 120 MB .iso as the test file, I calculate the throughput to the CIFS/SMB server at around 30-35 MB/s. That is still far slower than I would expect, but a completely different number from what I get using netcat.
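
For concreteness, the arithmetic is just size divided by elapsed time; the figures below are only illustrative, not my actual measurements:

# e.g. a 120 MB file copied in roughly 3.8 s
# 120 MB / 3.8 s ≈ 31.6 MB/s
awk 'BEGIN { printf "%.1f MB/s\n", 120 / 3.8 }'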

I should mention that I am actually using two different servers for the netcat and CIFS/SMB tests: a XenServer host for netcat and a Windows Server 2003 machine for CIFS (I don't have the option of installing netcat on the Windows server). They are identical in terms of hardware. I know this makes it a bit of an apples-to-oranges comparison, but if I do a CIFS transfer to the XenServer host I get transfer rates right around 30 MB/s, which agrees with the result I get against the Windows Server 2003 machine.

I guess I really have two questions: 1) Why the different numbers between netcat and timing the CIFS/SMB file transfers? 2) Why is my actual throughput so low? I know my disks can only push data to the NIC so fast, but surely I should see something around 60 MB/s?

Best Answer

First, see how fast a read is on the Xen host: use something like hdparm, or just do a local copy. That will give you an idea of the disk performance.
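
A minimal sketch of what that check could look like on the Xen host (device and file names are placeholders, so adjust them for your system):

hdparm -t /dev/sda

# or time a plain sequential read; GNU dd prints a MB/s figure when it finishes
dd if=/path/to/some_large_file.iso of=/dev/null bs=1M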

To transfer files, try a different tool such as dd. It is strange that nc is so slow; it is very fast for me, for example.
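
For example, reading the same .iso off disk and pushing it through nc would exercise the same disk-plus-network path as the CIFS copy (this sketch reuses the address, port, and file name from the question, so adjust as needed):

On the receiving host:

nc -vv -l -p 1234 > /dev/null

On the sending host:

time dd if=local_file.iso bs=1M | nc -vv -n 192.168.1.10 1234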