Bulk Throughput Measurements - PNW



On October 21, 2001, measurements were made between pharlap at SLAC and gigatcp1, located at the Pacific North West Internet 2 GigaPoP at the University of Washington. Pharlap was a Sun E4500 with 6*336MHz CPUs and a GE interface, running Solaris 5.8. SLAC had a 1Gbps link to the Stanford campus and from there a 622Mbps link to CalREN and Internet 2. Gigatcp1 was a PC with a 1000MHz CPU running FreeBSD 4.3, with a GE interface and connected to Internet 2 at 2.5Gbps. The route was set up in both directions to use Internet 2. The ping response from SLAC to PNW was min/avg/max = 55/58/1830 msec, median = 55 msec. The pipechar from SLAC to PNW was also recorded. The window buffer sizes on pharlap were as shown below:
ndd /dev/tcp tcp_max_buf = 4194304
ndd /dev/tcp tcp_cwnd_max = 2097152
ndd /dev/tcp tcp_xmit_hiwat = 16384
ndd /dev/tcp tcp_recv_hiwat = 24576
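On Solaris these parameters can be inspected and changed at run time with ndd (as root). A minimal sketch, using the values from the listing above (they are the values in effect for this test, not recommended defaults):

```shell
# Query then set the Solaris TCP tunables shown above (must be root).
ndd /dev/tcp tcp_max_buf                 # show current maximum socket buffer
ndd -set /dev/tcp tcp_max_buf 4194304    # largest buffer setsockopt may request
ndd -set /dev/tcp tcp_cwnd_max 2097152   # upper bound on the congestion window
ndd -set /dev/tcp tcp_xmit_hiwat 16384   # default send buffer size
ndd -set /dev/tcp tcp_recv_hiwat 24576   # default receive buffer size
```

Note that tcp_xmit_hiwat/tcp_recv_hiwat are only defaults; an application such as iperf can request larger buffers, up to tcp_max_buf, via setsockopt.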

The window buffer sizes on gigatcp1 were as shown below:
kern.ipc.maxsockbuf: 20480000
net.inet.tcp.rfc1323: 1
net.inet.tcp.sendspace: 1024000
net.inet.tcp.recvspace: 1024000
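On FreeBSD the corresponding settings are made with sysctl (as root). A sketch, again using the values in effect for this test:

```shell
# Set the FreeBSD socket-buffer tunables shown above (must be root).
sysctl -w kern.ipc.maxsockbuf=20480000   # hard limit on any socket buffer
sysctl -w net.inet.tcp.rfc1323=1         # enable window scaling and timestamps
sysctl -w net.inet.tcp.sendspace=1024000 # default TCP send buffer
sysctl -w net.inet.tcp.recvspace=1024000 # default TCP receive buffer
```

The rfc1323 setting matters here: without window scaling the TCP window is limited to 64KBytes, far too small for a ~55 msec path at these rates.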

Unfortunately I was unable to install iperf with multi-thread support on gigatcp2 (the host appears to lack pthreads support), so we were unable to make tests with multiple streams.
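For reference, had a threaded iperf build been available, the multiple-stream tests would have looked something like the following (the window size, stream count, and duration here are illustrative, not values from this test):

```shell
# Server side (gigatcp1): listen with a large receive window.
iperf -s -w 1024K

# Client side (pharlap): 4 parallel TCP streams (-P needs a
# threaded iperf build), 512 KByte window, 30 second test.
iperf -c gigatcp1 -w 512K -P 4 -t 30
```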

The throughputs (seen below) are symmetric in the two directions, with maxima (10% of the measurements are above this value) of > 228Mbps. It can be seen that the improvement in throughput is roughly linear for window sizes from 8KBytes up through 512KBytes. The lines through the points for window sizes 8KBytes through 512KBytes are fits to a straight line passing through the origin; the parameters of the fits are shown together with the R^2 of the fits.
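The linear dependence on window size is what one expects while the transfer is window limited, since then throughput <= window/RTT, i.e. the slope of the fitted line through the origin is roughly 1/RTT. A quick check with the 55 msec ping RTT measured above (illustrative arithmetic only):

```shell
# Window-limited throughput estimate: bits/sec = window_bytes * 8 / RTT_sec.
# RTT of 55 msec is the measured minimum ping time from this page.
RTT_MS=55
for WIN_KB in 8 64 512; do
  BPS=$(( WIN_KB * 1024 * 8 * 1000 / RTT_MS ))
  echo "window ${WIN_KB}KBytes -> $(( BPS / 1000000 )) Mbps"
done
```

For the 512KByte window this gives about 76 Mbps, so reaching the observed > 228Mbps maxima requires either still larger windows (the buffer limits above allow them) or multiple parallel streams.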

Created October 21, 2001, last update October 21, 2001.
Comments to