Bulk Throughput Measurements - SDSC



SLAC to SDSC bulk throughput

The measurements were made between pharlap at SLAC and torah at the San Diego Supercomputer Center, San Diego, California, USA. Pharlap is a Sun E4500 with 6*336MHz cpus and a maximum buffer size of 8MBytes, running Solaris 5.8. Torah is a Sun E10K with 8*333MHz cpus running Solaris 5.7, with a GE interface to the campus network. The pipechar from SLAC to torah indicates that the route is via ESnet to Sunnyvale, where it crosses to CalREN2/UCAID. The bottleneck appears to be the last hop to torah, at 130Mbits/s. SLAC had an OC3 interface to ESnet at this time; the CalREN2 connection to UCSD was at OC12.

Measurements made on September 27, 2001, from pharlap to torah are shown to the right. They indicate that the throughput maxima are around 119Mbps (the top 10% of throughputs measured are above 118.8Mbps). It is apparent that a single stream with a 1MByte window gets close to the maximum throughput. The unloaded RTT from SLAC to SDSC was min/avg/max (std) = 27.7/29.4/581 (2) msec. This corresponds to a window size of about 580KBytes for a bottleneck of 118Mbits/s. We repeated the measurements from pharlap to a 440MHz Sun Ultra 10 with both a 100Mbps and a GE interface, running Solaris 5.8; the maximum throughput we could achieve was about 10Mbps. The pipechar measurements supported this conclusion.
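The window size quoted above comes from the bandwidth-delay product: to keep a path full, TCP needs a window of roughly bottleneck bandwidth times round-trip time. A minimal sketch of that calculation, using the 130Mbits/s bottleneck hop and 29.4ms average RTT quoted in the text (the helper function is illustrative, not a SLAC tool):

```python
def window_bytes(bottleneck_bps, rtt_s):
    """Bandwidth-delay product: bytes in flight needed to fill the path."""
    return bottleneck_bps * rtt_s / 8

# Figures from the text: 130 Mbit/s bottleneck hop, 29.4 ms average RTT.
bdp = window_bytes(130e6, 0.0294)
print(f"window ~= {bdp / 1024:.0f} KBytes")
```

Note that the result is sensitive to which RTT and bottleneck estimates are plugged in, which is why the quoted ~580KByte figure differs somewhat from a back-of-envelope value computed from the average RTT alone.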


On October 1, 2001 we made measurements of iperf TCP throughput from SLAC to multivac at SDSC. Multivac was a Sun E450 with 4*400MHz cpus running Solaris 5.8. It had a GE interface, and the connection from SDSC to the Internet was at OC12 (622Mbps). The ping RTTs were min/avg/max = 26/28/108 msec. The throughput results are shown below. The maxima (i.e. throughput for the top 10% of the measurements) were over 457Mbits/s.
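The "maxima" reported here and above are a 90th-percentile style statistic: the value that the top 10% of throughput samples exceed. A minimal sketch of computing it, assuming `throughputs` holds iperf measurements in Mbits/s (the sample values are hypothetical, not the actual data):

```python
def top_decile_floor(samples):
    """Return the smallest value among the top 10% of samples,
    i.e. the threshold the top 10% of measurements exceed or meet."""
    ordered = sorted(samples)
    cutoff = int(len(ordered) * 0.9)  # index where the top decile starts
    return ordered[cutoff]

# Hypothetical iperf throughput samples in Mbits/s.
throughputs = [440, 452, 457, 460, 431, 445, 462, 458, 449, 455]
print(top_decile_floor(throughputs))  # -> 462
```

Quoting a top-decile figure rather than the single best run damps the effect of one lucky measurement while still characterizing what the path can deliver.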

Summary of Bulk Throughput Measurements

Created September 27, 2001, last update September 27, 2001.