Bulk Throughput Measurements - TRIUMF



On October 6, 2001, measurements were made between pharlap and a host at TRIUMF. The TRIUMF host is an 871 MHz PC with a 100 Mbps interface, running Linux 2.2 and located at the TRI-University Meson Facility (TRIUMF) in Vancouver, Canada. It was configured to have large window buffers:
/proc/sys/net/core/wmem_max     = 65535
/proc/sys/net/core/rmem_max     = 65535
/proc/sys/net/core/rmem_default = 65535
/proc/sys/net/core/wmem_default = 65535
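These limits can be raised at run time by writing to the same /proc entries. A sketch (requires root; the 4 MB maximum shown here is illustrative, not the value in use above):

```shell
# Raise the kernel's socket-buffer limits (run as root).
# 4194304 (4 MB) is an illustrative maximum, not the value used above.
echo 4194304 > /proc/sys/net/core/wmem_max
echo 4194304 > /proc/sys/net/core/rmem_max
echo 65535   > /proc/sys/net/core/wmem_default
echo 65535   > /proc/sys/net/core/rmem_default
```

Applications can then request buffers up to the new maximum via setsockopt SO_SNDBUF/SO_RCVBUF.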
The window buffer sizes on pharlap are shown below:
ndd /dev/tcp tcp_max_buf    = 4194304
ndd /dev/tcp tcp_cwnd_max   = 2097152
ndd /dev/tcp tcp_xmit_hiwat = 16384
ndd /dev/tcp tcp_recv_hiwat = 24576
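On Solaris the default per-connection watermarks can be raised with ndd -set. A sketch (requires root; 65536 is illustrative, not the value in use on pharlap above):

```shell
# Raise the Solaris default send/receive watermarks (run as root).
# 65536 is an illustrative value, not the setting shown above.
ndd -set /dev/tcp tcp_xmit_hiwat 65536
ndd -set /dev/tcp tcp_recv_hiwat 65536
```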
The ping response from SLAC to TRIUMF was min/avg/max (std) = 75/82/641 (7) msec. The pipechar from SLAC to TRIUMF was also recorded. Andrew Daviel of TRIUMF reports that there is a bottleneck from UBC to CA*net, described to us as "The ATM PVC from UBC to BCNET/CA*net3 is only 22Mbps" and "The existing GIGAPOP router at BCNET is not wire speed." The pipechar from TRIUMF to SLAC also shows this bottleneck.
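The reported 22 Mbps bottleneck and 82 msec average RTT imply a bandwidth-delay product well above the 64 KB socket buffers configured above. A quick check, using the numbers from the measurements above:

```shell
# Bandwidth-delay product for the reported path:
# 22 Mbit/s bottleneck x 82 ms average round-trip time.
awk 'BEGIN {
    bw  = 22e6 / 8        # bottleneck bandwidth, bytes/s
    rtt = 0.082           # average RTT, seconds
    printf "BDP = %.0f bytes\n", bw * rtt
}'
```

This gives a pipe size of roughly 225 KB, more than three times the 65535-byte wmem_max/rmem_max limits shown above.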

The iperf TCP throughput (shown below) is disappointing, with maxima (the top 10% of the measurements) of about 16 Mbits/s.
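For comparison, a single TCP stream is bounded by window/RTT; with the 65535-byte buffers and 82 msec RTT above, that bound can be sketched as:

```shell
# Window-limited throughput bound for one TCP stream: window / RTT.
awk 'BEGIN {
    win = 65535           # socket buffer (window) size, bytes
    rtt = 0.082           # average RTT, seconds
    printf "%.1f Mbit/s\n", win * 8 / rtt / 1e6
}'
```

That is about 6.4 Mbits/s per stream, so the 16 Mbits/s maxima presumably reflect multiple parallel streams rather than a single connection.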

Created October 6, 2001, last update October 6, 2001.