Bulk Throughput Measurements - University College London, England

Les Cottrell, created May 31 '02


Peter Clarke of the HEP group at University College London set up a host (henceforth referred to as node1; this is not its real name, which is withheld for security reasons). Node1 was a 447 MHz PIII with a 100 Mbits/s interface to a Cisco 65xx switch, and from there to the Manchester MAN and SuperJANET (2.5 Gbps backbone, I think). It was running Linux 2.4.16 with a standard TCP stack. The windows/buffers were set as follows:
cat /proc/sys/net/core/wmem_max = 8388608
cat /proc/sys/net/core/rmem_max = 8388608
cat /proc/sys/net/core/rmem_default = 65535
cat /proc/sys/net/core/wmem_default = 65535
cat /proc/sys/net/ipv4/tcp_rmem = 4096 87380 174760
cat /proc/sys/net/ipv4/tcp_wmem = 4096 16384 131072

At the SLAC end was a 1133 MHz PIII, also running Linux 2.4, with a 3Com GE interface to a Cisco 6509 and then via GE to a 622 Mbps ESnet link. The TCP stack at the SLAC end was Web100. The windows/buffers settings were:
cat /proc/sys/net/core/rmem_max = 8388608
cat /proc/sys/net/core/rmem_default = 65536
cat /proc/sys/net/core/wmem_default = 65536
cat /proc/sys/net/ipv4/tcp_rmem = 4096 87380 4194304
cat /proc/sys/net/ipv4/tcp_wmem = 4096 65536 4194304
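
The buffer sizes above can be sanity-checked against the bandwidth-delay product of the path. A minimal shell sketch (the 100 Mbits/s bottleneck and the ~149 ms round trip time are taken from the interface speed and traceroute figures in this page; the calculation itself is only illustrative):

```shell
#!/bin/sh
# Bandwidth-delay product: roughly how many bytes must be in flight
# to keep the pipe full.
BW_BITS_PER_SEC=100000000   # 100 Mbits/s interface at the UCL end
RTT_MS=149                  # round trip time seen in the traceroute below

# BDP in bytes = bandwidth (bits/s) * RTT (s) / 8 bits per byte
BDP_BYTES=$(( BW_BITS_PER_SEC / 1000 * RTT_MS / 8 ))
echo "BDP = $BDP_BYTES bytes"
```

This gives roughly 1.8 MB, comfortably below the 8388608 byte (8 MB) maximum socket buffers configured above, so buffer limits alone should not cap a single well-tuned stream on this path.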

Measurement methodology

We made 10 second iperf TCP measurements. For each measurement we used a fixed window size and number of parallel streams, and measured client end (SLAC) CPU times, iperf throughput and ping responses (loaded), and also recorded various Web100 variables:
               "PktsOut",        "DataBytesOut",
               "PktsRetrans",    "CongestionSignals",
               "SmoothedRTT",    "MinRTT",            "MaxRTT", "CurrentRTO",
               "SACKEnabled",    "NagleEnabled",
               "CurrentRwinSent","MaxRwinSent",       "MinRwinSent",
               "SndLimTimeRwin", "SndLimTimeCwnd",    "SndLimTimeSender");

Following each iperf measurement we ran ping for 10 seconds (unloaded) and recorded the responses. After each such pair (a 10 second iperf measurement followed by 10 seconds with no iperf traffic), the window size was changed and the pair repeated. When all selected window sizes had been measured, a different number of streams was selected and the cycle repeated.
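
The measurement cycle above can be sketched roughly as follows. This is an illustration only: the hostname is a placeholder (the real name is withheld), and the window sizes and stream counts shown are examples, not the values actually swept:

```shell
#!/bin/sh
# Sketch of the measurement cycle: for each stream count, sweep the
# window sizes; each 10 s loaded iperf run is followed by 10 s of
# unloaded ping.  Commands are echoed rather than executed.
HOST=node1.example.org        # placeholder for the withheld hostname
WINDOWS="64K 256K 1M 4M"      # illustrative window sizes
STREAMS="1 2 4 8 16"          # illustrative stream counts

COUNT=0
for p in $STREAMS; do
  for w in $WINDOWS; do
    echo "iperf -c $HOST -w $w -P $p -t 10   # 10 s loaded run, ping alongside"
    echo "ping -w 10 $HOST                   # 10 s unloaded ping"
    COUNT=$((COUNT+1))
  done
done
echo "$COUNT window/stream combinations"
```

The inner loop over windows and outer loop over stream counts mirror the order described above: all windows are measured before the stream count changes.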


A traceroute from SLAC to UCL is shown below:
3 ( 0.514 ms (ttl=252!) 
4 ( [AS293 - Energy Sciences Network (ESnet)] 0.845 ms (ttl=251!) 
5 ( [AS293 - Energy Sciences Network (ESnet)] 55.7 ms (ttl=250!) 
6 ( [AS293 - Energy Sciences Network (ESnet)] 68.8 ms (ttl=249!) 
7 ( 68.8 ms (ttl=249) 
8 ( 149 ms (ttl=248) 
9 ( 149 ms (ttl=247) 
10 ( 149 ms (ttl=246) 
11 ( 149 ms (ttl=245) 
12 ( 149 ms (ttl=244)
13 ( 150 ms (ttl=243) 
14 ( 149 ms (ttl=242) 
15 ( 149 ms (ttl=241)

The throughput topped out at about 90 Mbits/s (120 streams, 1024 KB window). 
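
As a rough consistency check (using the ~1.8 MB bandwidth-delay product implied by the 100 Mbits/s bottleneck and 149 ms RTT), with many parallel streams each stream only needs a small share of that product to fill the pipe:

```shell
#!/bin/sh
# Per-stream share of the bandwidth-delay product with many streams.
BDP_BYTES=1862500   # ~1.8 MB: 100 Mbits/s * 149 ms RTT / 8
STREAMS=120         # stream count at which throughput topped out

PER_STREAM=$(( BDP_BYTES / STREAMS ))
echo "each of $STREAMS streams needs only ~$PER_STREAM bytes of window"
```

This suggests why a large number of parallel streams can saturate the path even when each individual stream's effective window stays modest.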

A plot of the throughput vs. number of streams and window size is shown below:


Comments to