IEPM

Bulk Throughput Measurements - APAN-JP

Les Cottrell, created July 7 '02

Configurations

Ayumu KUBOTA of the Network Engineering Group, KDDI R&D Laboratories, Inc., provided access to a host directly connected to the APAN Tokyo XP via Gigabit Ethernet. Because many research institutes in Japan use APAN for international research activities, throughput measurements between SLAC and the APAN Tokyo XP should provide good reference data for them.

Remote Host Configuration

> Here is the spec of our Linux box. We will be happy to create an account
> for the measurement task when requested.
>
> OS: Linux-2.4.18 (RedHat 7.3)
> CPU: Pentium-III 1GHz
> RAM: 1GB
> NIC: 10/100 Ethernet
>        (203.181.248.51)
>      Gigabit Ethernet (Intel PRO/1000F 64bit/66MHz PCI)
>        (203.181.248.186)
>      Since the default route goes through eth0, we have to add a static
>      route for the monitoring host in order to use the Gigabit Ethernet
>      interface.
The window/buffer sizes were set as follows:
cat /proc/sys/net/core/wmem_max     = 8388608
cat /proc/sys/net/core/rmem_max     = 8388608
cat /proc/sys/net/core/rmem_default = 65536
cat /proc/sys/net/core/wmem_default = 65536
cat /proc/sys/net/ipv4/tcp_rmem     = 4096 87380  174760
cat /proc/sys/net/ipv4/tcp_wmem     = 4096 16384  131072

SLAC Monitoring host configuration

At the SLAC end was a 1133 MHz PIII, also running Linux 2.4, with a 3Com GE interface to a Cisco 6509 and from there via GE to a 622 Mbps ESnet link. The TCP stack at the SLAC end was instrumented with Web100. The window/buffer settings were:
cat /proc/sys/net/core/rmem_max     = 8388608
cat /proc/sys/net/core/rmem_default = 65536
cat /proc/sys/net/core/wmem_default = 65536
cat /proc/sys/net/ipv4/tcp_rmem     = 4096 87380  4194304
cat /proc/sys/net/ipv4/tcp_wmem     = 4096 65536  4194304
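
For context, the rmem_max/wmem_max values above set the ceiling on the socket buffers that an application such as iperf can request with setsockopt(). A minimal Perl sketch of the idea (illustrative only, not part of the measurement scripts; the 4 MB request is an arbitrary example):

use strict;
use Socket;   # exports PF_INET, SOCK_STREAM, SOL_SOCKET, SO_SNDBUF

# Ask for a 4 MB TCP send buffer; on Linux the grant is limited
# according to /proc/sys/net/core/wmem_max shown above.
socket(SOCK, PF_INET, SOCK_STREAM, getprotobyname('tcp')) or die "socket: $!";
setsockopt(SOCK, SOL_SOCKET, SO_SNDBUF, 4 * 1024 * 1024) or die "setsockopt: $!";
my $granted = unpack('i', getsockopt(SOCK, SOL_SOCKET, SO_SNDBUF));
print "requested 4194304 bytes, kernel granted $granted bytes\n";
close(SOCK);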

Traceroutes

The ping RTT was about 187 msec. The traceroute from SLAC to this host was:
traceroute to perf3-ge.jp.apan.net (203.181.248.186): 1-30 hops, 38 byte packets
 3  I2-GATEWAY.Stanford.EDU (192.68.191.83)  0.264 ms (ttl=253)
 4  STAN.POS.calren2.NET (171.64.1.213)  0.331 ms (ttl=252)
 5  SUNV--STAN.POS.calren2.net (198.32.249.73)  0.770 ms (ttl=251)
 6  Abilene--QSV.POS.calren2.net (198.32.249.162)  0.901 ms (ttl=250)
 7  sttl-snva.abilene.ucaid.edu (198.32.8.9)  18.8 ms (ttl=249)
 8  TRANSPAC-PWAVE.pnw-gigapop.net (198.32.170.46)  19.0 ms (ttl=247!)
 9  192.203.116.34 (192.203.116.34)  137 ms (ttl=246!)
10  perf3-ge.jp.apan.net (203.181.248.186)  135 ms (ttl=246)
The traceroute back from node1.jp.apan.jp to SLAC was:
traceroute to WWW4.slac.stanford.edu (134.79.18.136), 30 hops max, 38 byte packets
 1  tpr3-ge1-2-0-29 (203.181.248.185)  0.281 ms  0.242 ms  0.239 ms
 2  192.203.116.33 (192.203.116.33)  116.127 ms  116.125 ms  116.118 ms
 3  Abilene-PWAVE.pnw-gigapop.net (198.32.170.43)  116.033 ms  116.050 ms  116.034 ms
 4  snva-sttl.abilene.ucaid.edu (198.32.8.10)  134.034 ms  134.019 ms  134.005 ms
 5  198.32.249.161 (198.32.249.161)  134.179 ms  134.118 ms  134.099 ms
 6  STAN--SUNV.POS.calren2.net (198.32.249.74)  134.393 ms  134.386 ms  134.414 ms
 7  i2-gateway.Stanford.EDU (171.64.1.214)  134.566 ms  134.471 ms  134.488 ms

Measurement methodology

We used tcpload.pl to make 10 second iperf TCP measurements. For each measurement we used a fixed window size and number of parallel streams, and we measured the CPU times at the client (SLAC) end, the iperf throughput and the ping responses under load, and also recorded the following Web100 variables:
 @vars=("StartTime",
               "PktsOut",        "DataBytesOut",
               "PktsRetrans",    "CongestionSignals",
               "SmoothedRTT",    "MinRTT",            "MaxRTT", "CurrentRTO",
               "SACKEnabled",    "NagleEnabled",
               "CurrentRwinSent","MaxRwinSent",       "MinRwinSent",
               "SndLimTimeRwin", "SndLimTimeCwnd",    "SndLimTimeSender");

Following each iperf measurement we ran ping for 10 seconds (unloaded) and recorded the responses. After each such pair (a 10 second iperf measurement followed by 10 seconds with no iperf traffic), the window size was changed and the pair repeated. When all of the selected window sizes had been measured, a different number of streams was selected and the cycle repeated.
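
The cycle can be sketched roughly as follows. This is a simplified illustration rather than the actual tcpload.pl: the window sizes, stream counts and iperf/ping options shown are assumed for the example, and the CPU-time, loaded-ping and Web100 bookkeeping is reduced to comments.

#!/usr/bin/perl
# Rough sketch of the measurement cycle (not the real tcpload.pl):
# for each number of parallel streams, step through the window sizes,
# running 10 seconds of iperf followed by 10 seconds of unloaded ping.
use strict;

my $remote  = 'perf3-ge.jp.apan.net';        # APAN Tokyo XP host
my @windows = ('64K', '256K', '1M', '4M');   # illustrative window sizes
my @streams = (1, 5, 10, 20, 40, 90);        # illustrative stream counts

foreach my $p (@streams) {
    foreach my $w (@windows) {
        # 10 second iperf TCP test with a fixed window and stream count
        # (tcpload.pl also records CPU times, loaded ping responses and
        # the Web100 variables listed above during this period)
        my $iperf = `iperf -c $remote -t 10 -w $w -P $p 2>&1`;
        # 10 seconds of unloaded ping following the transfer
        my $ping  = `ping -c 10 $remote 2>&1`;
        print "streams=$p window=$w\n$iperf$ping";
    }
}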

Results

The throughput topped out at 440 Mbits/s (90 streams, 1024 KB window); see the plot below. Using http://www.indo.com/distance/ I estimate the distance from San Francisco to Tokyo to be about 8276 km. This gives a bandwidth-distance product of 3,641,440 Mbit-km for the 440 Mbits/s throughput, which is about 70% of the multi-stream Internet2 Land Speed Record (http://www.internet2.edu/html/i2lsr.shtml).
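
For reference, the bandwidth-distance product quoted above is just the measured throughput multiplied by the estimated distance; a one-line check using the figures above:

# Back-of-the-envelope check of the bandwidth-distance product
my $throughput = 440;      # Mbits/s, best iperf result above
my $distance   = 8276;     # km, estimated San Francisco - Tokyo distance
printf "%d Mbit-km\n", $throughput * $distance;   # prints 3641440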



Comments to iepm-l@slac.stanford.edu