Bulk Throughput Measurements: SLAC to CERN



SLAC to CERN bulk throughput

The traceroute from SLAC to CERN indicates that the route is via ESnet, interchanging to CERN at STARTAP in Chicago. Pipechar indicates that the bottleneck is at the host, which has a 100 Mbps interface. Below we show throughput versus window size and number of streams for September 2000 and July 2001. For July 2001, the host at SLAC was a Sun 4500 with six CPUs running at 336 MHz and a Gbps Ethernet interface, running Solaris 2.8. At CERN the host was also a Sun, running Solaris 5.6 with a 100 Mbps Ethernet interface.

Measurements made on July 11, 2001, are shown to the right (iperf throughput, SLAC to CERN, July 2001). They indicate throughput maxima of around 70-82 Mbits/s. It is also apparent that the optimum throughput is not achieved with a single stream, even for a window as large as 4096 KBytes. The numbers on the peaks indicate the number of streams. The table below lists, for those numbers of streams, the window sizes and the product of window * streams.
Window (KB) | Streams | Window * Streams (MBytes)
The products of window * streams are close to ~1.4 MBytes, the product of the RTT (159 msec) and 85 Mbits/s, where 85 Mbits/s is roughly the bottleneck bandwidth measured by pipechar on this path (at this time the host only had a Fast Ethernet 100 Mbps interface).
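The rule of thumb above (aggregate window, i.e. window * streams, should roughly match the bandwidth-delay product RTT * bottleneck bandwidth) can be sketched as a small calculation. The helper names below are ours, not part of iperf or pipechar:

```python
def bdp_bytes(rtt_s: float, bottleneck_bps: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return rtt_s * bottleneck_bps / 8  # bits -> bytes

def window_per_stream(rtt_s: float, bottleneck_bps: float, streams: int) -> float:
    """Aggregate window split evenly across parallel TCP streams."""
    return bdp_bytes(rtt_s, bottleneck_bps) / streams

# Figures from the text: 159 ms RTT, ~85 Mbit/s bottleneck (pipechar).
total = bdp_bytes(0.159, 85e6)                    # aggregate bytes in flight
per_stream = window_per_stream(0.159, 85e6, 8)    # per-stream window, 8 streams
```

Note the exact product, 0.159 s * 85 Mbit/s, is about 1.7 MBytes; the measured window * streams products (~1.4 MBytes) are in the same ballpark rather than an exact match.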


We measured iperf TCP throughput from to on August 7, 2001. The maxima indicate a throughput of about 56 Mbits/s, which is less than that observed from SLAC to CERN (about 75 Mbits/s). The large throughputs (> 10 Mbits/s) obtained with one stream and large windows (>= 256 KBytes) are interesting but not understood. A similar phenomenon also occurs for IN2P3 to SLAC on July 25, 2001 (see below). We also made iperf measurements from CERN to SLAC with one stream and 2024 KByte and 4096 KByte windows, obtaining throughputs of 46.7 Mbits/s and 46.9 Mbits/s respectively.






On March 27th, we noticed that the total throughput from SLAC to CERN had dropped from about 340 Mbits/s to 240 Mbits/s. At first we assumed the change was due to network changes between CERN and Chicago, but this turned out not to be the case; the reason for the drop is still being investigated. In the meantime, on April 3 we decided to optimize the window size and number of streams used for tests to pcgiga, to see whether we could regain the original throughput or perhaps improve on it. Before tuning, we were using a window size of 32768k and a single stream, yielding ~340 Mbits/s at its peak. After tuning we were able to reach a maximum throughput of 483.9 Mbits/s. The goal is to get the largest throughput without saturating the link (>80% of the maximum Mbits/s). A window size of 2048k with 8 streams was chosen as the optimal setting for iperf transfers.
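The tuning described above amounts to searching (window, streams) combinations whose aggregate window covers the path's bandwidth-delay product. The actual sweep was done with repeated iperf runs; the sketch below only captures the selection logic, with illustrative numbers (~160 ms RTT, ~500 Mbit/s path) that are our assumptions, not measurements from the text:

```python
def aggregate_window_kb(window_kb: int, streams: int) -> int:
    """Total outstanding window across all parallel streams, in KBytes."""
    return window_kb * streams

def candidates(windows_kb, stream_counts, bdp_kb):
    """All (window, streams) pairs whose aggregate window covers the BDP,
    smallest aggregate first (less buffering, less risk of saturating)."""
    ok = [(w, s) for w in windows_kb for s in stream_counts
          if aggregate_window_kb(w, s) >= bdp_kb]
    return sorted(ok, key=lambda ws: aggregate_window_kb(*ws))

# Illustrative: ~160 ms RTT at ~500 Mbit/s needs ~10 MBytes in flight.
bdp_kb = int(0.160 * 500e6 / 8 / 1024)
viable = candidates([512, 1024, 2048, 4096], [1, 2, 4, 8, 16], bdp_kb)
```

In practice each viable pair would then be measured with iperf, and the 2048k / 8-stream combination was the one that performed best on this path.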





Sample Bulk Throughput Measurements

Cal Tech | CERN | Colorado | IN2P3 | INFN | Daresbury | Stanford to Daresbury | ANL | BNL | LBL | SLAC to Stanford's Campus


Summary of Bulk Throughput Measurements

Created August 25, 2000, last update August 29, 2001.
Comments to