IEPM

TCP Stacks on production links:
Scalable and High Speed TCP single streams versus
a Reno TCP single stream

Fabrizio Coccetti and Les Cottrell. Created 17 March '03, last update 2 April '03

Overview | High Speed TCP and Reno TCP from SLAC to CERN | High Speed TCP and Reno TCP from SLAC to NIKHEF | High Speed TCP and Reno TCP from SLAC to APAN | High Speed TCP and Reno TCP from SLAC to Caltech | Scalable TCP and Reno TCP from SLAC to CERN | Scalable TCP and Reno TCP from SLAC to NIKHEF | Scalable TCP and Reno TCP from SLAC to APAN | Scalable TCP and Reno TCP from SLAC to Caltech | Two Reno TCP streams from SLAC to CERN | Two Reno TCP streams from SLAC to APAN

Fast TCP single stream versus Reno TCP single stream from SLAC to several destinations | Fast TCP single stream versus Reno TCP multiple streams from SLAC to CERN
 

Overview

On this page we run tests using two single streams from two different machines, each using a different TCP stack, to the same destination machine.
All measurements use the standard MTU (1500 bytes) and txqueuelen (100 packets); the TCP window was chosen appropriately large (8 MB).
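An 8 MB window is sized to the bandwidth-delay product of paths like these: a sender must keep one BDP of data in flight to fill the pipe. As a minimal sketch (the 1 Gbit/s capacity and 64 ms RTT below are illustrative assumptions, not measured values from these tests):

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes that must be in flight to fill the path."""
    return bandwidth_bps / 8 * rtt_s

# Illustrative example: a 1 Gbit/s path with a 64 ms RTT (assumed numbers)
window = bdp_bytes(1e9, 0.064)
print(window)  # 8000000.0 bytes, i.e. about 8 MB
```

A smaller window would cap throughput at window/RTT regardless of the TCP stack in use.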

An Iperf single stream is started from S1 to the Remote Host.
At the same time, another Iperf single stream is started from S2 to the same Remote Host.
We display the throughput of the two streams every 5 seconds.
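The procedure above can be sketched as follows; the host name, duration, and exact flag values are illustrative assumptions (iperf's -c, -w, -i, and -t options select the server, window size, report interval, and test duration):

```python
import subprocess

REMOTE = "remote.example.org"  # hypothetical destination host

def iperf_cmd(window="8M", interval=5, duration=600):
    # One sender's command line; run this on S1 and S2 simultaneously,
    # each host using a different TCP stack in its kernel.
    return ["iperf", "-c", REMOTE, "-w", window,
            "-i", str(interval), "-t", str(duration)]

cmd = iperf_cmd()
# subprocess.run(cmd)  # uncomment on a host with iperf installed
print(" ".join(cmd))
```

Because the TCP stack is a property of the sending kernel, the two machines differ only in that respect; the destination and all iperf parameters are kept identical.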

To read about how an Iperf single stream using a new TCP stack performs versus multiple Reno streams, follow this link.

We tested the following stacks: High Speed TCP, Scalable TCP, and Fast TCP, each against Reno TCP.

This page reports a summary of High Speed TCP, Fast TCP, and Scalable TCP versus a Reno single stream.

High Speed TCP and Reno TCP from SLAC to CERN

In this case we are using a window size of 8 MB. It is visible how High Speed TCP recovers much faster than Reno TCP, which has a linear increase.
The green dotted line shows the RTT between SLAC and CERN during the measurement. The RTT reaches a peak just before both Reno and High Speed TCP drop.

Three other graphs, showing the same behavior as above, with measurements taken at different times of the day.

 

High Speed TCP and Reno TCP from SLAC to NIKHEF

In this case we are using a window size of 8 MB. It is visible how High Speed TCP recovers much faster than Reno TCP, which has a linear increase.
The green dotted line shows the RTT between SLAC and NIKHEF during the measurement. The RTT reaches a peak just before both Reno and High Speed TCP drop.

 

High Speed TCP and Reno TCP from SLAC to APAN

Window size = 8 MB

A magnification of the first 80 seconds of the previous graph. After 30 seconds a congestion event occurs, the RTT increases, and Reno TCP drops; High Speed TCP recovers immediately.

 

High Speed TCP and Reno TCP from SLAC to Caltech

Window size = 8 MB

Same as above, but with a better view of the throughput change after 400 seconds.

 

Scalable TCP and Reno TCP from SLAC to CERN

Window size = 8 MB
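The advantage Scalable TCP shows over Reno in these plots follows from their published increase/decrease rules: Reno halves its window on loss and adds one segment per RTT, while Scalable TCP cuts its window by 1/8 and grows it by roughly 1% per RTT, making its recovery time independent of window size. A simplified sketch in units of segments (not a packet-level simulation):

```python
def reno_recovery_rtts(w):
    """RTTs for Reno to regrow to window w (segments) after a loss.

    Reno halves cwnd on loss, then adds 1 segment per RTT (linear increase)."""
    cwnd, rtts = w / 2, 0
    while cwnd < w:
        cwnd += 1
        rtts += 1
    return rtts

def scalable_recovery_rtts(w):
    """RTTs for Scalable TCP to regrow to window w after a loss.

    Scalable TCP cuts cwnd by 1/8 on loss, then grows ~1% per RTT (MIMD)."""
    cwnd, rtts = w * 0.875, 0
    while cwnd < w:
        cwnd *= 1.01
        rtts += 1
    return rtts

print(reno_recovery_rtts(1000))      # 500 RTTs: grows with the window
print(scalable_recovery_rtts(1000))  # 14 RTTs
print(scalable_recovery_rtts(8000))  # 14 RTTs: independent of the window
```

With an 8 MB window (on the order of 5000 segments of 1500 bytes), Reno needs thousands of RTTs to recover from a single loss, while Scalable TCP needs roughly the same dozen-odd RTTs at any window size.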

Scalable TCP and Reno TCP from SLAC to NIKHEF

Window size = 8 MB

Scalable TCP and Reno TCP from SLAC to APAN

Window size = 8 MB

 

Scalable TCP and Reno TCP from SLAC to Caltech

Window size = 8 MB