The Summary Table summarizes the results.
Throughput available on links changes from month to month as links are
upgraded to higher capacity (compare the measurements made with CERN, INFN
and IN2P3 in February 2000 with those made in September 2000), or as a
different path and service provider is chosen (compare SLAC to Caltech
via ESnet with SLAC to Caltech via I2 (Internet 2)). This improvement in
throughput can be seen in the graph to the right.
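A useful rule of thumb behind the observations below is that a single TCP stream can deliver at most one window of data per round-trip time, so its thruput is bounded by window/RTT. A minimal sketch (the 150 ms RTT is an illustrative transatlantic value, not a measurement from the table):

```python
def max_tcp_throughput_mbps(window_bytes, rtt_ms):
    # A single TCP stream can have at most one window of data in flight
    # per round trip, so its thruput is bounded by window / RTT.
    return window_bytes * 8 / (rtt_ms / 1000.0) / 1e6

# A 64 KB window on a path with a 150 ms round-trip time caps a single
# stream at about 3.5 Mbits/s, regardless of the link capacity.
print(max_tcp_throughput_mbps(64 * 1024, 150.0))
```

This is why both larger windows and multiple parallel streams (which multiply the effective window) raise the achievable thruput.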
- Thruput can vary by factors of 3 to 5 from one minute to the next.
Utilization measurements made at 5 minute intervals cannot
shed much light on such changes. Since the variation is often due to causes
over which we have no control, we have found it necessary to make multiple
measurements in order to gather sufficient statistics to make the results
meaningful.
- The thruput of individual streams is similar, usually within
a factor of 2 of one another.
- It can be seen that pathchar does a reasonable job of predicting the
bandwidth and thus what TCP thruput the link can carry. Note that in the case of
the SLAC to SLAC and the NTON tests the Predicted Bottleneck column entry
is the nominal link capacity and not that measured by pathchar.
- For most cases one does not get close to the maximum thruput with only one
stream regardless of window size.
- On the other hand it appears to be possible to get close to the maximum
thruput even with small windows provided there are a sufficient number
of parallel streams.
- It appears that for many links, increasing the number of streams
is usually more effective
than increasing the window size. This may be
due to several factors:
- Multiple streams with small windows may be more agile in
responding to changing congestion of the link than a few streams with large
windows. For more on this see
Bulk thruput: windows vs streams.
- Multiple streams may be more effective when the cross-traffic exceeds the
test iperf traffic. When the iperf test traffic dominates, it is more likely
that the congestion window size adjustment feedback mechanisms of TCP will
improve the performance of the iperf stream.
If, on the other hand, the iperf test traffic is much smaller than the
cross-traffic at the bottleneck, then throttling back its thruput by
reducing the window size will have little effect, and it may never be able
to use the large windows (from a discussion with Matt Mathis of PSC).
- We may be seeing some failure of the TCP
congestion algorithms (e.g. due to packet re-ordering,
see for example
Packet Reordering is Not Pathological
Network Behavior, Jon C. R. Bennett, Craig Partridge and
Nicholas Shectman, IEEE/ACM Transactions on Networking,
Vol. 7, No. 6, December 1999, p. 789, and
Packet reordering; the latter
evaluates the prevalence of reordering for about 250 hosts
monitored by PingER) or due to
competition with other traffic on a congested link.
- The need for multiple streams to improve performance puts
an extra burden on applications, which must be able to support
asynchronous I/O.
- Opening multiple streams may be considered overly aggressive and
not "TCP compatible". Future
versions of TCP may use the concept of flows to aggregate multiple
streams to help with congestion management.
- On the other hand, it may be easier to set up multiple streams than to
set up a different version of the kernel that supports larger windows.
Also, as pointed out to me by Jason Leigh of the University of Illinois at Chicago,
libraries are emerging that will make parallel sockets simpler. One such library,
for example, provides both a parallel TCP implementation and a non-parallel
implementation with the same API calls (except where one has to give a
parameter for the number of sockets).
A problem with parallel sockets is knowing how many sockets to open;
the optimal number changes with the bandwidth available at any given moment.
As parallel sockets drive the network to saturation,
performance appears to fluctuate more and more (see for example
Bulk thruput: windows vs. streams).
- The improvements available by increasing the window
size and the number of streams can amount to a factor of 5 or more.
- Using iperf with Linux is limited by the inability to use large window sizes.
- Throughputs may be
asymmetric, i.e. the throughput may be noticeably
greater in one direction than the reverse.
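The windows-versus-streams tradeoff discussed above can be explored with a small experiment. The sketch below is mine, not part of iperf or any tool mentioned here: it opens N parallel TCP streams over the loopback interface (so the absolute numbers say nothing about a real WAN), caps each sender's socket buffer to emulate a small window, and reports the aggregate thruput.

```python
import socket
import threading
import time

CHUNK = b"x" * 65536  # 64 KB send unit


def sink(listener, totals, idx):
    # Accept one connection and count the bytes received on it.
    conn, _ = listener.accept()
    got = 0
    while True:
        data = conn.recv(65536)
        if not data:
            break
        got += len(data)
    conn.close()
    totals[idx] = got


def measure(n_streams, window_bytes, mb_per_stream=4):
    """Aggregate thruput (MB/s) of n parallel loopback TCP streams."""
    listeners, receivers = [], []
    totals = [0] * n_streams
    for i in range(n_streams):
        ls = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        ls.bind(("127.0.0.1", 0))  # ephemeral port per stream
        ls.listen(1)
        listeners.append(ls)
        t = threading.Thread(target=sink, args=(ls, totals, i))
        t.start()
        receivers.append(t)

    senders = []
    for ls in listeners:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # A small SO_SNDBUF limits the data in flight, emulating
        # a small TCP window on the sending side.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, window_bytes)
        s.connect(ls.getsockname())
        senders.append(s)

    def pump(s):
        sent, target = 0, mb_per_stream * 1024 * 1024
        while sent < target:
            s.sendall(CHUNK)
            sent += len(CHUNK)
        s.close()

    start = time.time()
    pumps = [threading.Thread(target=pump, args=(s,)) for s in senders]
    for p in pumps:
        p.start()
    for p in pumps:
        p.join()
    for t in receivers:
        t.join()
    elapsed = time.time() - start
    for ls in listeners:
        ls.close()
    return sum(totals) / (1024 * 1024) / elapsed
```

Comparing, say, `measure(1, 32 * 1024)` with `measure(8, 32 * 1024)` and `measure(1, 256 * 1024)` mirrors the window-versus-streams comparisons in the Summary Table, though on a real wide-area path the RTT and cross-traffic effects described above dominate.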
Many people contributed to the above information. In particular we thank:
Gilles Farrache of IN2P3, Lyon, France for getting us an account there, Loric Totay of IN2P3
for information on how to discover the hardware configuration of a
Solaris host, and Jerome Bernier [firstname.lastname@example.org] for getting us access to
a special test machine at IN2P3;
Dantong Yu of BNL for getting us logon
access to a machine at BNL and useful information on the network
configuration and security at BNL; Brian Tierney of LBL for useful discussions on
achieving high throughput; Robin Tasker and Paul Kummer at DL for assistance in getting an account
at DL; Olivier Martin of CERN for setting up the account and for many interesting
discussions.
Sample Bulk Throughput Measurements
Cal Tech | CERN | Colorado | IN2P3 | INFN | DARESBURY |
Stanford to DARESBURY | ANL | BNL | LBL | SLAC to Stanford's Campus
Created August 25, 2000, last update August 29, 2001.
Comments to email@example.com