Project Report    Date: February 6, 2002
Internet End-to-end Performance Monitoring (IEPM)
Project Type: Base
PI: R. Les Cottrell, PhD          Institution: Stanford Linear Accelerator Center (SLAC)
In the last half year the IEPM-PingER
project has evolved to create the IEPM-BW project, adding network and
application bandwidth monitoring. The popular PingER
measurements continue to be supported, are expanding in coverage, and are
vital to the worldwide collaborations of HENP and other disciplines.
Progress with collecting high throughput measurements with
IEPM-BW has been excellent, and we are now reliably making continuously
available measurements on network and application throughput from SLAC to about
32 remote hosts in 8 countries. Ambitious plans are underway to use the data for
prediction, application steering and research, and to tie this work in more
closely with various Grid and HENP activities.
The PingER graphs have been improved so that they are produced faster. We
have added new metrics: duplicate packets, out of order packets, jitter, minimum
RTT, and conditional loss probability. We have defined ~45 affinity groups,
cover 73 countries and have 37 monitoring sites (including all PPDG sites). FNAL
has totally revamped the FNAL PingER ping data
archive to improve robustness, ensure more reliable collection of data, speed
up the graphing and add new facilities.
Data has been shared between the SLAC and FNAL archive sites to provide
the maximum coverage. We have added an extra 5 monitoring sites (SDSC, LANL,
Rice, Milan, Trieste). Operational issues
include tracking, and working with the contacts for, monitored hosts that are having problems
providing data; identifying ping rate limiting or blocking; and working with the
contacts at the remote sites to get alternative hosts.
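To illustrate how the new metrics can be derived from the raw ping responses, the sketch below (Python, purely illustrative; it is not the actual PingER implementation, and the data format is hypothetical) computes the minimum RTT, a simple jitter estimate and the conditional loss probability from one set of ping results.

    # Illustrative sketch only, not the PingER implementation.
    # 'replies' maps ICMP sequence number -> RTT in ms; missing entries are lost packets.
    def pinger_metrics(replies, packets_sent):
        rtts = [replies[s] for s in sorted(replies)]
        min_rtt = min(rtts) if rtts else None

        # Jitter estimated as the mean absolute difference of successive RTTs.
        diffs = [abs(b - a) for a, b in zip(rtts, rtts[1:])]
        jitter = sum(diffs) / len(diffs) if diffs else 0.0

        # Conditional loss probability: P(packet i lost | packet i-1 lost).
        lost = [s not in replies for s in range(packets_sent)]
        pairs = [(a, b) for a, b in zip(lost, lost[1:]) if a]
        cond_loss = sum(b for _, b in pairs) / len(pairs) if pairs else 0.0

        return {"min_rtt_ms": min_rtt, "jitter_ms": jitter, "cond_loss_prob": cond_loss}

    # Example: 10 pings with packets 3 and 4 lost.
    sample = {0: 160.2, 1: 161.0, 2: 159.8, 5: 163.4, 6: 160.1, 7: 160.0, 8: 162.5, 9: 160.3}
    print(pinger_metrics(sample, 10))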
We provided PingER ping data to
the Network Weather Service (NWS) and Daresbury Lab
and worked with them to provide NWS predictions.
To understand the factors that affect high network and
application throughput for disciplines such as the Grid or High Energy and
Nuclear Physics (HENP), we undertook to make measurements of throughput between
SLAC and various collaborators with high speed links. This is reported on at http://www-iepm.slac.stanford.edu/monitoring/bulk/index.html
for TCP and http://www-iepm.slac.stanford.edu/monitoring/bulk/bbcp.html
for file copy/transfer.
We worked with Stanislav Shalunov of Internet2 to understand the QBone Scavenger Service (QBSS) and then built testbeds (with a 10 Mbit/s and a
100 Mbit/s bottleneck) to evaluate its utility and performance. This is reported
on in http://www-iepm.slac.stanford.edu/monitoring/qbss/measure.html.
We found that QBSS is very effective in using the available bandwidth, that
scavenger-marked traffic backs off quickly (<~ 1 sec) in the face of
congestion from higher priority services (e.g. best effort), and that it can
reduce the impact on delays for other competing applications with higher
priorities. We have written an article on our QoS
experiences for CENIC, which is planned to be published in the next couple of months.
The bbcp secure file copy tool that supports large windows
and multiple streams has been enhanced to support network measurements. The
following features have been added: memory to memory copies as well as disk to
disk; periodic reporting of incremental and cumulative throughputs;
self-rate-limiting; and setting DiffServ Code Point (DSCP)
values for QoS testing. A paper on bbcp, entitled Peer-to-Peer
Computing for Secure High Performance Data Copying, by Andy Hanushevsky,
Artem Trunov & Les Cottrell, was published at CHEP01.
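Marking traffic for QBSS or other DiffServ classes only requires setting the DSCP bits in the IP header of outgoing packets. The fragment below is a minimal Python sketch of the idea (bbcp itself is written in C++, so this is not its actual code); QBSS uses DSCP 8, which occupies the upper six bits of the TOS byte.

    # Minimal sketch (not bbcp's implementation): mark a TCP socket so its
    # traffic is treated as QBone Scavenger Service (lower than best effort).
    import socket

    QBSS_DSCP = 8               # QBSS code point (binary 001000)
    tos = QBSS_DSCP << 2        # DSCP occupies the upper 6 bits of the TOS byte

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    # ... connect and send bulk data as usual; QBSS-aware routers will squeeze
    # this traffic down first when higher priority traffic appears.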
We installed and configured Cisco Netflow
to make passive measurements of flows at the SLAC border. We gathered the Netflow records and, for large flows (> 10 Mbits/s),
validated that the passive Netflow throughputs give
reasonably good agreement with iperf TCP network
throughput measurements. This is documented at http://www.slac.stanford.edu/comp/net/netflow/thru.html.
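The comparison itself is straightforward: for each flow record the achieved throughput is the flow's byte count divided by its duration, and only large flows are compared with the active iperf results. The Python sketch below is illustrative; the field names follow the Netflow v5 record layout, and the actual SLAC analysis scripts may differ.

    # Illustrative sketch of the passive throughput estimate from Netflow records.
    # Each Netflow v5 record carries dOctets (bytes) plus First/Last timestamps
    # (router uptime in milliseconds when the flow started and ended).
    def flow_mbps(record):
        duration_s = (record["Last"] - record["First"]) / 1000.0
        if duration_s <= 0:
            return 0.0
        return record["dOctets"] * 8 / duration_s / 1e6

    flows = [
        {"dOctets": 750000000, "First": 10000, "Last": 70000},  # 60 s, 100 Mbit/s
        {"dOctets": 1200000, "First": 20000, "Last": 21000},    # small flow
    ]

    # Keep only large flows (> 10 Mbit/s) for comparison with iperf measurements.
    large_flows = [flow_mbps(f) for f in flows if flow_mbps(f) > 10.0]
    print(large_flows)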
For SC2001, we proposed and had accepted a demonstration
entitled Bandwidth to the World (see http://www-iepm.slac.stanford.edu/monitoring/bulk/sc2001/
). Briefly, the demonstration emulated an HENP tier 0 accelerator site distributing large volumes
of data to about 25 collaborator sites. This required getting logon accounts at
the remote sites, building tools to run on 3 PCs at SC2001 to send the data,
record the throughput, analyse and present the
results via the web. We also had to understand and measure the optimum
configuration values to use (windows and streams, cpu speeds, OSs etc.), see http://www-iepm.slac.stanford.edu/monitoring/bulk/window-vs-streams.html.
We achieved a sustained 1.6 Gbits/s over a 2 Gbit/s link to 17 sites in 5
countries. Since we were able to congest the link we were also able to
demonstrate QBone Scavenger Service (QBSS) working at
2 Gbit/s rates. For the SLAC/FNAL booth itself we put
together demonstrations of the IEPM measurement projects, including RTTs to the
world (PingER) and high throughput measurements
(IEPM-BW), with animated bar charts on top of a world map which are now part of the
Atlas of Cyberspace (see http://www.cybergeography.org/atlas/geographic.html).
Bandwidth Measurement Project (IEPM-BW)
Following on from the SC2001 project we extended and ruggedized
the infrastructure put in place for the SC2001 bandwidth challenge. We have built a simple database to track the configuration of each host being monitored (OS, directory paths, options needed to execute the measurement tools, which commands work, contact people, and an alias to allow anonymization of the host for reporting purposes). In addition we have developed a simple problem tracking mechanism. At the end of the current period we have about 28 sites in 8 countries, and are making regular measurements with ping, traceroute, iperf, bbcp (both memory to memory and disk to disk) and bbftp.
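Conceptually each monitored host has a record along the lines of the hypothetical Python sketch below; the field names and values are illustrative only, not the actual schema used in the database.

    # Illustrative sketch of one host record in the IEPM-BW configuration database
    # (field names and values are hypothetical, not the actual schema).
    host_record = {
        "alias": "site-07",                   # anonymized name for public reports
        "hostname": "gateway.example.edu",    # hypothetical remote host
        "os": "Solaris 8",
        "paths": {"iperf": "/usr/local/bin/iperf", "bbcp": "~/bin/bbcp"},
        "options": {"iperf": "-w 1M -P 8 -t 10"},  # windows/streams known to work
        "working_commands": ["ping", "traceroute", "iperf", "bbcp", "bbftp"],
        "contact": "net-admin@example.edu",
        "problems": [],                       # simple problem tracking entries
    }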
We are starting to analyze the data from these measurements. Early results from the IEPM-BW project indicate:
- Reasonable estimates of throughput can be obtained with 10
second measurements; this is typically much shorter than the time required for pipechar measurements.
- In many cases it is not sufficient to simply increase the window size
to achieve high throughput; multiple parallel streams are also critical (see the bandwidth-delay-product sketch after this list).
- Careful attention to window sizes and parallel streams is necessary.
- Improvements of between 5 and 60 times have been observed for the optimum window and stream settings compared to using a single stream and the default maximum window size.
- It is also observed that there is an optimum product of window size and number of
parallel streams, beyond which performance does not increase
or may even decrease, while packet loss increases.
- Throughput can vary by an order of magnitude with time of day or day of week etc.
- Roughly speaking, one needs 1 MHz of CPU to drive 1
Mbit/s of throughput on today's CPUs and OSs.
- The bbcp file copy rates from memory to memory are about 60 ± 20% of the corresponding iperf TCP throughputs.
- File copy rates disk to disk are typically about 90% of the
memory to memory rates for rates below 60 Mbits/s, but can
vary depending on disk performance, caching etc.
Uncached disk performance typically tops out at between 4 and 8 MBytes/s.
- In some cases (e.g. SLAC to CERN for Objectivity data) compression
can improve throughput by over a factor of 2 on a reasonably high performance
host (e.g. Sun 336 MHz CPUs).
- When running high throughput applications, the RTT for other users
can be noticeably increased; e.g. for SLAC to CERN the average
RTT increases from a baseline of about 160 ms.
- The impact of high throughput applications on other applications
requiring low latency may be reduced by applying lower than best
effort priority (Scavenger Service) to the high throughput applications' traffic.
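The window and stream observations above follow from the bandwidth-delay product: the aggregate window (TCP window size times number of parallel streams) must be at least bandwidth x RTT to keep the path full, and pushing it much beyond that only builds queues and loss. The following Python sketch works through an example with assumed numbers (not a specific measured path).

    # Illustrative bandwidth-delay-product calculation (example numbers only).
    target_mbps = 400.0   # desired throughput
    rtt_ms = 160.0        # e.g. a transatlantic round trip time

    # Aggregate window needed to keep the pipe full: bandwidth * RTT.
    bdp_bytes = target_mbps * 1e6 / 8 * (rtt_ms / 1e3)
    print("bandwidth-delay product: %.1f MBytes" % (bdp_bytes / 1e6))        # 8.0

    # A single stream limited to a default 64 KByte maximum window gives only:
    single_stream_mbps = 64e3 * 8 / (rtt_ms / 1e3) / 1e6
    print("single default-window stream: %.1f Mbit/s" % single_stream_mbps)  # 3.2

    # Splitting the aggregate window over parallel streams, e.g. 8 x 1 MByte:
    streams, window_bytes = 8, 1000000
    aggregate_mbps = streams * window_bytes * 8 / (rtt_ms / 1e3) / 1e6
    print("8 x 1 MByte streams: %.0f Mbit/s" % min(target_mbps, aggregate_mbps))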
We have created a web site organized to provide easy access
to all aspects of this project, see
Presentations in the last 6 month period
- IEPM/Pinger, the Next Generation (or PingER
on Steroids), presented by Les Cottrell at the Internet2 HENP
Network Working Group meeting, Tempe AZ,
Jan 30 '02.
- IEPM/Pinger, the Next Generation (or PingER
on steroids), presented by Les Cottrell at the DoE SciDAC PI meeting,
Washington, Jan 15-17 '02.
- End-to-end high throughput application & network performance
measurements, or Son of PingER, presented
by Les Cottrell at the DARPA/DoE NGI Joint PI
meeting, Washington, Jan '02.
- Throughput network performance measurements, talk by Les Cottrell
at the BaBar collaboration meeting at SLAC, Dec '01.
- Performance Measurements, presented by Les Cottrell at the
ICFA/SCIC meeting at CERN, December 2001.
- High performance throughput in production
networks, presented by Les Cottrell at
the Inaugural Internet2 HENP Network Working Group meeting, Ann Arbor.
- High performance throughput in production networks, presented by Les Cottrell
at the Fall '01 ESCC meeting at ANL, October 2001.
- Grid Monitoring,
presented at the Internet2 HENP Network Working Group meeting, Ann Arbor.
- Applications and PingER Futures, presented by Les Cottrell at the Fall '01 ESCC
meeting at ANL, October 2001.
- Grid Monitoring,
presented at the Fall '01 ESCC meeting at ANL, October 2001.
- Applications, presented by Les Cottrell at the Virtual Internet2
Member Meeting WG on QoS, October 4, 2001.
- End-to-end monitoring, presented by Les Cottrell at the ESnet
Review meeting in Santa Fe.
- High throughput on production research and education networks,
presented by Les Cottrell at the Computing in High Energy Physics
Conference, Beijing, September 2001.
- Performance monitoring and traffic characteristics at the SLAC Internet
border, presented by Les Cottrell at the Computing in High
Energy Physics Conference, Beijing, September 2001.
- IPv6 in ESnet presented at CHEP01, Beijing,
China, September 2001.
- Summary of Networking Sessions, presented by Les Cottrell at
the Computing in High Energy Physics Conference, Beijing, September 2001.
- Network Monitoring: grid
network performance measurement, simulation and analysis, presented at the Global Grid Forum (GGF),
July 2001. Also, slides from the networking BOF.
Future Accomplishments (next 6 months)
We are in the process of improving the analysis and reporting/graphing/table tools, in particular in the areas of robustness, manageability and portability. We are also building tools to facilitate and automate the infrastructure management. This includes downloading of code, checking whether measurements are successful, gathering the remote configuration parameters (OS, cpu speed, code versions), understanding disk performance, and verifying that windows and streams are set correctly.
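A hedged sketch of the kind of automation intended is shown below: it uses ssh to pull a few configuration parameters back from each monitored host. The host names and probe commands are examples only, not the production configuration.

    # Illustrative sketch of gathering remote configuration parameters over ssh.
    # Host names and probe commands are examples, not the production setup.
    import subprocess

    HOSTS = ["node1.example.edu", "node2.example.org"]   # hypothetical hosts
    PROBES = {
        "os": "uname -sr",
        "cpu": "grep MHz /proc/cpuinfo | head -1",       # Linux hosts only
        "iperf_version": "iperf -v 2>&1 | head -1",
    }

    def remote(host, command):
        """Run a command on a remote host and return its output, or a marker."""
        result = subprocess.run(["ssh", host, command],
                                capture_output=True, text=True, timeout=30)
        return result.stdout.strip() if result.returncode == 0 else "UNAVAILABLE"

    for host in HOSTS:
        config = {name: remote(host, cmd) for name, cmd in PROBES.items()}
        print(host, config)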
We will measure the impacts of compression, add and understand gridFTP and bandwidth measurement tools such as pathrate, and compare and contrast the various measurements. We will integrate the Netflow measurements with the IEPM-BW measurements and also integrate the PingER analysis and graphical and tabular presentation tools with the IEPM-BW measurements. We also hope to tie together the measurements being made in the UK with the SLAC measurements so they appear more integrated to the user.
We will make the IEPM-BW data available to interested, friendly developers and researchers for example for validating data and algorithm applicability, and forecasting. We will document the format of the data, and assist the developers and researchers in the analysis.
We will use the IEPM-BW and Netflow measurements to make simple forecasts of the performance, and look at how to tie these into an application such as bbcp. As this work evolves we will collaborate with the NWS project to do more sophisticated forecasting. Following this we will select a representative minimum subset of tools to make measurements with, improve the reporting/graphing/table tools and make the data available via the web. We also hope to deploy a 2nd measurement host in the next 6 months.
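As a starting point, before the more sophisticated NWS collaboration, a simple forecast can be as basic as an exponentially weighted moving average over the recent throughput history; the short Python sketch below is illustrative rather than the planned production code.

    # Illustrative sketch: exponentially weighted moving average forecast of the
    # next throughput measurement (Mbit/s) from the recent history.
    def ewma_forecast(history, alpha=0.3):
        """One-step-ahead forecast; alpha weights the most recent samples."""
        forecast = history[0]
        for value in history[1:]:
            forecast = alpha * value + (1 - alpha) * forecast
        return forecast

    recent_mbps = [85.0, 92.0, 60.0, 88.0, 90.0]   # example iperf results
    print("forecast: %.1f Mbit/s" % ewma_forecast(recent_mbps))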
We are working with researchers at:
- LBNL (Guojun Jin) to validate, understand and improve pipechar.
- U Delaware (Constantinos Dovrolis) to understand,
improve and validate pathrate. When it is ready we
will plug it into the IEPM-BW infrastructure to make detailed comparisons and
determine its area of applicability. When pathload is
ready we will work with Dovrolis on it.
- CAIDA (Margaret Murray & kc claffy): we will share IEPM-BW data with them to
enable further research on the validity of various bandwidth measurement tools.
- Rice, to evaluate the INCITE multifractal path measurement
tools and integrate them into IEPM-BW to gather more extensive measurements.
- UCSB (Rich Wolski), to
provide him with data from IEPM-BW and collaborate on the research and
development needed to use the measurements for forecasting.
As can be seen from the list of research interactions
above, and from the number of collaborating sites (~30), there is
a great deal of interest in this project, since it promises to quickly
provide network AND application throughput data on a persistent basis,
with measurements made close to the application, and results made
publicly available. This is of interest to planners, researchers, Grid
application developers and users, and people with high throughput requirements
such as the HENP community. Besides
participating at SC2001 and in the SC2001 Bandwidth Challenge, we are looking
to collaborate with CERN and NIKHEF at iGrid2002 in Amsterdam
in September 2002. The IEPM project is also a formal collaborator with the
Particle Physics Data Grid (PPDG) and is located at the site of BaBar, a major HENP experiment, which greatly
assists in getting ideas of what features are important to the users, and
in providing input to the users on what to expect.
The outstanding success of the PingER
ping measurement project (over 2000 web hits per day) in providing long term
(over 7 years), continuous, publicly available measurements and analysis also
lends credibility to the prospects of the follow-on IEPM-BW project.