
SC2002: Bandwidth to the World

Page Contents
Primary Contact | Project Description | Detailed Technical Requirements | Site Contacts  


Primary contact

Dr. R. Les Cottrell, MS 97, Stanford Linear Accelerator Center (SLAC), 2575 Sand Hill Road, Menlo Park, California 94025, <cottrell@slac.stanford.edu>

Project description

The unprecedented avalanche of data already being generated by and for new and future High Energy and Nuclear Physics (HENP) experiments at labs such as SLAC, FNAL, KEK and CERN is demanding new strategies for how the data is collected, shared, analyzed and presented. For example, the SLAC BaBar experiment and JLab are each already collecting over a TByte/day, and BaBar expects to increase this by a factor of two in the coming year. The Fermilab CDF and D0 experiments are ramping up to collect similar amounts of data, and the CERN LHC experiment expects to collect over ten million TBytes. The strategy being adopted to store and analyze this unprecedented amount of data is the coordinated deployment of Grid technologies such as those being developed for the Particle Physics Data Grid and the Grid Physics Network. It is anticipated that these technologies will be deployed at hundreds of institutes that will be able to search out and analyze information from an interconnected worldwide grid of tens of thousands of computers and storage devices. This in turn will require the ability to sustain, over long periods, the transfer of large amounts of data between collaborating sites with relatively low latency. The Bandwidth to the World project is designed to demonstrate the current data transfer capabilities to several sites worldwide with high performance links. In a sense the site at SC2002 is acting like a HENP tier 0 or tier 1 site (an accelerator or major computation site) distributing copies of the raw data to multiple replica sites.

The main part of the demonstration will be to show the achievable throughput of various applications from SC2002 to over 35 hosts in 8 countries. For this we will use real, live production networks, with no attempt to manually limit other traffic and no attempt to use MTUs greater than 1500 bytes. We will also demonstrate the effectiveness of the QBone Scavenger Service (QBSS) in managing competing traffic flows, and its effect on the response time of lower volume interactive traffic on high performance links.

On the SC2002 floor, in the SLAC/FNAL booth, we will have 5-10 high performance Linux hosts, each with two Gbit/s network interfaces connected to a Cisco 65xx series Catalyst switch. We plan for the switch to have a 10 Gbps link and two 1 Gbps links to SCinet, plus built-in router capability. The hosts will run various high throughput applications, including iperf, bbcp (see publication and man pages), a secure peer-to-peer high performance copy program supporting large windows and multiple streams, and bbftp, a secure FTP program that supports large windows and multiple streams. These programs will be called from scripts that automate running multiple copies, gather performance statistics, report in real time and record the results. We will also have scripts to gather SNMP data from the booth router. The over 35 remote sites are connected by various networks, including Internet 2, ESnet, JAnet, GARR, and Renater. We have contacted the sites and identified hosts that are suitable for the demonstration; the requirements for the hosts at the remote sites are fairly limited.
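As a rough illustration of the kind of automation script described above (not the actual scripts that will be used at SC2002), the following Python sketch runs an iperf client against a list of remote hosts and logs the reported output. The host names, window size, stream count and test duration are placeholder assumptions.

#!/usr/bin/env python
# Hypothetical sketch: drive iperf against several remote sites and log the results.
# Host names, window size (-w), stream count (-P) and test length (-t) are placeholders.
import subprocess
import time

REMOTE_HOSTS = ["iperf.example-site1.net", "iperf.example-site2.net"]  # placeholders
WINDOW = "1M"     # TCP window size passed to iperf -w
STREAMS = 4       # parallel TCP streams passed to iperf -P
DURATION = 30     # seconds per test

def run_iperf(host):
    """Run one iperf client test and return its raw text output."""
    cmd = ["iperf", "-c", host, "-w", WINDOW, "-P", str(STREAMS),
           "-t", str(DURATION), "-f", "m"]  # report in Mbits/sec
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout

if __name__ == "__main__":
    with open("iperf_results.log", "a") as log:
        for host in REMOTE_HOSTS:
            stamp = time.strftime("%Y-%m-%d %H:%M:%S")
            output = run_iperf(host)
            log.write("%s %s\n%s\n" % (stamp, host, output))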

The information gathered will be recorded in files and analyzed with various tools, including Excel. A Universal Time History (UTH) utility will display real-time plots of the throughput to each remote site, and of the aggregate throughput (see mock-up), on one monitor in the SLAC/FNAL booth. We will also record the routes, ping Round Trip Times (RTTs), losses, and derived throughputs, among other metrics. We will display animations of the RTT, loss and derived throughput measured by ping, and of the throughput measured by iperf, from SC2002 (also see screen-shot), iGrid2002 and SLAC. One monitor will display a Java applet showing ping RTTs to the world in real time from your computer (see the mock-up in case you cannot load the applet onto your computer). Another monitor will show available bandwidth estimates using packet pair techniques.
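A derived throughput of this kind is commonly computed from the ping RTT and loss using the Mathis et al. approximation, throughput <= MSS / (RTT * sqrt(loss)); treating that as the formula used here, and assuming an MSS of 1460 bytes (a 1500 byte MTU), a minimal sketch is:

# Hypothetical sketch of a "derived throughput" calculation from ping RTT and loss,
# using the Mathis et al. approximation  BW ~ MSS / (RTT * sqrt(loss)).
# The MSS value and the example RTT/loss numbers are assumptions, not SC2002 data.
import math

MSS_BYTES = 1460  # assumed TCP maximum segment size for a 1500 byte MTU

def derived_throughput_mbps(rtt_ms, loss_fraction):
    """Return the Mathis-style TCP throughput estimate in Mbits/s."""
    if loss_fraction <= 0:
        return float("inf")  # the approximation is undefined at zero loss
    rtt_s = rtt_ms / 1000.0
    bw_bytes_per_s = MSS_BYTES / (rtt_s * math.sqrt(loss_fraction))
    return bw_bytes_per_s * 8 / 1e6

print(derived_throughput_mbps(150.0, 0.001))  # ~2.5 Mbits/s for 150 ms RTT, 0.1% loss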

Longer term (non-real-time) analysis will be carried out to summarize and report on the demonstration, and the results will be made publicly available via the web. The web URL for this project will be: http://www-iepm.slac.stanford.edu/monitoring/bulk/sc2002/.

For a second demo at SC2002 we will illustrate the effects of the QBone Scavenger Service (QBSS), i.e. marking our traffic so that it is forwarded with LOWER priority than other traffic. To do this we will use 3 Linux hosts with GE interfaces connected via a switch to a 1GE interface to the SC2002 show floor network. The idea is that two of the hosts, which together can saturate the 1GE interface, will run QBSS-marked traffic. The third host will alternate between on and off, and will run Best Effort (i.e. unmarked) traffic. When the third host runs it ought to be able to drive 600-900 Mbits/s, and since its traffic has higher priority than that of the other two hosts, they should back off. We will display the switch port utilization in real time to show the effects.
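To illustrate how QBSS marking works (this is a sketch, not the actual demo configuration), QBSS traffic carries the DSCP value 8 (binary 001000) in the IP header, which a sender can set from user space. The destination host and port below are placeholders.

# Hypothetical sketch: mark a TCP sender's packets with the QBSS DSCP (8, binary 001000),
# so that QBSS-aware routers forward them at lower priority than unmarked (Best Effort) traffic.
# The destination host and port are placeholders.
import socket

QBSS_DSCP = 8               # QBone Scavenger Service codepoint
TOS_BYTE = QBSS_DSCP << 2   # DSCP occupies the top six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)
sock.connect(("iperf.example-site1.net", 5001))   # placeholder iperf server
sock.sendall(b"x" * 65536)                        # bulk data now carries the QBSS marking
sock.close()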

Detailed technical requirements

We will be using Internet 2, ESnet, JAnet, GARR, Renater, SURFnet, Japanese WANs and the CERN-STARTAP link. SLAC will have an OC12 via ESnet to Sunnyvale and an OC12 via Stanford to CalREN2/Internet 2. The SC2002 SLAC/FNAL booth will have a 10 Gbps connection and a 1 Gbps connection to SCinet.

Offsite resources will be at the sites listed below. Each site will have one or more Unix hosts running iperf and bbftp servers. We are already measuring throughputs to many of the sites from SLAC, and we also measure routes, ping Round Trip Times (RTTs), losses, and derived throughputs, among other metrics.
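As a simple illustration of how the suitability of a remote host might be checked ahead of the demonstration (this is not part of the project's actual tooling), the sketch below attempts a TCP connection to iperf's default port, 5001, on each candidate host. The host names are placeholders.

# Hypothetical sketch: check that candidate remote hosts accept connections on
# iperf's default port (5001). Host names below are placeholders, not the real SC2002 sites.
import socket

CANDIDATE_HOSTS = ["iperf.example-site1.net", "iperf.example-site2.net"]
IPERF_PORT = 5001

def reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in CANDIDATE_HOSTS:
    status = "up" if reachable(host, IPERF_PORT) else "unreachable"
    print("%-30s %s" % (host, status))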

Miscellaneous

Installation procedures | Host configurations | Host status

Contacts

APAN-JP, Japan
ANL, Illinois, USA
BNL, New York, USA
Caltech, California, USA
CERN, Geneva, Switzerland
CESnet, Prague, CZ
Daresbury Laboratory (CCLRC), Liverpool, UK
FNAL, Illinois, USA
GSFC, Maryland, USA
IN2P3, Lyon, France
INFN, Milan, Italy
Internet 2
JLab, Virginia, USA
KEK, Tokyo, Japan
LANL, New Mexico, USA
LBL, California, USA
Manchester University, UK
NERSC, California, USA
NIKHEF, Amsterdam, Netherlands
ORNL, Tennessee, USA
RAL (CCLRC), Oxford, UK
Rice University, Texas, USA
RIKEN, Japan
INFN, Rome, Italy
SDSC, California, USA
SLAC, California, USA
Stanford, California, USA
TRIUMF, Vancouver, Canada
UCL (University College London), UK
U Florida, USA
U Delaware, USA
UT Dallas, Texas, USA
U Illinois, USA
U Michigan, USA
U Wisconsin, USA

We will be located in the SLAC/FNAL booth.

We require only IP-based communications.

No specialized equipment is needed on the show floor for the demonstration.

Contact information for all collaborators

The following are the contacts at the various remote sites.

Ayumu Kubota, APAN-JP <kubota@kddilabs.jp>
Linda Winkler, ANL, US, <winkler@mcs.anl.gov> + William E. Allcock [allcock@mcs.anl.gov]
Dantong Yu, BNL, Long Island, US, <dtyu@rcf.rhic.bnl.gov>
Harvey Newman, Caltech, Pasadena, US, <newman@hep.caltech.edu> + Julian J. Bunn [julian@cacr.caltech.edu] + Suresh Singh <suresh@cacr.caltech.edu>
Olivier Martin, CERN, Geneva, CH, <omartin@dxcoms.cern.ch> + Sylvain Ravot [Sylvain.Ravot@cern.ch]
Robin Tasker, Daresbury Lab, Liverpool, UK, <R.Tasker@dl.ac.uk> + Kummer, P. S (Paul) [P.S.Kummer@dl.ac.uk]
Jim Leighton, ESnet, Berkeley, US, <JFLeighton@lbl.gov>
Ruth Pordes, FNAL, Chicago, US, <ruth@fnal.gov> + Frank Nagy <nagy@fnal.gov> + Phil DeMar <demar@fnal.gov>
Andy Germain, NASA/GSFC, US, <andyg@rattler-f.gsfc.nasa.gov> + George Uhl [uhl@rattler-f.gsfc.nasa.gov]
Jerome Bernier, IN2P3, Lyon, FR, <bernier@cc.in2p3.fr> + Dominique Boutigny [boutigny@in2p3.fr]
Fabrizio Coccetti, INFN, Milan, IT, <Fabrizio Coccetti [f@fc8.net]>
Emanuele Leonardi, INFN, Rome, IT,  <Emanuele.Leonardi@roma1.infn.it>
Guy Almes, Internet 2, US, <almes@internet2.edu> + Matt Zekauskas <matt@advanced.org> + Stanislav Shalunov <shalunov@internet2.edu> + Ben Teitelbaum <ben@internet2.edu>
Chip Watson, JLab, Newport News, US, <chip.watson@jlab.gov> + Robert Lukens <rlukens@jlab.org>
Yukio Karita, KEK, Tokyo, JP, <karita@nwgvax.kek.jp>, Teiji Nakamura <teiji@nwgsun2.kek.jp>
Wu-chun Feng, LANL, Los Alamos, US, <feng@lanl.gov>, Mike Fisk <mfisk@lanl.gov>
Bob Jacobsen, LBL, Berkeley, US, <Bob_Jacobsen@lbl.gov>, Shane Canon <Canon@nersc.gov>
Richard Hughes-Jones, Manchester University, UK, <rich@a3.ph.man.ac.uk>
Anthony Anthony, NIKHEF, Netherlands, <anthony@nikhef.nl>
Tom Dunigan, ORNL, Oak Ridge, US, <thd@ornl.gov> + Bill Wing <wrw@email.cind.ornl.gov>
Richard Baraniuk, Rice University, <richb@rice.edu>, Rolf Riedi [riedi@rice.edu]
Takashi Ichihara, RIKEN, Japan, [ichihara@rarfaxp.riken.go.jp]
John Gordon, Rutherford Lab, Oxford, UK, <J.C.Gordon@RL.AC.UK> + Adye, TJ (Tim) [T.J.Adye@RL.AC.UK]
Reagan Moore, SDSC, San Diego, US, <moore@SDSC.EDU> + Kevin Walsh [kwalsh@SDSC.EDU] + Arcot Rajasekar <sekar@SDSC.EDU>
Warren Matthews, SLAC, Menlo Park, US <matthews@slac.stanford.edu> + Paola Grosso <grosso@slac.stanford.edu> + Gary Buhrmaster <buhrmaster@slac.stanford.edu> + Connie Logg <cal@slac.stanford.edu> + Andy Hanushevsky <abh@slac.stanford.edu> + Jerrod Williams <jerrodw@slac.stanford.edu> + Steffen Luitz <luitz@slac.stanford.edu>
Warren Matthews, Stanford University, Palo Alto, US + Milt Mallory <milt@stanford.edu>
William Smith, Sun Microsystems [William.Smith@sun.com] + Rocky Snyder <rocky.snyder@sun.com>
Andrew Daviel, TRIUMF, Vancouver, CA, <andrew@andrew.triumf.ca>
Yee-Ting Li, University College London, UK,  <ytl@hep.ucl.ac.uk> + Peter Clarke <clarke@hep.ucl.ac.uk>
Constantinos Dovrolis, University of Delaware, US,  <dovrolis@mail.eecis.udel.edu>
Paul Avery, University of Florida, Gainesville, US,  <avery@phys.ufl.edu> + Gregory Goddard [gregg@nersp.nerdc.ufl.edu]
Thomas Hacker, University of Michigan, US, <hacker@umich.edu>
Joe Izen, University of Texas at Dallas, US, <joe@utdallas.edu>
Miron Livny, University of Wisconsin, Madison, US, <miron@cs.wisc.edu> + Paul Barford <pb@cs.wisc.edu> + Dave Plonka <plonka@doit.wisc.edu>


Created August 6, 2002; last update August 8, 2002.
Comments to iepm-l@slac.stanford.edu