More on bulk throughput
Anthony Anthony, NIKHEF Amsterdam, The Netherlands, <firstname.lastname@example.org>
Dr. R. Les Cottrell, MS 97, Stanford Linear Accelerator Center (SLAC), 2575 Sand Hill Road, Menlo Park, California 94025, <email@example.com>
This project is designed to demonstrate current data-transfer capabilities to several sites with high-performance links, worldwide. In a sense, the site at iGrid2002 acts like a HENP tier 0 or tier 1 site (an accelerator or major computation site) distributing copies of the raw data to multiple replica sites. The demonstration runs over real, live production networks with no attempt to manually limit other traffic, and the results are displayed in real time. Researchers investigate and demonstrate issues with TCP implementations on high-bandwidth, long-latency links, and create a repository of trace files of a few interesting flows. These traces, valuable to projects such as DataTAG, help explain the behavior of transport protocols over various production networks.
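The difficulty with TCP on high-bandwidth, long-latency links comes down to the bandwidth-delay product: the sender's window must cover a full round trip's worth of data to keep the pipe full. A minimal sketch of that arithmetic follows; the 1 Gbit/s rate and 170 ms RTT are illustrative values, not measurements from this demo:

```python
# Back-of-the-envelope TCP window sizing: to fill a path, the window
# must be at least bandwidth * RTT (the bandwidth-delay product).
# The numbers below are illustrative, not measured demo values.

def required_window_bytes(bandwidth_bits_per_s, rtt_s):
    """Bandwidth-delay product in bytes."""
    return bandwidth_bits_per_s * rtt_s / 8

# Example: a 1 Gbit/s path with a 170 ms (roughly transatlantic) RTT
window = required_window_bytes(1e9, 0.170)
print(f"{window / 1e6:.2f} MB")  # 21.25 MB -- far above default TCP windows
```

This is why the demo tunes window sizes and uses multiple parallel streams: each stream contributes its own window toward the bandwidth-delay product.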
The physics part of the demo will use press cuttings illustrating: a BaBar SLAC aerial view, the PEP-II beamline, moving the coil from Italy by C-5A aircraft, single-event displays, the S.F. Chronicle front-page article on a BaBar discovery, an SSRL aerial photo, the polymerase molecule from the front page of Science, the first web page in the U.S. from 1991, a KRON news clip on the BaBar database being the largest known database in the world, and the growth and current size of the BaBar database.
To support these high throughput requirements, we are measuring TCP and file-copy throughputs from 9 sites in 4 countries (including iGrid2002) to over 35 hosts in 8 countries. Some throughputs are seen here. We used the standard TCP stack with regular MTUs. We optimized the window sizes and numbers of streams by running iperf TCP for 10 seconds from iGrid2002 to each remote host, over a range of windows and streams. From the resulting graphs we selected, for the minimum number of streams, the window size that gave about 80% of the maximum throughput. We also record the routes; the ping round-trip times (RTTs), losses, and derived throughputs, among other metrics; and the Ethernet interface transmit and receive Mbytes/second. In addition we provide animations of the RTT, loss, and derived throughput measured by ping and the throughput measured by iperf, from both iGrid2002 and SLAC, and a Java applet showing ping RTTs to the world in real time from your computer (see the mock-up in case you cannot load the applet onto your computer).
Our first demo was scheduled for 9am on Tuesday morning, September 24, 2002. The formal demonstration took about 10 minutes and was videotaped. We used one screen to introduce the physics needs, then moved to screenshots showing the historical and future data and bandwidth requirements. This led to the need to manage and understand how to replicate data to multiple sites, and to the building of a high throughput measurement infrastructure to help address these needs. We illustrated this with the PingWorld applet, the Available Bandwidth Estimation (ABE) servlet, the animated iperf world map, the traceroute topology for iGrid2002/IEPM, and plots of ifconfig throughputs as a function of time as we ran the bandwidth tester in sequential mode on one host (keeshond) and in flood mode on a 2nd host (stier). We also ran a few iperfs from a 3rd host (haan) to a few high performance hosts. The aggregate throughputs measured at the router were over 2 Gbits/s.
The second demo slot was on Thursday, September 26, 2002. We had 5 hosts: keeshond, stier, haan, hp3 and hp4, all running Linux and all with 2 * 1 GE interfaces. We did not have time to get the 2nd GE interfaces working on the hosts. Keeshond was set to make sequential tests (iperf, bbcp and bbftp), except that instead of calling the script that drives the tests (run-bw-tests) from a cron job, it was called from a continuous loop. Each of the remaining 4 machines ran iperf in TCP mode simultaneously to a different set of about 6 hosts. The hosts in each set were determined from an earlier snapshot of the throughputs. We measured the load generated by noting the receive (RX) and transmit (TX) bytes reported by ifconfig for the appropriate Gigabit Ethernet interfaces every 2 seconds and calculating the differences. This was accomplished and displayed using a Java servlet. With this setup we were able to achieve the following throughputs:
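The load measurement above — sampling interface byte counters every 2 seconds and differencing them — was done at the demo with a Java servlet reading ifconfig; here is a minimal Python sketch of the same calculation. The counter values are invented, and the sysfs path in the comment is how one would read the counters on a modern Linux host, not what the servlet did:

```python
# Compute Mbits/s from periodic samples of an interface's RX/TX byte
# counters, sampled at a fixed interval (2 seconds in the demo).
# The sample values below are invented; in practice each would be read
# from the interface counters (e.g. ifconfig output, or on modern
# Linux, /sys/class/net/eth0/statistics/rx_bytes).

def rates_mbits(samples, interval_s=2.0):
    """samples: list of (rx_bytes, tx_bytes) taken interval_s apart.
    Returns a list of (rx_Mbits/s, tx_Mbits/s), one per interval."""
    rates = []
    for (rx0, tx0), (rx1, tx1) in zip(samples, samples[1:]):
        rates.append(((rx1 - rx0) * 8 / interval_s / 1e6,
                      (tx1 - tx0) * 8 / interval_s / 1e6))
    return rates

samples = [(0, 0), (50_000_000, 250_000_000), (100_000_000, 500_000_000)]
print(rates_mbits(samples))  # [(200.0, 1000.0), (200.0, 1000.0)]
```

Summing the per-interface TX rates across the 5 hosts gives the kind of aggregate figure quoted for the router (over 2 Gbits/s in the first demo).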
Some improvements suggested for the demos were: to display the configurations etc. of the hosts being accessed in flood mode; and to provide a real-time time-series plot of the aggregate (all hosts) throughput, independent of the central network's MRTG router displays.
Lessons learnt & value of iGrid2002 as staging event for SC2002.
We will be using the Internet2, ESnet, JAnet, GARR, Renater, SURFnet and Japanese WANs, and the CERN-STARTAP link.
The work has been sponsored by:
Offsite resources will be at the sites listed in the table below. Each site will have one or more Unix hosts running iperf and bbftp servers.
Ayumu Kubota, APAN-JP <firstname.lastname@example.org>
Linda Winkler, ANL, US, <email@example.com> + William E. Allcock [firstname.lastname@example.org]
Dantong Yu, BNL, Long Island, US, <email@example.com>
Harvey Newman, Caltech, Pasadena, US, <firstname.lastname@example.org> + Julian J. Bunn [email@example.com] + Suresh Singh <firstname.lastname@example.org>
Olivier Martin, CERN, Geneva, CH, <email@example.com> + Sylvain Ravot [Sylvain.Ravot@cern.ch]
Robin Tasker, Daresbury Lab, Liverpool, UK, <R.Tasker@dl.ac.uk> + Kummer, P. S (Paul) [P.S.Kummer@dl.ac.uk]
Jim Leighton, ESnet, Berkeley, US, <JFLeighton@lbl.gov>
Ruth Pordes, FNAL, Chicago, US, <firstname.lastname@example.org> + Frank Nagy <email@example.com> + Phil DeMar <firstname.lastname@example.org>
Andy Germain, NASA/GSFC, US, <email@example.com> + George Uhl [firstname.lastname@example.org]
Jerome Bernier, IN2P3, Lyon, FR, <email@example.com> + Dominique Boutigny [firstname.lastname@example.org]
Fabrizio Coccetti, INFN, Milan, IT, <email@example.com>
Emanuele Leonardi, INFN, Rome, IT, <Emanuele.Leonardi@roma1.infn.it>
Guy Almes, Internet 2, US, <firstname.lastname@example.org> + Matt Zekauskas <email@example.com> + Stanislav Shalunov <firstname.lastname@example.org> + Ben Teitelbaum <email@example.com>
Chip Watson, JLab, Newport News, US, <firstname.lastname@example.org> + Robert Lukens <email@example.com>
Yukio Karita, KEK, Tokyo, JP, <firstname.lastname@example.org>, Teiji Nakamura <email@example.com>
Wu-chun Feng, LANL, Los Alamos, US, <firstname.lastname@example.org>, Mike Fisk <email@example.com>
Bob Jacobsen, LBL, Berkeley, US, <Bob_Jacobsen@lbl.gov>, Shane Canon <Canon@nersc.gov>
Richard Hughes-Jones, Manchester University, UK, <firstname.lastname@example.org>
Anthony Anthony, NIKHEF, Netherlands, <email@example.com>
Tom Dunigan, ORNL, Oak Ridge, US, <firstname.lastname@example.org> + Bill Wing <email@example.com>
Richard Baraniuk, Rice University, <firstname.lastname@example.org>, Rolf Riedi [email@example.com]
Takashi Ichihara, RIKEN, Japan, [firstname.lastname@example.org]
John Gordon, Rutherford Lab, Oxford, UK, <J.C.Gordon@RL.AC.UK> + Adye, TJ (Tim) [T.J.Adye@RL.AC.UK]
Reagan Moore, SDSC, San Diego, US, <moore@SDSC.EDU> + Kevin Walsh [kwalsh@SDSC.EDU] + Arcot Rajasekar <sekar@SDSC.EDU>
Warren Matthews, SLAC, Menlo Park, US <email@example.com> + Paola Grosso <firstname.lastname@example.org> + Gary Buhrmaster <email@example.com> + Connie Logg <firstname.lastname@example.org> + Andy Hanushevsky <email@example.com> + Jerrod Williams <firstname.lastname@example.org> + Steffen Luitz <email@example.com>
Warren Matthews, Stanford University, Palo Alto, US + Milt Mallory <firstname.lastname@example.org>
William Smith, Sun Microsystems, <William.Smith@sun.com> + Rocky Snyder <email@example.com>
Andrew Daviel, TRIUMF, Vancouver, CA, <firstname.lastname@example.org>
Yee-Ting Li, University College London, UK, <email@example.com> + Peter Clarke <firstname.lastname@example.org>
Constantinos Dovrolis, University of Delaware, US, <email@example.com>
Paul Avery, University of Florida, Gainesville, US, <firstname.lastname@example.org> + Gregory Goddard [email@example.com]
Thomas Hacker, University of Michigan, US, <firstname.lastname@example.org>
Joe Izen, University of Texas at Dallas, US, <email@example.com>
Miron Livny, University of Wisconsin, Madison, US, <firstname.lastname@example.org> + Paul Barford <email@example.com> + Dave Plonka <firstname.lastname@example.org>
Created August 17, 2002; last update August 17, 2002.
Comments to email@example.com