"Extreme Bandwidth": 

SC2002 Bandwidth Challenge Proposal

More on bulk throughput
Bulk throughput measurements | Bulk throughput simulation | Windows vs. streams | Effect of load on RTT and loss | Bulk file transfer measurements | FAST TCP Stack Measurements | QBSS measurements | SC2001 challenge | iGrid2002 demonstration | SC2002 SLAC/FNAL | SC2002 Measurements | SC02 Weather Map | SciNet

PIs & Contacts | Project Description | Detailed Technical Requirements | Photos | Log  | Poster


Principal Investigators:

Dr. R. Les Cottrell, MS 97, Stanford Linear Accelerator Center (SLAC), 2575 Sand Hill Road, Menlo Park, California 94025
Prof. Harvey Newman, Caltech, Pasadena, California
Other contacts. Shipping & contacts: CERN, StarLight, Sunnyvale

Project description

This is a joint SLAC, Caltech and CERN project with Level 3, Cisco and StarLight as sponsors. We wish to demonstrate high network throughput on trans-continental (10 Gbits/s) and trans-Atlantic (2.5 Gbits/s) links between Sunnyvale, CERN/Geneva and SC2002/Baltimore. We hope to demonstrate high (> 1 Gbit/s) disk-to-disk throughput and even higher (several Gbits/s sustained) memory-to-memory throughputs between the above sites. The initial showcase will be at SC2002 in Baltimore, November 18-21, 2002.


Our first tests were iperf TCP measurements using the Caltech FAST TCP stack, from 6 hosts at SC2002 to Sunnyvale. We achieved about 5 Gbits/s. The 3 hosts in the SLAC booth delivered about 3 Gbits/s according to the router SNMP statistics. For the formal challenge we ran iperf to Sunnyvale from 8 hosts in the Caltech booth and 2 in the SLAC booth, and to Chicago from 5 hosts in the SLAC booth. We started the challenge around 9pm; soon after, we lost power in the SLAC booth, recovered, and achieved about 11.5 Gbits/s sustained over the 2 links according to the SCinet measurements. The Sunnyvale link was carrying about 8-9 Gbits/s. The official SCinet results indicated that LBNL won the bandwidth challenge, achieving over 16 Gbits/s, and the Caltech/SLAC entry was second with a 12.44 Gbits/s peak and 10.67 Gbits/s sustained over 15 minutes. We (NIKHEF, SLAC & Caltech) also used the testbed to win the Internet2 Land Speed Record.
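The per-link rates above come from polling router interface counters. A minimal sketch of turning SNMP octet-counter deltas into a Gbit/s figure; the link names, counter deltas and 10 s polling interval below are hypothetical illustrations, not the actual SCinet data:

```python
def rate_gbps(octets_delta: int, interval_s: float) -> float:
    """Convert an SNMP ifHCInOctets delta over a polling interval to Gbit/s."""
    return octets_delta * 8 / interval_s / 1e9  # octets are bytes; 8 bits each

# Hypothetical counter deltas for the two challenge links over one 10 s poll
links = {
    "link_to_sunnyvale": 10_625_000_000,  # ~8.5 Gbit/s
    "link_to_chicago": 3_750_000_000,     # ~3.0 Gbit/s
}
aggregate = sum(rate_gbps(d, 10.0) for d in links.values())
print(f"aggregate: {aggregate:.1f} Gbit/s")  # prints: aggregate: 11.5 Gbit/s
```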

Detailed technical requirements


Routing plan, Chicago - Amsterdam network setup, DataTag Testbed, Overall including Sunnyvale, 10GE setup

Overall, Chicago (6*2.2GHz PCs,2,3,4,5,6), CERN (6*2.2GHz PCs,2,3,4,5,6), DataTag, Sunnyvale (12 PCs + 4 disk servers, addresses), SC2002 (5 PCs, addresses), SARA (4 HP 2*2.4GHz PCs,17,18,19)

A DataTag reservation form is available to DataTag users following the rules.


Router Statistics


The demonstration will utilize a loaned Level 3 OC192 POS (10 Gbits/s) circuit from StarLight in Chicago to the Level 3 gateway at 1380 Kifer Road, Sunnyvale. In Sunnyvale Cisco has loaned us a GSR 12406 router with an OC192 POS and 20 1GE interfaces. Also in Sunnyvale we will have 2 racks of colocation space loaned from Level 3. The GSR will go in one rack, and in the second rack will go 12 Linux servers plus a RAID disk farm to be provided by Caltech. At Baltimore and CERN there will be similar setups. At StarLight we will be connecting to a Juniper T640 router and then to Baltimore (10 Gbits/s) and CERN (2.5 Gbits/s). We will have the GSR on a 90 day loan, and Level 3 will leave the circuit lit for 30 days, with the possibility of negotiating for longer. We will have an estimate of the turn-up date from Level 3 on the afternoon of Monday 4th November.


  1. Traffic between CERN, StarLight and Sunnyvale: We are connected at 10 GbE to the Juniper T640 managed by Linda. By default this traffic is not routed via the 10 GbE link. In order to route it via the 10 GbE link, I need to know the address of the subnet at Sunnyvale.
  2. Traffic between CERN, StarLight and SLAC: We have two connections to Abilene: one at 1 Gbps for production and one at 10 Gbps for tests (via the Juniper T640). By default, the traffic is routed via the production peering. In order to route test traffic via the 10 GbE link, I need to know from which machines (from which subnet) you are conducting tests at SLAC. Please note that if you reach StarLight via ESnet, we cannot route the traffic via the 10 GbE link.
  3. Linda assigned the following /30 for the pt-to-pt link CHI/T640 SLAC.
    It's set up for static routing.
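A sketch of what the static setup might look like on the Cisco side of such a /30 (IOS-style; the actual /30 is not given here, so the addresses below are RFC 5737 documentation values, and the interface and test-subnet names are placeholders):

```
! Hypothetical /30 addressing for the CHI/T640 <-> SLAC pt-to-pt link
interface POS0/0
 ip address 192.0.2.2 255.255.255.252
 no shutdown
! Steer the test subnet (placeholder 198.51.100.0/24) at the T640 end of the /30
ip route 198.51.100.0 255.255.255.0 192.0.2.1
```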

Equipment for Sunnyvale

1 GSR model 12406: 10 rack units, 6 slots, draws 16 A; needs a 20 A/120 V AC circuit
OC192/POS interface + 20 * 1 GE interfaces, plus 20 850 nm multimode
1000Base-SX small form factor pluggable (SFP) GBICs. Parts list.
Weight of the 12406 chassis & power supplies is 140 lbs; a 10 port GE card is 10 lbs, the OC192 card 9.5 lbs, the route processor 6 lbs. Overall weight ~180 lbs with GBICs & cables.
OC192 POS connection: 1310 nm SC connector at Sunnyvale; OC192 POS connection: 1550 nm SC connector at StarLight

12 computer servers, ea 1 rack unit & 2.5 A, 120 V
Model: ACME Server 6012PE
Motherboard: Supermicro P4DPR-I
CPU : Intel 2.4 GHz
Memory : 1 GB PC2100 DDR ECC Registered
Hard Drive : 80GB IDE, Maxtor, 7200 RPM

4 disk servers, ea with 16 IDE drives, 4 rack units & 5.0 A; 480 W to run, 600 W to spin up
weight 90 lbs/server
Dual P4 2.4 GHz, E7500 chipset, dual gigabit: should draw approximately 200 W.
8 disks of 120 GB on each ATA RAID array; two such arrays per server.
PCIX slots 1/2/3/6 run at 33 MHz, PCIX 4 at 66 MHz and PCIX 5 at 100 MHz. Slot occupancy: slots 1 & 3 held a RAID controller, slots 4 & 5 a SysKonnect (1GE) NIC, and slots 2 & 6 were empty.
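A quick check of what the disk-farm numbers above imply, under the assumption (not stated in the original) that a 1 Gbit/s disk-to-disk stream is spread evenly over one server's 16 spindles:

```python
servers, arrays_per_server, disks_per_array, disk_gb = 4, 2, 8, 120

# Raw capacity across the farm: 4 servers * 2 arrays * 8 disks * 120 GB
total_gb = servers * arrays_per_server * disks_per_array * disk_gb
print(total_gb)  # 7680 GB

# 1 Gbit/s = 125 MB/s; one server has 2 arrays * 8 disks = 16 spindles
per_disk_mb_s = 125 / (arrays_per_server * disks_per_array)
print(per_disk_mb_s)  # 7.8125 MB/s per disk, modest for a 7200 RPM IDE drive streaming sequentially
```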
Need 2 racks (each rack has 42 units):
1 for the GSR, utilizing 10 rack units and 20 A/120 V
1 for the servers, utilizing 28 rack units and 50 A/110 V
Need punch-outs between cabinets.
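The server-rack budget can be cross-checked against the per-box figures: assuming 12 servers at 1 RU / 2.5 A each and 4 disk servers at 4 RU each, the stated 28 RU and 50 A totals imply about 5 A per disk server:

```python
servers, disk_servers = 12, 4

# Rack units: 12 * 1 RU + 4 * 4 RU
rack_units = servers * 1 + disk_servers * 4
print(rack_units)  # 28 RU, matching the stated server-rack usage

server_amps = servers * 2.5                       # 30 A for the 1U servers
disk_server_amps = (50 - server_amps) / disk_servers
print(disk_server_amps)  # 5.0 A per disk server, i.e. ~600 W at 120 V (its spin-up draw)
```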

People needing access to the Level 3 Gateway in Sunnyvale; access procedure.

Photo Scrapbook


Created October 31
Comments to