Retrieving PingER Historical Data


Retrieving Data

Since its inception in 1995, the PingER project has amassed more than 16 years of historical ping data from several tens of monitoring sites around the world to many hundreds of remote sites in over 160 countries. PingER provides some canned tabular and graphical reports. If further insight is needed, the PingER data can be accessed and used by collaborators and others in the industry to research network-related theories. When access to archived data is needed for a user's research, there are a few options available for selecting the data to download. When using this data, please acknowledge the IEPM-PingER project.

PLEASE NOTE: All times associated with the raw data downloaded from PingER are measured with respect to GMT.

  1. You can visit our online distribution of data, selecting various options in the form based on your needs, to view the data. From the web page, the data is available for download in tab-separated-value (.tsv) format, which makes it easy to import into Excel. Another option via the Pingtable is to write a script that issues an HTTP GET with the appropriate QUERY_STRING in the URL and work with the data locally:
    The example above retrieves the hourly data from SLAC to all the sites in the world for Feb 22, 2004. Stepping through each of the last X days will get you all the data for the request. It comes back in tab-separated format, which can be imported into Excel or your favorite spreadsheet application.
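    A minimal sketch of scripting such a request is shown below. Since the actual Pingtable URL and query parameter names are not preserved on this page, PINGTABLE_URL and the query keys here are placeholders, not the real PingER interface; the TSV handling, however, works for any tab-separated response.

    ```python
    import csv
    import io
    from urllib.parse import urlencode

    # Placeholder: substitute the real Pingtable URL and parameter
    # names from the PingER web form (these are assumptions).
    PINGTABLE_URL = "https://example.org/pingtable"

    def build_query(day, month, year, src="SLAC"):
        """Build a GET URL with a QUERY_STRING for one day's hourly data."""
        params = {"from": src, "day": day, "month": month, "year": year}
        return PINGTABLE_URL + "?" + urlencode(params)

    def parse_tsv(text):
        """Parse a tab-separated response body into a list of rows."""
        return list(csv.reader(io.StringIO(text), delimiter="\t"))

    url = build_query(22, 2, 2004)   # Feb 22, 2004, as in the example above
    rows = parse_tsv("src\tdst\th0\nSLAC\tCERN\t31.4\n")
    ```

    Looping `build_query` over a range of dates and fetching each URL (e.g. with urllib) steps through all the days of interest.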

  2. If you are looking for a large amount of collective data, for example from all monitoring sites for all time monitored based on one or more specific metrics (see fig. 1), please contact us by email. Please describe what data you need (metric, ping packet size (100 or 1000 Bytes), whether the remote (monitored) nodes should be aggregated by site or independent) and what your intended use is. We will get back to you and probably prepare a zipped tar file of the data and make it available by anonymous FTP. This tarball contains zipped files, one for each day, for each metric, for each packet size (100 or 1000 Bytes). If no metric is specified, only a tarball of the 'average_rtt' for 100 Byte pings for each host (not aggregated by site) will be returned. The tarball will be stored for a few days in a read-only FTP directory from which you can simply download the data. A typical tarball for one metric is over a GByte.

    Once the tarball is untarred and the files unzipped, you will find that the individual files are space-separated and begin with a header line giving the hour numbers. Each following line has the structure:
    source_host_name destination_host_name metric_for_1st_hour ... metric_for_nth_hour (see fig. 2).

    Figure 1: Metrics

    average_rtt/                   out_of_order_packets/
    conditional_loss_probability/  packet_loss/
    duplicate_packets/             throughput/
    ipdv/                          unpredictability/
    iqr/                           unreachability/
    minimum_packet_loss/           zero_packet_loss_frequency/
    minimum_rtt/                   maximum_rtt/
    MOS/                           alpha/
    See the Tutorial for more on the metrics.

    Figure 2: Example of format

    For example, the first 2 lines of the file packet_loss-100-by-site-2004-02-24.txt:
    0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
    0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
    (i.e. the losses were always 0.000; no packets were lost out of 10.)
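    The per-day metric files described above can be read with a short script. This is a sketch assuming the layout just described (a header line of hour numbers, then one line per host pair with source, destination, and the hourly metric values); the dictionary keys and the shortened sample data are illustrative only.

    ```python
    def parse_metric_file(lines):
        """Parse a space-separated PingER daily metric file.

        Expects a header line of hour numbers followed by lines of the
        form: source_host destination_host value_per_hour ...
        """
        it = iter(lines)
        hours = next(it).split()              # header, e.g. "0 1 2 ... 23"
        records = []
        for line in it:
            fields = line.split()
            src, dst = fields[0], fields[1]
            values = [float(v) for v in fields[2:2 + len(hours)]]
            records.append({"src": src, "dst": dst, "values": values})
        return hours, records

    # Shortened 4-hour sample for illustration; real files have 24 hours.
    hours, recs = parse_metric_file([
        "0 1 2 3",
        "slac.stanford.edu cern.ch 0.000 0.000 10.000 0.000",
    ])
    ```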

  3. You can also choose the monitoring and remote host pair of interest and click on the ? to find out more about the pair. From that page you can click on Add CSV file to get the min/avg/max RTTs for the specified dates and times for the specified monitor-remote site pair, plus a time series plot.

  4. If you need specific raw data values (as gathered from the monitoring hosts) seen from SLAC, you can use an HTTP GET with a URL in which you enter the begin and end dates for which you are searching for data, and write a script to get the specific data needed. NOTE: You will have to replace dd, mm, and yyyy respectively to reflect the begin/end dates you are requesting data for.

    The contents of each line returned for the request are as follows:

    source_host_name source_host_addr destination_host_name destination_host_addr size unix_epoch_time sent rcvd min avg max seq_rcv(i=1,rcvd) rtt_rcv(i=1,rcvd) 
    For example:
    100 1077235276 10 10 28.682 32.445 35.427 1 2 3 4 5 6 7 8 9 10 31.4 32.4 33.3 35.1 35.4 34.9 32.6 28.6 31.2 29.1
    There are more details on the Monitoring Data Format. The raw data is saved locally at SLAC in files of the form /nfs/slac/g/net/pinger/pingerdata/hep/data/<node>/ping-<yyyy-mm-dd>.txt.gz, for example /nfs/slac/g/net/pinger/pingerdata/hep/data/
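    A sketch of parsing one returned raw-data line follows. It assumes the field order given above (size, epoch time, sent, rcvd, min/avg/max, then rcvd sequence numbers and rcvd RTTs); the example line on this page starts directly at the size field, so any leading host name/address fields are skipped via n_leading, an assumption about the trimmed example.

    ```python
    def parse_raw_line(line, n_leading=0):
        """Parse one PingER raw-data line into a dictionary.

        n_leading skips any leading host name/address fields; the
        example line shown above has none, so n_leading=0.
        """
        f = line.split()[n_leading:]
        size, epoch, sent, rcvd = int(f[0]), int(f[1]), int(f[2]), int(f[3])
        mn, avg, mx = float(f[4]), float(f[5]), float(f[6])
        seqs = [int(x) for x in f[7:7 + rcvd]]            # seq_rcv(i=1,rcvd)
        rtts = [float(x) for x in f[7 + rcvd:7 + 2 * rcvd]]  # rtt_rcv(i=1,rcvd)
        return {"size": size, "time": epoch, "sent": sent, "rcvd": rcvd,
                "min": mn, "avg": avg, "max": mx, "seq": seqs, "rtt": rtts}

    rec = parse_raw_line(
        "100 1077235276 10 10 28.682 32.445 35.427 "
        "1 2 3 4 5 6 7 8 9 10 "
        "31.4 32.4 33.3 35.1 35.4 34.9 32.6 28.6 31.2 29.1"
    )
    ```

    Note that sent and rcvd determine how many sequence numbers and RTT values follow, so lines with packet loss simply carry fewer trailing fields.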

Revised 12 September 2014.
Comments to