Effect of QBSS on QBSS, Best Effort and Priority TCP traffic
To see the effect of various router configurations on simultaneous
file copies made with the QBSS, Best Effort (BE) and Priority TOS code points, we
set up and used a QBSS test network.
On the test network we installed a machine called evagore, which copied files to and from pharlap.
Evagore was a single-processor 333MHz Sun SPARCv9 with a 100Mbps
Ethernet interface, running Solaris 5.7. It was connected to the test network, which had a
10Mbps connection to the SLAC public network and thence to pharlap.
Pharlap was a Sun E4500 with 6 x 336MHz processors and a 1Gbps Ethernet interface,
running Solaris 5.8. We used
bbcp to make the file copies. To eliminate disk I/O issues we wrote
the file to /dev/null on pharlap, and to allow the copies to run for an
extended period without requiring a large file, we read the data.
We analyzed and graphed the data using Excel.
Each measurement consisted of the following steps (see the sketch after the list):
- starting a bbcp copy from evagore to pharlap with the QBSS code point
(we also tested the reverse direction, i.e. initiating
from pharlap a copy from pharlap to evagore, with similar results);
- about 30 seconds later, starting a bbcp copy with the BE code point;
- about 30 seconds later again, starting a bbcp copy with the Priority code point;
- after about 30 seconds, stopping the Priority copy;
- after another 30 seconds, stopping the BE copy;
- after 30 more seconds, stopping the QBSS copy and ending the measurement.
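The staggered schedule above can be illustrated with a short driver script. This is only a sketch: the start_copy/stop_copy helpers and the bbcp arguments are hypothetical placeholders, not the actual commands used in the measurements.

```python
import subprocess
import time

def start_copy(label, command):
    # Launch a copy tagged with the given TOS code point; the command (and the
    # mechanism used to apply the code point) is a placeholder here.
    print(f"starting {label} copy")
    return subprocess.Popen(command)

def stop_copy(label, proc):
    print(f"stopping {label} copy")
    proc.terminate()
    proc.wait()

# Start the copies 30 seconds apart, then stop them in the reverse order,
# again 30 seconds apart.
qbss = start_copy("QBSS", ["bbcp", "..."])      # placeholder arguments
time.sleep(30)
be = start_copy("BE", ["bbcp", "..."])
time.sleep(30)
prio = start_copy("Priority", ["bbcp", "..."])
time.sleep(30)
stop_copy("Priority", prio)
time.sleep(30)
stop_copy("BE", be)
time.sleep(30)
stop_copy("QBSS", qbss)
```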
We initially set the QBSS traffic to have 1% of the 10Mbps bandwidth, and the
Priority traffic to have 70%. All bbcp copies used an 8KByte window
and 1 stream, and reported the incremental throughput
for each 3 second interval. The results are shown to the right.
It is seen that the QBSS traffic starts out using over 90% of the
10Mbps bandwidth. When the BE copy starts, the BE gets about 87%
of the 10Mbps bandwidth and QBSS drops to about 5%.
When the priority copy starts it gets about 70%, BE gets about 19% and
QBSS about 1%. As the priority and later the BE copies are stopped
the reverse happens. Each time the transitions appear to happen very quickly,
with no long learning process (we also made measurements with 1 second
incremental throughput reports, which indicate the
time to transition from one throughput state to another is
< 1-2 seconds).
We repeated the measurement with 10 streams of BE copies, to represent 10
competing TCP users. The other parameters were the same as the
previous measurement. In this case we also added up all the throughputs
and plotted them. The behavior is almost identical to the single
stream version. The total throughput stays pretty
constant regardless of the competition.
There are positive and negative spikes in the total
throughput. This is probably because we were using three separate processes
for the file copies, and they were not synchronized.
Again the transitions occur quickly.
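The aggregation of the per-copy throughputs was done in Excel; an equivalent scripted sketch is given below. The file names, and the assumption that each log holds one throughput value per 3 second interval (with zeros when that copy was not running), are illustrative only.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-interval throughput series (Mbits/s), one value per
# 3 second reporting interval, zero where a given copy was not running.
qbss = np.loadtxt("qbss_throughput.txt")       # placeholder file names
be = np.loadtxt("be_throughput.txt")
prio = np.loadtxt("priority_throughput.txt")

total = qbss + be + prio                       # aggregate throughput per interval
t = 3.0 * np.arange(len(total))                # seconds since the start of the run

for series, label in [(qbss, "QBSS"), (be, "BE"), (prio, "Priority"), (total, "Total")]:
    plt.plot(t, series, label=label)
plt.xlabel("time (s)")
plt.ylabel("throughput (Mbits/s)")
plt.legend()
plt.show()
```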
We changed the router percentages assigned to the Priority and QBSS
copies to 30% and 1% respectively and remeasured with 10 streams of BE
copies. In this case we plotted the throughputs on a log scale to
better show the utilization by the QBSS copy when both other
copies were running. With this graphical representation it is
unfortunately tricky to pick out the Priority utilization, since it almost
overlaps the QBSS throughput. It is seen that the QBSS stream
backs off to 20KBytes/sec and 10KBytes/sec when the BE and Priority
copies are successively introduced. The BE throughput drops to about 66%
when the Priority traffic starts with about 30% utilization. In all cases
the transitions occur quickly.
Effects on ping response time
We used the Cisco 6509/SUP2/MSFC2 based test bed
with GE connected hosts and a 100Mbps bottleneck between the hosts. We used iperf to send
bulk TCP traffic with a 128KByte window and 1 stream
from antonia to pharlap with different TOS bit settings
(using the iperf -S option with values of 32 for QBSS and 40 for Priority traffic). At
the same time we used the Linux ping with the -Q option to set the TOS bits,
and with the default packet size (56 bytes) and spacing (1 second). Each
test consisted of starting the iperf transfer and then running ping for 100 seconds
(while iperf was still running). We recorded the min/avg/max/standard deviation
of the ping RTTs and the iperf throughput achieved.
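For reference, the values passed to iperf -S and ping -Q are the full 8 bit TOS byte, and the DSCP is its upper 6 bits, so the settings used correspond to DSCP 8 (CS1, the scavenger code point) for QBSS and DSCP 10 (AF11) for Priority. A small sketch of the conversion:

```python
def tos_to_dscp(tos: int) -> int:
    # The DSCP occupies the top 6 bits of the 8 bit TOS byte.
    return tos >> 2

for label, tos in [("QBSS", 32), ("Priority", 40)]:
    print(f"{label:<8} TOS=0x{tos:02x}  DSCP={tos_to_dscp(tos)}")
# QBSS     TOS=0x20  DSCP=8   (CS1)
# Priority TOS=0x28  DSCP=10  (AF11)
```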
We also verified by using tcpdump on antonia that the iperf and ping applications
were setting the TOS bits in the IP headers. Further study with tcpdump on pharlap
showed that the TOS bits were cleared before the packets reached pharlap. Thus pharlap did not
set any TOS bits in the ping echo responses or the iperf acks. This
clearing of the TOS bits may be because packets on 100Mbps links are
not trusted, so the resetting occurs at the 100Mbps interface on the 2nd 6509 in the
path from antonia to pharlap.
However, since the bulk of the traffic (over 100 to 1) is from antonia
to pharlap, we do not believe the lack of TOS bits on the reverse path
will have a major effect on the current results.
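The TOS verification was done with tcpdump; a similar check can also be scripted, for example with scapy. The interface name and host filter below are illustrative placeholders.

```python
from scapy.all import IP, sniff

def show_tos(pkt):
    # Print the TOS byte of each captured IP packet; a nonzero value confirms
    # that the application really set the bits on the wire.
    if IP in pkt:
        print(f"{pkt[IP].src} -> {pkt[IP].dst}  tos=0x{pkt[IP].tos:02x}")

# Capture a few packets to or from pharlap on the interface facing the test network.
sniff(iface="eth0", filter="host pharlap", prn=show_tos, count=20)
```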
The iperf throughputs achieved were about 93Mbits/s. The following results were obtained:
- If iperf was not running then the average RTT was about 0.32 +- 0.06 msec regardless
of whether the TOS bits were unset (Best Effort) or set to QBSS or Priority.
- If the iperf traffic was set for QBSS and ping was set for Best
Effort, then the ping average RTT was about 0.55 +- 0.12 msec, i.e. well within a factor
of 2 of ping running with no iperf traffic.
A similar result was
obtained if the iperf traffic was set for QBSS and the ping traffic was set for Priority.
- If the iperf traffic had the same priority as the ping traffic
(except when both were set to Priority), then the
ping RTT was about 5 +- 1.5 msec.
- There was an anomaly in that if the iperf traffic was set to Best Effort
and the pings were set for Priority, then the ping RTT was about 5 +- 1.5 msec. This
is probably because the Best Effort and Priority traffic share the same queue, and
it indicates the limitations of the QBSS configuration we were able to implement.
- If both the iperf and ping traffic were set for Priority then, since the
Priority traffic was
limited by ACLs on the switch interfaces to < 30%, the ping RTTs were very
variable, depending on whether iperf traffic was currently flowing or being limited.
This is shown in
the plot to the right. Looking at the autocorrelation function for the RTTs,
there appears to be some periodicity in the longer RTTs occurring with about a 6 second
interval. A frequency histogram of the RTTs indicates that there is a peak containing
about 75% of the RTTs from about 247 to 397 usec, after which there is an almost flat tail
out to about 2.40 msec which (including the peak) contains 99% of the data
(a sketch of this analysis is given below).
A representative time series of the ping RTTs for ping and iperf both having QoS set to
Priority is seen below.
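The autocorrelation and histogram analysis of the RTTs can be reproduced along the following lines. This is a sketch only; the input file name is a placeholder for a ping RTT time series (one sample per second, in msec).

```python
import numpy as np
import matplotlib.pyplot as plt

rtt = np.loadtxt("ping_rtts.txt")            # RTTs in msec, one sample per second

# Autocorrelation of the mean-subtracted series; a peak near lag 6 would
# correspond to the ~6 second periodicity seen in the longer RTTs.
x = rtt - rtt.mean()
acf = np.correlate(x, x, mode="full")[len(x) - 1:]
acf = acf / acf[0]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(np.arange(len(acf)), acf)
ax1.set_xlabel("lag (s)")
ax1.set_ylabel("autocorrelation")

ax2.hist(rtt, bins=50)                       # frequency histogram of the RTTs
ax2.set_xlabel("RTT (msec)")
ax2.set_ylabel("count")
plt.show()
```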
Revised August 9, 2001.