Speeding Things Up

SuperZoom
Grafter
Posts: 353
Registered: ‎17-05-2013

Speeding Things Up

With the new 80 (+? or, annoyingly, minus...) Mbps fibre broadband products and home gigabit networks becoming common now, I thought some might find this an interesting read on what slows things down. Hard to believe it was written 13 years ago! (When WareOnEarth was still a separate company.)
It seems highly relevant to all the various issues brought up here all the time.
From http://sd.wareonearth.com/~phil/issues.html
(NB: the bold emphasis is original, not mine.)
Quote
Issues Impacting Gigabit Networks:
Why don't most users experience high data rates?

Phil Dykstra
Chief Scientist
WareOnEarth Communications, Inc.

The United States has several high speed nationwide networks that support Research, Engineering, and Education. These networks should support data rates in excess of 100 Mbps, with many OC3 (155 Mbps), OC12 (622 Mbps), and even OC48 (2.4 Gbps) links. Routine end-to-end data rates approaching 100 Mbps are even a goal of the Next Generation Internet program. Yet most users today see perhaps one tenth of that goal. Why is this, and what should we do to improve the situation?
Recent measurements on the Defense Research and Engineering Network (DREN), vBNS, and Abilene networks have painted a rather grim picture of typical end-to-end performance. There appear to be many obstacles to high data rate flows. We briefly discuss some of them below, along with network design concepts that have become increasingly important as data rates have increased. Several of these concepts come directly from the estimate of TCP throughput:
bps < min(rwin/rtt, MSS/(rtt*sqrt(loss)))
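A rough sketch of that estimate in Python (assuming rwin and MSS are in bytes, rtt in seconds, and loss is a fraction; the 64 KB / 40 ms / 0.1% example values below are illustrative only):

import math

def tcp_throughput_bps(rwin_bytes, mss_bytes, rtt_s, loss):
    # Upper bound from the estimate above: bps < min(rwin/rtt, MSS/(rtt*sqrt(loss)))
    window_limit = rwin_bytes * 8 / rtt_s
    loss_limit = mss_bytes * 8 / (rtt_s * math.sqrt(loss)) if loss > 0 else float("inf")
    return min(window_limit, loss_limit)

# Example: 64 KB receive window, 1460-byte segments, 40 ms rtt, 0.1% loss
print(tcp_throughput_bps(64 * 1024, 1460, 0.040, 0.001) / 1e6, "Mbps")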


Window Size Matters
Most of our end systems still default to offering ~16KB TCP receive windows (rwin), or even 8KB. These values are fine for high-speed local area networks, and low-speed wide area networks, but they severely limit throughput on high-speed wide area networks. For example, over a coast-to-coast path (rtt = 40 msec), TCP could not exceed about 3.3 Mbps, even if it was running over a gigabit per second path.
The answer is not as simple as setting a large default rwin. Too large a default window can run your system out of memory, since every connection will use it. A large window can also be bad for local area performance, and bad for some interactive sessions or applications that don't require high data rates. What is needed are tuned applications - ones that use large windows when and where appropriate - and/or adaptive TCP stacks that adjust rwin based on actual use. Web100 is an example of one project that aims to provide an adaptive TCP for the masses. We could do more today to improve typical user performance by improving end system software than we could by installing more high speed links.
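A minimal sketch of the window arithmetic above, using the same 40 ms coast-to-coast rtt (the window sizes are just illustrative):

rtt = 0.040                                    # 40 ms coast-to-coast round trip
# Window-limited throughput: bps < rwin / rtt
for rwin_kb in (8, 16, 64):
    mbps = rwin_kb * 1024 * 8 / rtt / 1e6
    print(f"{rwin_kb} KB window -> {mbps:.1f} Mbps maximum")

# Receive window needed to fill a gigabit path (the bandwidth-delay product)
print(f"Window for 1 Gbps at 40 ms rtt: {1e9 * rtt / 8 / 1e6:.0f} MB")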

Latency Matters
"High speed" networks are really high capacity networks. The "speed" with which data moves down a T1 line, or an OC48 line, are both dictated by the speed of light in the media. Architectural changes in our high performance networks in the past few years have often resulted in increased delay or latency between pairs of sites. Examples include the relatively small number of Network Access Points (NAPs) where networks interconnect, the concentration of sites behind Gigapops, and the reliance on a fairly small number of very high capacity trunks.
On low capacity networks, latency caused by propagation delays wasn't a very critical factor. Today, however, we see numerous cases where the performance an application sees is directly impacted by the geographic path length of the network. Everything else being equal, TCP can go twice as fast if the path length (latency) is cut in half. Yet our routers today usually choose paths based on minimizing the number of hops, and following the highest capacity path, even if that means routing across the country and back. Many applications will do better over a low latency OC3 path than over a high latency OC48 path, yet we have no way to ask for such a path from the network.
On a single high performance network today, measured latencies are typically ~1.5x - 3x that expected from the speed of light in fiber. This is mostly due to taking longer than line-of-sight paths. Between different networks (via NAPs) latency is usually much worse. Some extra distance is required, based on the availability of fiber routes and interconnects, but much more attention should be given to minimizing latency as we design our network topologies and routing.
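A small sketch of the propagation-delay arithmetic, assuming light in fiber travels at roughly 200,000 km/s (about two thirds of c) and taking ~4000 km as an illustrative coast-to-coast path:

SPEED_IN_FIBER_KM_S = 200_000            # roughly two thirds the speed of light in vacuum

def min_rtt_ms(path_km):
    # Best-case fiber round-trip time for a given one-way path length
    return 2 * path_km / SPEED_IN_FIBER_KM_S * 1000

print(min_rtt_ms(4000))                              # ~40 ms for a ~4000 km path
print(min_rtt_ms(4000) * 1.5, min_rtt_ms(4000) * 3)  # the ~1.5x - 3x typically measured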

MTU Matters
Packet size can have a major impact on throughput. The dynamics of TCP are such that, for a given latency and loss rate, there is a maximum packet per second rate that can be achieved. To increase throughput, you have to increase the packet size (or reduce latency or loss, which is something the end systems can't control).
Today the world is rapidly heading to where 1500 bytes is the largest supported end-to-end packet size. This is because of the dominance of ethernet technology, and the use of 1500 bytes even at gigabit data rates. Such small packets are a major obstacle to high performance TCP flows. At one gigabit per second, this equates to over 83000 packets per second, or only 12 microseconds per packet. There is no reason to require such small packets at gigabit data rates.
In the short term, the author hopes that the 9KB "jumbo frame" proposal for gigabit ethernet becomes widespread. In the longer term, we should build high speed networks that can support much larger packet sizes. The backbone links and NAPs are particularly important, because if they restrict MTU, the end systems are helpless. It is hard to overstate the importance of this issue.
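The packet-rate arithmetic above, sketched for 1500-byte frames versus the proposed 9 KB jumbo frames:

rate_bps = 1e9                            # one gigabit per second
for mtu_bytes in (1500, 9000):
    pps = rate_bps / (mtu_bytes * 8)      # packets per second at line rate
    print(f"{mtu_bytes} byte packets: {pps:,.0f} pkts/s, {1e6 / pps:.0f} us each")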

Loss Matters
In the old days we thought 10% packet loss was acceptable. After all, TCP does error recovery, and 90% isn't bad, right? Today, many service level agreements (SLAs) target a loss of 1% or less (often averaged over 24 hours). For gigabit data rates, however, loss has to be extraordinarily low!

For example, to achieve a gigabit per second with TCP on a coast-to-coast path (rtt = 40 msec), with 1500 byte packets, the loss rate cannot exceed 8.5x10^-8! If the loss rate were even 0.1% (far better than most SLAs), TCP would be limited to just over 9 Mbps. [Note that large packet sizes help. If packets were n times larger, the same throughput could be achieved with n^2 times as much packet loss.]
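Both figures follow from the loss-limited term of the estimate quoted at the top; a sketch assuming a 1460-byte MSS inside the 1500-byte packets:

import math

mss_bits = 1460 * 8                      # payload of a 1500-byte Ethernet packet
rtt = 0.040                              # 40 ms coast-to-coast

# Maximum loss rate that still permits 1 Gbps: loss < (MSS / (rtt * bps))^2
print((mss_bits / (rtt * 1e9)) ** 2)                       # ~8.5e-8

# Throughput ceiling at 0.1% loss: bps < MSS / (rtt * sqrt(loss))
print(mss_bits / (rtt * math.sqrt(0.001)) / 1e6, "Mbps")   # ~9.2 Mbps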
Gigabit networks thus need to be nearly lossless. We believe that one of the reasons such low loss isn't being observed is that most of today's routers and switches have insufficient buffering for such high bandwidth-delay products. Few of the high performance paths we have studied show stable queueing regions. Also, the concept of depending on loss to indicate congestion to TCP may not apply very well at extreme bandwidth-delay products.

Bugs Are Everywhere (especially duplex ones)
Recent measurements over numerous high performance paths have turned up a wealth of bad behavior, much of which is still unexplained. Slow forwarders, insufficient buffering, strange rate shaping behavior, duplex problems, packet reordering, and low level link or hardware errors are all playing a part. Very few paths can sustain the packet per second data rates that you would expect from the underlying hardware and links.
At least one problem deserves special mention. The failure of ethernet auto-negotiation, and the resulting duplex problems, are perhaps the single biggest performance bug on the internet today. The results of this bug only show up under load, which makes them difficult to notice. Low rate pings show almost no loss, but high data rate loads result in dramatic loss. Our tests, and similar reports from others, indicate that this bug has reached epidemic proportions.

Why Debugging Is Hard
Network test platforms and programs are usually only available at the edges of the network. When end-to-end tests are performed, they span many different devices and links. The result is a messy convolution of all of the behaviors along the path. When bad behavior is observed, it is sometimes nearly impossible to figure out where in the path the problem lies.
Performance problem debugging would be vastly easier if routers provided some kind of high performance testing service. Routers are designed to forward packets well, but are usually very bad at answering traffic directed to them. This means that tests can't be directed at a router in order to debug a path problem hop-by-hop. If you target a router in the middle of the path, you get such poor performance that other problems you are looking for are usually masked. The participation of routers in something like the proposed IP Measurement Protocol (IPMP), and/or a high speed echo service, would greatly aid in debugging.

Security Isn't Helping
The ever-increasing security threat and level of abuse on the internet have led to numerous measures that decrease performance and make performance measurement and debugging more difficult. ICMP is often blocked, making ping and/or traceroute impossible. Deliberate rate limits are sometimes imposed on ICMP or other traffic as a measure to defend against denial of service attacks. Sometimes only a limited number of TCP and UDP port numbers are left unblocked, which can prohibit measurement applications that use other ports. And an increasing number of firewall and Network Address Translation (NAT) boxes are in the path, creating a loss of end-to-end transparency.
The performance impact of all of these measures has not been well studied. What exactly is the slowdown of different routers given certain kinds of filter lists? How fast do various firewall and NAT devices forward packets under different traffic situations? Can you bypass these security mechanisms for authenticated applications? The use of ICMP for measurements should probably be phased out, but acceptable alternatives to ICMP need to be created.

Measurements Are Easy, Analysis Is Hard
We are doing well today at collecting basic measurements. Projects like AMP, Surveyor, PingER, and RIPE are gathering a wealth of delay, loss, and route information. What we aren't very good at yet is learning things from all of that data. Major progress could be made from detailed automated analysis of the data: detection of anomalies, correlation of events, high level abstraction of causes. There are several projects working in this direction, but we are just beginning.
4 REPLIES
ejs
Aspiring Hero
Posts: 5,442
Thanks: 631
Fixes: 25
Registered: ‎10-06-2010

Re: Speeding Things Up

I would like to disagree about it being highly relevant to all the various issues here. Most of the issues here aren't about poor performance of a 330Mbps FTTP connection or gigabit LANs.
I think modern operating systems do have adaptive TCP/IP networking stacks.
SuperZoom
Grafter
Posts: 353
Registered: ‎17-05-2013

Re: Speeding Things Up

Quote from: ejs
I think modern operating systems do have adaptive TCP/IP networking stacks.

Indeed they do! It's quite interesting to see what's changed and what hasn't in 13 years, isn't it? Ethernet auto-negotiation has improved somewhat as well, I believe.

Does your ADSL line not get aggregated onto PlusNet's gigabit links, then? I didn't realise they offered an ADSL-all-the-way service. Glad to hear the REIN isn't causing any packet loss, though. One has to be thankful for small mercies.
ejs
Aspiring Hero
Posts: 5,442
Thanks: 631
Fixes: 25
Registered: ‎10-06-2010

Re: Speeding Things Up

I forgot about everyone moaning about the performance of Plusnet's network because I haven't noticed as much of it recently. Or I have been concentrating on other issues.
The quoted text about packet loss is highlighting what a 0.1% rate of packet loss would do to the useful speed of a gigabit connection.
See also http://en.wikipedia.org/wiki/Bufferbloat
SuperZoom
Grafter
Posts: 353
Registered: ‎17-05-2013

Re: Speeding Things Up

Yeah, tell me about it. Running core networks and diagnosing performance problems in them is certainly a complex business these days, one which can impact hundreds of thousands of people without them even realising how. Indeed, sometimes subscribers end up chasing their tails over the few small things they can measure and control, because the rest is just too amorphous. But PlusNet seem to be really on top of things on the network front, fortunately. I was incredibly impressed with what they managed to achieve with the DNS servers in only a few weeks. It's all come a long way from simple medium wave, that's for sure  Cool