Why 100Mbps Does Not Mean 100Mbps Transfer Rates
You will not always see 100Mbps upload/download speeds even with a 100Mbps port. Much of the slowdown occurs because latency grows with packet travel distance, and latency has a large detrimental effect on large file transfers. For smaller files, such as those that make up a typical not-too-graphical web page, the impact is minor. Without getting too technical: file transfer protocols that run over TCP require the recipient to confirm receipt of the data, and the sender may only keep a limited amount of unconfirmed data (the TCP window) in flight at once. This is one reason file transfers over longer distances are slower, roughly in proportion to the increase in round-trip response time.
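The relationship above can be put into numbers: a single TCP connection can never move more than one window's worth of data per round trip. A small sketch (the function name and millisecond units are my own choices, not from the article):

```python
def max_tcp_throughput_bps(window_bytes, rtt_ms):
    """Ceiling on single-connection TCP throughput: one window per round trip.
    Integer math keeps the result exact: bytes * 8 bits / (rtt in seconds)."""
    return window_bytes * 8 * 1000 // rtt_ms

# Classic 64 KB window over a 100 ms cross-country round trip:
print(max_tcp_throughput_bps(65535, 100))  # 5242800 -- about 5.2 Mbps
```

So even on a 100Mbps port, an untuned connection with a default 64 KB window and 100 ms of latency tops out around 5 Mbps.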
See http://www.internetworkexpert.org/2008/12/19/how-to-calculate-tcp-throughput-for-long-distance-links/ for a more in-depth discussion on this.
Most download accelerators increase transfer rates by simply employing multiple TCP pipes that dump into the same file. This doesn't solve the TCP window size problem, but it does take advantage of what the uplink is capable of handling. Most modern browsers open multiple connections automatically, so download accelerators are no longer really a necessity.
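The core of that trick is just carving the file into byte ranges, one per connection (real accelerators then fetch each range with an HTTP Range request). A hypothetical helper illustrating the split, not taken from any particular tool:

```python
def split_ranges(total_bytes, parts):
    """Split [0, total_bytes) into contiguous inclusive byte ranges,
    one per TCP connection, distributing any remainder evenly."""
    base, remainder = divmod(total_bytes, parts)
    ranges, start = [], 0
    for i in range(parts):
        size = base + (1 if i < remainder else 0)
        ranges.append((start, start + size - 1))
        start += size
    return ranges

print(split_ranges(100, 4))  # [(0, 24), (25, 49), (50, 74), (75, 99)]
```

Each connection then has its own TCP window, so four connections can sustain roughly four times the single-connection ceiling.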
You may still wish to optimize your per-connection TCP transfer rates, though. To do so, determine your optimal TCP window size based on the expected latency of your most bandwidth-intense client base (see the calculator at the above link). Then adjust your TCP/IP stack accordingly, as described below:
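The "optimal window" here is the bandwidth-delay product: the amount of data that must be in flight to keep the link full. A minimal sketch of the same arithmetic the linked calculator performs (function name and units are mine):

```python
def optimal_window_bytes(bandwidth_bps, rtt_ms):
    """Bandwidth-delay product: bits per second * round trip in seconds,
    converted to bytes. This is the window needed to keep the pipe full."""
    return bandwidth_bps * rtt_ms // (8 * 1000)

# To fill a 100 Mbps link for clients 80 ms away:
print(optimal_window_bytes(100_000_000, 80))  # 1000000 -- about 1 MB
```

Note that 1 MB is far larger than the pre-scaling 64 KB maximum, which is why window scaling (below) matters on long fat links.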
To tweak Windows 2008 TCP Window Scaling, please refer to the following:
Note that Windows 2008 doesn’t allow you to tweak settings like 2003 did. You can make the system adjust it “more aggressively,” but you can’t hard code numbers in.
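For reference, the "more aggressively" knob on Windows 2008 is the receive-window auto-tuning level, set via netsh (shown as an illustration; pick the level appropriate for your environment):

```shell
:: Show the current global TCP settings
netsh interface tcp show global

:: Let Windows grow the receive window aggressively (default is "normal";
:: other levels: disabled, highlyrestricted, restricted, experimental)
netsh interface tcp set global autotuninglevel=normal
```

There is no supported way to pin an exact window size, which is the contrast with 2003 the note above describes.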
To tweak Windows 2003 TCP Window Scaling, please refer to the following:
You may wish to also try: http://www.speedguide.net/tcpoptimizer.php
To tweak Linux TCP Window Scaling, please refer to the following:
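On Linux these are sysctl settings. A sample /etc/sysctl.conf fragment; the 16 MB ceilings are example values only and should be sized to your own bandwidth-delay product:

```shell
# Enable TCP window scaling (on by default on modern kernels)
net.ipv4.tcp_window_scaling = 1
# Raise the maximum socket buffer sizes (bytes)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# min / default / max auto-tuned TCP receive and send buffers (bytes)
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```

Apply the changes with `sysctl -p` (or reboot).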
Note that many other factors come into play for bandwidth calculation. In a hosting environment, your server must compete with other servers in the data center to reach the core routers and from there, must concentrate in various nodes and exchanges to reach a packet’s destination. Along the way, routers must prioritize and queue packets for transmission. We can check the health of this process by performing a traceroute between “slow links.” Network congestion at any one of these nodes can reduce overall transfer rate. On either one of the endpoints, disk I/O, or other system stress may be a bottleneck.
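A quick way to run that health check from a shell (example.com stands in for your slow endpoint; mtr must be installed separately on most distributions):

```shell
# Per-hop path and latency, numeric output (no reverse DNS lookups)
traceroute -n example.com

# mtr combines traceroute and ping: 100 probes, report mode,
# showing per-hop packet loss and latency averages
mtr -r -c 100 example.com
```

A hop with rising loss or latency that persists through all later hops is the likely congestion point.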
All in all, a 100Mbps, or even a 1000Mbps, uplink will not provide transfer rates greater than what the network fabric between the source and destination can handle, nor greater than what the server and client can negotiate within the TCP pipe.
#18 Feb 2010 – Edited for spelling/grammar.
#24 Mar 2010 – Updated link for 2008 tuning.