by bstrong on 11/26/10, 2:33 PM with 67 comments
by sh1mmer on 11/26/10, 8:10 PM
This topic was covered really well by Amazon's John Rauser at Velocity Conf: http://velocityconf.com/velocity2010/public/schedule/detail/...
To address the points in the conclusion:
1. Fast is good. Fast is also profit.
2. The net-neutrality argument here is totally bogus; anyone who knows how can raise their slow-start window today if they choose to. This doesn't really have anything to do with traffic shaping.
3. Google has been using its usual data-driven approach to support its proposal to the IETF. We need a lot more of that. It's great. The only way we can really find out how the Internet in general will react to changes like this is to test them in some real-world environment.
4. I agree, slow-start is a good algorithm with a very valid purpose. The real problem here is that the magic numbers powering it haven't been kept in line with changes in connectivity technology and the growth in consumer/commercial bandwidth (the rough comparison below gives a sense of the gap).
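To put rough numbers on point 4, here's a quick Python sketch (my own illustration, not from the article; the 1460-byte MSS is an assumption, typical for Ethernet paths) comparing the RFC 3390 initial window most stacks still use with the ~10-segment window Google is arguing for:

    # RFC 3390 initial congestion window vs. a 10-segment window.
    # The 1460-byte MSS is an assumption (typical for Ethernet paths).

    def rfc3390_initial_window(mss):
        """Initial window in bytes per RFC 3390: min(4*MSS, max(2*MSS, 4380))."""
        return min(4 * mss, max(2 * mss, 4380))

    MSS = 1460
    iw_classic = rfc3390_initial_window(MSS)    # 4380 bytes, i.e. 3 segments
    iw_proposed = 10 * MSS                      # 14600 bytes

    print("RFC 3390 initial window: %d bytes (~%d segments)" % (iw_classic, iw_classic // MSS))
    print("Proposed ~10-segment window: %d bytes" % iw_proposed)
    # Anything that doesn't fit in the initial window costs at least one more round trip.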
by ig1 on 11/26/10, 3:34 PM
While it's not that big a deal if your users are local to you, if they're on a different continent each extra round trip can easily add 100ms.
I used to do TCP/IP tuning for low-latency trading applications (sometimes you need to use a third-party data protocol, so you can't just use UDP), and this sort of stuff used to bite us all the time.
If latency is important, it's worth sitting down with tcpdump and seeing how your website loads (i.e. how many packets, how many ACKs, etc.), as there are often ways of tweaking connection settings (either via socket options or kernel settings) that can result in higher performance.
(Try turning off tcp_slow_start_after_idle if you're on a recent Linux kernel; this won't give you a bigger initial window, but it means that once your window size has grown it won't get reset straight away if there's a gap between data sends. A rough sketch of these knobs follows below.)
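For what it's worth, here's a minimal Python sketch of the kind of knobs I mean; the host, port and buffer size are placeholders, not recommendations, and the sysctl write needs root:

    # Per-connection tweaks via socket options (placeholder host/values).
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Disable Nagle so small writes go out immediately (latency over throughput).
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    # Request a larger send buffer; the kernel may clamp it to its own limits.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 256 * 1024)
    s.connect(("example.com", 80))

    # Kernel-wide: stop the congestion window being reset after idle periods
    # (the tcp_slow_start_after_idle knob mentioned above; requires root).
    with open("/proc/sys/net/ipv4/tcp_slow_start_after_idle", "w") as f:
        f.write("0")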
by Pahalial on 11/26/10, 4:24 PM
No, no it's not. This has nothing to do with network neutrality; it's a purely server-side change/fix. Not only that, they're benefiting users without requiring anyone else to change while they wait for standards bodies to catch up. This is a similar scenario to HTML5 video, and distinctly more clear-cut than e.g. '802.11n draft' wireless routers in my opinion.
by ajb on 11/26/10, 5:12 PM
by arturadib on 11/26/10, 4:35 PM
Unless you're serving static content only (in which case you're hardly creating an "app"), the milliseconds you might save with TCP-level optimizations are peanuts compared to the multiple seconds your database queries and computations will take.
by necro on 11/26/10, 5:24 PM
by matthiasl on 11/26/10, 9:06 PM
I tried repeating the experiment. I'm in Sweden, so, annoyingly, a request to google.com redirects to google.se. If I send my request directly to google.se, I get a 9k response in 130ms, and the initial window looks like 4 to me, i.e. I can't see anything unexpected happening.
I then tried repeating it on Amazon EC2. I can't see anything unexpected there either, but the RTT from EC2 to Google is only about 3ms, which means I can't assume that the ACKs don't get there in time.
(The original article's author looks at how long the initial 3-way handshake takes and then assumes that all packets take that long, or, probably, half as long; i.e. he assumes that ACKs sent less than one RTT before a packet from Google arrives can't have reached Google in time to affect that packet.)
Can anyone else reproduce the experiment?
Other ideas: repeat from Sweden, but send a cookie so that I really get google.com; or repeat from EC2, but make sure I never send any ACKs after the three-way handshake. I'm not curious enough to do the latter; it's a fair bit of work.
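For anyone who wants to try reproducing it, here's a rough Python sketch of one way to approximate the measurement (userspace only: unlike tcpdump it can't see packet boundaries or suppress ACKs, so it just estimates how many bytes arrive in the first burst; the host name is a placeholder):

    import socket, time

    host = "www.google.se"   # placeholder: whichever server you want to probe

    # Resolve first so the handshake timing isn't polluted by DNS lookup time.
    addr = socket.getaddrinfo(host, 80, socket.AF_INET, socket.SOCK_STREAM)[0][4]
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    t0 = time.monotonic()
    s.connect(addr)
    rtt = time.monotonic() - t0   # the three-way handshake takes about one RTT

    s.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n")

    first_byte_time = None
    first_burst = 0
    while True:
        chunk = s.recv(65536)
        if not chunk:
            break
        now = time.monotonic()
        if first_byte_time is None:
            first_byte_time = now
        # Data arriving less than ~1 RTT after the first data byte can't have been
        # triggered by our ACKs of that data, so it must come from the initial window.
        if now - first_byte_time < 0.9 * rtt:
            first_burst += len(chunk)

    print("RTT ~%.1f ms, bytes in first burst: %d" % (rtt * 1000, first_burst))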
by sdizdar on 11/26/10, 9:11 PM
by epi0Bauqu on 11/26/10, 5:27 PM
by jhrobert on 11/26/10, 8:00 PM
From my own observations, the first 30 KB of my pages seem to be transferred faster than the next 30 KB. It's not until much more is sent that the average throughput eventually gets back up to what it was during the first 30 KB.
This is definitely weird.
Note: I am using Ubuntu on EC2 hosted VMs.
As a result, as much as I can, I try to keep the size of my content below 30 KB, using multiple concurrent HTTP requests (a rough sketch of that approach follows below).
I believe this is related to "slow-start" being pessimistic.
Unfortunately, "slow-start" is not configurable on Linux and I don't feel confident enough to go with some kernel level patch...
Any clue?
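For what it's worth, a minimal Python sketch of the split-and-fetch-concurrently workaround mentioned above (the URLs are placeholders):

    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    # Placeholder URLs: the idea is several small resources instead of one big one.
    urls = [
        "http://example.com/part1.json",
        "http://example.com/part2.json",
        "http://example.com/part3.json",
    ]

    def fetch(url):
        with urlopen(url) as resp:
            return resp.read()

    with ThreadPoolExecutor(max_workers=len(urls)) as pool:
        parts = list(pool.map(fetch, urls))
    # Each response stays small, so no single transfer spends long climbing
    # out of slow-start, and the fetches ramp up in parallel.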
by vinutheraj on 11/26/10, 6:21 PM
by bbuffone on 11/28/10, 5:03 AM
http://www.yottaa.com/url/4be004065df8ca5a730001fb/reachabil...
by tlrobinson on 11/27/10, 5:17 AM
Isn't part of that just the network latency? Based on the timestamps for the SYN and SYN-ACK it looks like a RTT of about 16ms.
EDIT: Nevermind.
Request was sent by the client at 00.017437
Request ACK was received by the client at 00.037139
RTT of about 20ms, so the request was received by the server around 00.027
First packet of the response was received by the client at 00.067151
67 - 27 = 40. Assuming a one-way latency of 10ms, it took about 30ms to generate the response.
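The same arithmetic written out as a quick Python check (the symmetric 10ms one-way latency is the same assumption as above):

    # Timestamps from the trace above, in seconds.
    t_request_sent   = 0.017437
    t_request_acked  = 0.037139
    t_first_response = 0.067151

    rtt = t_request_acked - t_request_sent            # ~0.020 s
    one_way = rtt / 2                                  # assumed symmetric, ~0.010 s
    t_server_got_request = t_request_sent + one_way    # ~0.027 s
    server_time = t_first_response - one_way - t_server_got_request

    print("server took ~%.0f ms to generate the response" % (server_time * 1000))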
by fleitz on 11/27/10, 9:19 AM
http://osdir.com/ml/mozilla.devel.netlib/2003-01/msg00018.ht...
by samueladam on 11/27/10, 12:15 PM
http://sites.google.com/a/chromium.org/dev/spdy/An_Argument_...
by bengtan on 11/27/10, 7:25 AM
ip route change default via x.x.x.x dev eth0 initcwnd 6
but please test thoroughly if trying this.
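(For context, initcwnd here is measured in segments, so with a typical 1460-byte MSS an initcwnd of 6 lets the server send roughly 8-9 KB before it has to wait for the first ACKs.)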
by bemmu on 11/27/10, 3:12 AM
by ergo98 on 11/26/10, 3:16 PM
by iepaul on 11/26/10, 3:29 PM
by phillijw on 11/26/10, 7:10 PM
by d0m on 11/26/10, 3:26 PM