from Hacker News

How a little bit of TCP knowledge is essential

by dar8919 on 11/21/15, 5:50 PM with 41 comments

  • by Animats on 11/21/15, 10:50 PM

    That still irks me. The real problem is not tinygram prevention. It's ACK delays, and that stupid fixed timer. They both went into TCP around the same time, but independently. I did tinygram prevention (the Nagle algorithm) and Berkeley did delayed ACKs, both in the early 1980s. The combination of the two is awful. Unfortunately, by the time I found out about delayed ACKs, I had changed jobs, was out of networking, and was doing a product for Autodesk on non-networked PCs.

    Delayed ACKs are a win only in certain circumstances - mostly character echo for Telnet. (When Berkeley installed delayed ACKs, they were doing a lot of Telnet from terminal concentrators in student terminal rooms to host VAX machines doing the work. For that particular situation, it made sense.) The delayed ACK timer is scaled to expected human response time. A delayed ACK is a bet that the other end will reply to what you just sent almost immediately. Except for some RPC protocols, this is unlikely. So the ACK delay mechanism loses the bet, over and over, delaying the ACK, waiting for a packet on which the ACK can be piggybacked, not getting it, and then sending the ACK, delayed. There's nothing in TCP to automatically turn this off. However, Linux (and I think Windows) now have a TCP_QUICKACK socket option. Turn that on unless you have a very unusual application.
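
    A minimal sketch of turning that option on, assuming a connected Linux socket descriptor sockfd (TCP_QUICKACK is Linux-specific, and the kernel may clear the flag internally, so applications often re-set it after each read):

        /* Enable immediate ACKs on a connected TCP socket (Linux). */
        #include <netinet/in.h>
        #include <netinet/tcp.h>
        #include <sys/socket.h>

        int enable_quickack(int sockfd)
        {
            int one = 1;
            /* The kernel may clear this flag; re-set it after reads if needed. */
            return setsockopt(sockfd, IPPROTO_TCP, TCP_QUICKACK,
                              &one, sizeof(one));
        }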

    Turning on TCP_NODELAY has similar effects, but can make throughput worse for small writes. If you write a loop which sends just a few bytes (worst case, one byte) to a socket with "write()", and the Nagle algorithm is disabled with TCP_NODELAY, each write becomes one IP packet. This increases traffic by a factor of 40, with IP and TCP headers for each payload. Tinygram prevention won't let you send a second packet if you have one in flight, unless you have enough data to fill the maximum sized packet. It accumulates bytes for one round trip time, then sends everything in the queue. That's almost always what you want. If you have TCP_NODELAY set, you need to be much more aware of buffering and flushing issues.
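
    One way to handle that buffering (a sketch, not from the comment itself) is to gather the pieces of a logical message into a single gather-write with writev(), so that with TCP_NODELAY set each message still goes out as one segment rather than one packet per write() call:

        /* Send a header and body in one syscall instead of two small writes. */
        #include <sys/types.h>
        #include <sys/uio.h>

        ssize_t send_message(int sockfd,
                             const void *hdr, size_t hdr_len,
                             const void *body, size_t body_len)
        {
            struct iovec iov[2] = {
                { (void *)hdr,  hdr_len  },
                { (void *)body, body_len },
            };
            /* One gather-write: with TCP_NODELAY this goes out as a single
               segment (if it fits in the MSS) instead of two tinygrams. */
            return writev(sockfd, iov, 2);
        }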

    None of this matters for bulk one-way transfers, which is most HTTP today. (I've never looked at the impact of this on the SSL handshake, where it might matter.)

    Short version: set TCP_QUICKACK. If you find a case where that makes things worse, let me know.

    John Nagle

  • by barrkel on 11/21/15, 10:53 PM

    This is a general problem of leaky abstractions. If you're a top-down thinker, some day you're going to have a bad time and a hard time figuring out why.

    OTOH bottom-up thinkers take much longer to become productive in an environment with novel abstractions.

    Swings and roundabouts. Top-down is probably better in a startup context - it's more conducive to broad and shallow generalists. Bottom-up is great when you have a breakdown of abstraction through the stack, or when you need a new solution that's never been done quite the same way before.

  • by jfb on 11/21/15, 10:43 PM

    I really enjoy reading Julia's blog. Not only does she have a real, infectious enthusiasm for learning; not only is the blog well written; but I also often learn a lot. Kudos.

  • by p00b on 11/21/15, 10:07 PM

    John Rauser of Pinterest gave a wonderful talk about TCP and the lower bound of Internet latency recently that has a lot in common with what's discussed in the article here. Worth a watch, I think, if you enjoyed the blog post.

    https://www.youtube.com/watch?v=C8orjQLacTo

  • by PeterWhittaker on 11/21/15, 9:52 PM

    Summary: If you learn even a little, you realize that each packet might be separately acknowledged before the next one is sent. In particular, note this quote: Net::HTTP doesn’t set TCP_NODELAY on the TCP socket it opens, so it waits for acknowledgement of the first packet before sending the second.

    By setting TCP_NODELAY, they removed a series of 40ms delays, vastly improving the performance of their web app.
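
    For reference, the underlying fix is a one-line socket option; here is a minimal sketch in C (Net::HTTP itself is Ruby, so this only illustrates what the fix does at the socket level):

        /* Disable Nagle's algorithm so small writes are sent immediately. */
        #include <netinet/in.h>
        #include <netinet/tcp.h>
        #include <sys/socket.h>

        int disable_nagle(int sockfd)
        {
            int one = 1;
            return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                              &one, sizeof(one));
        }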

  • by colanderman on 11/21/15, 10:41 PM

    You don't need to disable Nagle entirely; just flash TCP_NODELAY on, then off, immediately after sending a packet whose reply you will block on. This way you still get the benefit Nagle brings of coalescing small writes, without the downside.

    (Alternatively, turn Nagle off entirely and buffer writes manually or using MSG_MORE or TCP_CORK.)
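
    A sketch of both approaches, assuming a connected socket fd (TCP_CORK is Linux-specific, and the flash trick relies on Linux flushing queued output when TCP_NODELAY is set):

        #include <netinet/in.h>
        #include <netinet/tcp.h>
        #include <sys/socket.h>

        /* "Flash" Nagle off and back on: setting TCP_NODELAY flushes any
           queued partial segment; re-enabling Nagle restores coalescing. */
        void flush_pending(int fd)
        {
            int on = 1, off = 0;
            setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
            setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &off, sizeof(off));
        }

        /* Alternative: cork the socket, make several small writes, then
           uncork to flush everything as full-sized segments, e.g.
           set_cork(fd, 1); write(fd, ...); write(fd, ...); set_cork(fd, 0); */
        void set_cork(int fd, int on)
        {
            setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
        }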

  • by dantiberian on 11/21/15, 11:41 PM

    I came across this this week while working on the RethinkDB driver for Clojure (https://github.com/apa512/clj-rethinkdb/pull/114). As soon as I saw "40ms" in this story I thought "Nagle's Algorithm".

    One thing I haven't fully understood is why this only seems to be a problem on Linux; Mac OS X didn't exhibit this behaviour.

  • by bboreham on 11/23/15, 8:44 AM

    Why wouldn't an HTTP client library turn off Nagle's algorithm by default?

  • by neduma on 11/21/15, 10:14 PM

    Can Wireshark/Riverbed (application performance testing) profiling help solve these kinds of problems?

  • by rjurney on 11/21/15, 11:44 PM

    In high school I carried TCP/IP Illustrated around with me like a bible. I cherished that book. Knowledge of networks would eventually be incredibly useful throughout my career.

  • by mwfj on 11/21/15, 9:43 PM

    This can be generalised. It is also one of my favorite ways of doing developer interviews: do they have a working, in-depth knowledge of what keeps the interwebs running? So many people have never ventured out of their main competence bubble, and that bubble can be quite small (but focused, I suppose).

    For all I know, they believe everything is kept together with the help of magic. I guess I don't trust people who don't have a natural urge to understand at least the most basic things of our foundations.

  • by Ono-Sendai on 11/21/15, 11:49 PM

    This is my proposed solution to this kind of problem: sockets should have a flushHint() API call. http://www.forwardscattering.org/post/3