from Hacker News

Increasing wireless network speed by 1000%, by replacing packets with algebra

by o1iver on 10/23/12, 8:29 PM with 42 comments

  • by tomstokes on 10/23/12, 9:28 PM

    Summary: TCP throughput drops dramatically when packet loss is present. This technique uses forward error correction to compensate for packet loss, resulting in higher effective throughput over lossy links. Many WiFi and cellular connections are lossy, so this would be helpful in those cases.

    They haven't improved the underlying link rate at all. In fact, the FEC overhead is going to reduce the effective link rate. However, in some edge-case high packet loss scenarios, the reduced packet loss will more than make up for the reduced effective link rate.
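
    For a sense of how dramatic the drop is, the well-known Mathis et al. approximation bounds steady-state TCP throughput at roughly (MSS/RTT) * 1.22/sqrt(p). A quick sketch (the MSS and RTT numbers below are illustrative assumptions, not from the article):

```python
# Back-of-the-envelope TCP throughput vs. loss, using the Mathis et al.
# approximation: throughput <= (MSS / RTT) * (C / sqrt(p)), C ~= 1.22.
# MSS and RTT values here are illustrative assumptions.
from math import sqrt

MSS = 1460   # bytes per segment
RTT = 0.05   # 50 ms round trip
C = 1.22     # constant from the Mathis et al. model

for p in (0.0001, 0.001, 0.01, 0.02):   # packet loss probability
    bw = (MSS / RTT) * (C / sqrt(p))    # bytes/sec upper bound
    print(f"loss {p:8.4%}: ~{bw * 8 / 1e6:6.2f} Mbit/s")
```

    Even 1% loss caps the connection far below a typical raw link rate, which is why trading a little FEC overhead for fewer lost packets can pay off.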

  • by wmf on 10/24/12, 12:47 AM

    Google, don't speculate. The primary source appears to be http://www.mit.edu/~medard/papers2011/Network%20CodingMeets%... and similar papers from the same authors.
  • by guelo on 10/23/12, 11:21 PM

    Ugh, remember when academia helped create the internet by releasing free publicly available unlicensed RFCs?

    Edit: Not RFPs, duh

  • by oconnor0 on 10/23/12, 9:28 PM

    See the original discussion & better article at https://news.ycombinator.com/item?id=4686743
  • by zackbloom on 10/23/12, 9:55 PM

    TL;DR They are using error correcting codes across packets to limit the impact of losing some.
  • by Inufu on 10/24/12, 9:56 AM

    This looks like Fountain codes: http://en.wikipedia.org/wiki/Fountain_code

    Basically, you split your data into blocks, XOR random blocks together, and the client can recreate the data by solving the equations for which blocks were XORed with which.

    A good tutorial is here: http://blog.notdot.net/2012/01/Damn-Cool-Algorithms-Fountain...

    And a fast implementation: http://en.wikipedia.org/wiki/Raptor_code
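
    A toy sketch of that XOR-and-solve loop (illustrative only: a naive random code plus a simple peeling decoder, not the tuned degree distributions real LT/Raptor codes rely on, and not the MIT scheme itself):

```python
import random

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class PeelingDecoder:
    """Recovers source blocks from XOR-coded packets by substituting
    already-known blocks and solving equations with one unknown left."""

    def __init__(self, n_blocks):
        self.n = n_blocks
        self.known = {}     # block index -> recovered payload
        self.pending = []   # unsolved equations: [index set, payload]

    def add_packet(self, idxs, data):
        self.pending.append([set(idxs), data])
        progress = True
        while progress:
            progress = False
            for eq in self.pending:
                indices, payload = eq
                for i in [j for j in indices if j in self.known]:
                    payload = xor(payload, self.known[i])  # substitute
                    indices.discard(i)
                eq[1] = payload
                if len(indices) == 1:       # one unknown left: solved
                    self.known[indices.pop()] = payload
                    progress = True
            self.pending = [eq for eq in self.pending if eq[0]]

    def done(self):
        return len(self.known) == self.n

rng = random.Random(7)
blocks = [bytes([65 + i]) * 4 for i in range(4)]   # 4 source blocks
dec = PeelingDecoder(len(blocks))
sent = 0
while not dec.done():
    # Encoder: XOR a random non-empty subset of the source blocks.
    picked = [i for i in range(len(blocks)) if rng.random() < 0.5]
    picked = picked or [rng.randrange(len(blocks))]
    coded = blocks[picked[0]]
    for i in picked[1:]:
        coded = xor(coded, blocks[i])
    sent += 1
    if rng.random() < 0.3:   # the "link" drops ~30% of packets
        continue
    dec.add_packet(picked, coded)

print(sent, [dec.known[i] for i in range(4)] == blocks)
```

    The receiver never asks for a retransmission; it just keeps collecting coded packets until the system of equations is solvable, which is the fountain property.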

  • by stephengillie on 10/23/12, 9:24 PM

    The linkbaity title links to a neat, proprietary way to reduce wireless packet loss.
  • by potkor on 10/24/12, 12:27 PM

    It's the job of the 802.11 L2 to hide correctable packet loss from L3, so most radio link loss isn't passed on to TCP. This could just as well read "the wifi L2 error correction doesn't try hard enough".

    This problem is very common, but it wants fixing at L2, not in TCP. Turning up the FEC at L2 would reduce its capacity even further, though, since more of the bandwidth is taken up by the FEC (and this TCP-level FEC pays the same cost).

    3G gets it wrong at the other extreme: it pretends to always have 0% packet loss; your packets just sometimes show up 30-100 seconds late, and in order.

  • by delinka on 10/24/12, 2:45 AM

    I see lots of comments talking about FEC. That's not how the article reads to me. Granted the author (or I?) may be completely out in left field, but here's my take on what it says:

    Let's suppose you have a mathematical process that outputs a stream of [useful] data. The description of the process is much, much smaller than the output. You can "compress" the data by sending the process (or equation) instead. Think π. Do you transmit a million digits of π or do you transmit the instruction "π to a million digits"? The latter is shorter.
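
    Literally that, in a few lines (this sketch assumes the mpmath arbitrary-precision library; any equivalent would do):

```python
# The pi example, literally: a tiny description expands into as many
# digits as requested. Uses the mpmath library (an assumption on my
# part; any arbitrary-precision library works the same way).
from mpmath import mp

mp.dps = 1000                    # work at 1000 decimal digits
digits = mp.nstr(mp.pi, 1000)    # render pi to 1000 digits
print(len(digits), "characters from a two-line 'equation'")
```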

    Now, reverse the process: given an arbitrary set of data, find an equation (or process) that represents it. Not easy for sure. Perhaps not possible. I recall as a teenager reading an article about fractals and compression that called on the reader to imagine a fractal equation that could re-output your specific arbitrary data.

    If I've totally missed the article's point, please correct me, but explain why it also talks about algebra.

    EDIT: I re-read and noticed this: "If part of the message is lost, the receiver can solve the equation to derive the missing data." I can see the FEC nod here.

    Guh. I guess I'm blind tonight. "Wireless networks are in desperate need for forward error correction (FEC), and that’s exactly what coded TCP provides." I cannot for the life of me understand why they'd need to keep this a secret.

  • by mikepurvis on 10/24/12, 12:41 AM

    There's a company in Ottawa implementing a similar idea, but based on carving up packets and then inserting one or more redundant XOR packets (RAID-style). Their name for it is IPQ: http://liveqos.com/products/ipq/

    They have a patent on this: http://www.google.com/patents/US20120201147

    (Disclosure: I was an intern there in 2009, when it was IPeak Networks.)

  • by Jabbles on 10/24/12, 10:31 AM

    Has anyone done an experiment to see what simply duplicating every TCP packet sent over wireless does? If you're in a situation where you're limited by random packet loss and not by raw bandwidth, I imagine it could help...

    Obviously this is a much weaker and less efficient solution than what is proposed in the paper, but it would be trivial to implement. I believe netem allows you to simulate this.
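
    A quick sanity check of the duplication idea, under the (optimistic) assumption that losses are independent: sending every packet twice turns a loss rate of p into p^2, at the cost of half the raw bandwidth. netem's loss and duplicate options would let you try this on a real interface; the Monte Carlo below just checks the arithmetic.

```python
# Monte Carlo check: with independent loss probability p, a duplicated
# packet is lost only if both copies are lost, i.e. with probability
# p^2. Real WiFi loss is bursty, so independence is optimistic.
import random

rng = random.Random(0)
p = 0.05                 # assumed per-packet loss rate
trials = 100_000

lost_single = sum(rng.random() < p for _ in range(trials))
lost_duped = sum(rng.random() < p and rng.random() < p
                 for _ in range(trials))

print(f"single copy: {lost_single / trials:.4f} lost")
print(f"duplicated:  {lost_duped / trials:.4f} lost (p^2 = {p * p})")
```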

  • by leif on 10/24/12, 1:45 AM

    Is this likely to be any different from introducing a little error-correction?

    Also, how were we not doing this already?

    Also, I need a writer. Whoever wrote this up made it sound WAY cooler than when I explain error correcting codes.

  • by lovamova on 10/23/12, 9:29 PM

    This is one of those posts where the comments on the site are better than the article submitted.

    It also means less power consumption on mobile phones. There will be no need to increase signal power to get better speed or voice quality.

  • by dnos on 10/24/12, 1:57 AM

    I'm not up-to-date on networking technologies, but it's surprising to me that some sort of error correction hasn't already been made standard.

    I wonder if something along the lines of old-school parity files would work in the packet world? Basically, just blast out the packets, and reconstruct any that are lost using the parity data sent with the other packets.
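
    The packet version of a parity file is about this simple: one XOR parity packet per group lets the receiver rebuild any single lost packet without waiting for a retransmit (a toy sketch; real schemes pad packets to equal length and handle multiple losses):

```python
# One XOR parity packet per group of equal-length packets: XORing the
# parity with the surviving packets reproduces the missing one.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

group = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]   # equal-length payloads
parity = group[0]
for pkt in group[1:]:
    parity = xor(parity, pkt)                  # parity of the group

received = {0: group[0], 1: group[1], 3: group[3]}   # packet 2 lost
rebuilt = parity
for pkt in received.values():
    rebuilt = xor(rebuilt, pkt)                # cancel the survivors

print(rebuilt == group[2])                     # True
```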

  • by codex on 10/23/12, 11:55 PM

    Isn't the downside of FEC-encoded packets increased latency? Instead of sending each packet immediately, don't you need to accumulate n packets to encode as a group? Or does the math allow incremental encoding? Simple parity is incremental, but the FEC on DSL lines always added 40ms of latency.
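
    Rough numbers for the buffering question (assumed rates, not from the paper): a non-systematic code over groups of n packets can't emit anything until the group fills, while a systematic code sends the originals immediately and only the repair packets trail behind, so extra latency is paid mainly when recovery is actually needed.

```python
# Worst-case buffering delay for a non-systematic block code: the
# first packet of a group waits until all n have arrived. Numbers
# are illustrative assumptions.
n = 16                   # packets per coded group
pkt_interval_ms = 1.0    # one packet arriving every millisecond

delay_ms = (n - 1) * pkt_interval_ms
print(f"non-systematic block code: up to {delay_ms:.0f} ms of "
      "added latency before the group can be encoded")
```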
  • by codex on 10/24/12, 5:15 PM

    What's funny here is that wireless networks already use FEC at the physical layer. This just adds more, less conservative, FEC higher up, for apps where it makes more sense to reduce throughput and increase average-case latency to avoid worst-case latency and worst-case throughput.
  • by liamzebedee on 10/24/12, 6:45 AM

    My friend did something similar to the concepts in this article, except he applied it to compression of audio. It was basically finding patterns in the audio and transforming these into equations. Interesting stuff.
  • by Schwolop on 10/24/12, 6:12 AM

    What a remarkably easy to read article.
  • by Snapps on 10/23/12, 10:59 PM

    Captivating title...