from Hacker News

Slack’s Outage on January 4th 2021

by benedikt on 2/1/21, 12:59 PM with 106 comments

  • by dplgk on 2/1/21, 2:26 PM

    > On January 4th, one of our Transit Gateways became overloaded. The TGWs are managed by AWS and are intended to scale transparently to us. However, Slack’s annual traffic pattern is a little unusual: Traffic is lower over the holidays, as everyone disconnects from work (good job on the work-life balance, Slack users!). On the first Monday back, client caches are cold and clients pull down more data than usual on their first connection to Slack. We go from our quietest time of the whole year to one of our biggest days quite literally overnight.

    What's interesting is that when this happened, some HN comments suggested it was the return from holiday traffic that caused it. Others said, "nah, don't you think they know how to handle that by now?"

    Turns out Occam's razor applied here. The simplest answer was the correct one: return-from-holiday traffic.

  • by kparaju on 2/1/21, 3:45 PM

    Some lessons I took from this retro:

    - Disable autoscaling if appropriate during an outage. For example, if the web server is degraded, it's probably best to make sure that the backends don't autoscale down (a sketch of this follows the list).

    - Panic mode in Envoy is amazing!

    - Ability to quickly scale your services is important, but that metric should also take into account how quickly the underlying infrastructure can scale. Your pods could spin up in 15 seconds but k8s nodes will not!
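
    To make the first bullet concrete, here's a minimal sketch of pausing scale-in while an incident is open. It assumes an EC2 Auto Scaling group and the boto3 SDK; the group name is a placeholder, not anything from Slack's actual setup.

      # Hedged sketch: pause the processes that can remove capacity while firefighting.
      # "slack-web-asg" is a placeholder group name, not Slack's real configuration.
      import boto3

      autoscaling = boto3.client("autoscaling")

      def pause_scale_in(group_name: str) -> None:
          """Suspend scale-in style processes so a degraded tier isn't shrunk mid-incident."""
          autoscaling.suspend_processes(
              AutoScalingGroupName=group_name,
              ScalingProcesses=["Terminate", "AZRebalance", "ReplaceUnhealthy"],
          )

      def resume_scaling(group_name: str) -> None:
          """Put the group back to normal once the incident is resolved."""
          autoscaling.resume_processes(AutoScalingGroupName=group_name)

      pause_scale_in("slack-web-asg")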

  • by jrockway on 2/1/21, 5:25 PM

    The thing that always worries me about cloud systems is the hidden dependencies in your cloud provider that work until they don't. They typically don't output logs or metrics, so you have no choice but to pray that someone looks at your support ticket and clicks their internal system's "fix it for this customer" button.

    I'll also say that I'm interested in ubiquitous mTLS so that you don't have to isolate teams with VPCs and opaque proxies. I don't think we have widely-available technology around yet that eliminates the need for what Slack seems to have here, but trusting the network has always seemed like a bad idea to me, and this shows how a workaround can go wrong. (Of course, to avoid issues like the confused deputy problem, which Slack suffered from, you need some service to issue certs to applications as they scale up that will be accepted by services that it is allowed to talk to and rejected by all other services. In that case, this postmortem would have said "we scaled up our web frontends, but the service that issues them certificates to talk to the backend exploded in a big ball of fire, so we were down." Ya just can't win ;)
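
    A toy illustration of that idea (not Slack's setup; the file paths and internal CA are assumptions): a backend that refuses any client which can't present a certificate signed by the cert-issuing service described above, so identity comes from certs rather than from which network the packet arrived on.

      # Toy mTLS server: trust certificates, not the network. Paths and CA are
      # illustrative assumptions only.
      import socket
      import ssl

      context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
      context.load_cert_chain(certfile="backend.crt", keyfile="backend.key")
      context.load_verify_locations(cafile="internal-ca.pem")  # the hypothetical cert-issuing service
      context.verify_mode = ssl.CERT_REQUIRED  # reject peers with no valid client cert

      with socket.create_server(("0.0.0.0", 8443)) as listener:
          with context.wrap_socket(listener, server_side=True) as tls_listener:
              conn, addr = tls_listener.accept()  # handshake fails without a trusted client cert
              print("authenticated peer:", conn.getpeercert()["subject"])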

  • by danw1979 on 2/1/21, 5:25 PM

    So many fails due to in-band control and monitoring are laid bare, followed by this absolute chestnut -

    > We’ve also set ourselves a reminder (a Slack reminder, of course) to request a preemptive upscaling of our TGWs at the end of the next holiday season.

  • by Thaxll on 2/1/21, 3:30 PM

    Surprised that there are just a few metrics available for TGW: https://docs.aws.amazon.com/vpc/latest/tgw/transit-gateway-c...

    Probably the only way to see a problem is if you have a flat line for bandwidth, but as the article suggests they had packet drops, which don't appear in the CloudWatch metrics. AWS should add those metrics, IMO.
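
    For what it's worth, pulling the metrics that do exist looks roughly like this. A sketch assuming the boto3 SDK; the gateway ID is a placeholder, and the namespace/metric name follow the AWS docs linked above.

      # Sketch: fetch TGW throughput from CloudWatch; capacity-related packet drops
      # have no equivalent metric to query here.
      from datetime import datetime, timedelta

      import boto3

      cloudwatch = boto3.client("cloudwatch")

      resp = cloudwatch.get_metric_statistics(
          Namespace="AWS/TransitGateway",
          MetricName="BytesIn",
          Dimensions=[{"Name": "TransitGateway", "Value": "tgw-0123456789abcdef0"}],  # placeholder ID
          StartTime=datetime.utcnow() - timedelta(hours=3),
          EndTime=datetime.utcnow(),
          Period=300,
          Statistics=["Sum"],
      )

      for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
          print(point["Timestamp"], point["Sum"])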

  • by keyle on 2/1/21, 11:51 PM

    Didn't we just read a story about the exact same issue?

    Traffic picked up heavily on some website or app, AWS didn't auto-scale fast enough or at all, and the very systems that are designed to be elastic came to a grinding halt?

  • by jeffbee on 2/1/21, 7:06 PM

    Why don't they mention what seems like a clear lesson: control traffic has to be prioritized using IP DSCP bits, or else your control systems can't recover from widespread frame drop events. Does AWS TGW not support DSCP?
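
    Marking the traffic at the sending host is the easy part; here's a rough sketch (the endpoint address is a placeholder, and whether a managed hop like a TGW honors the marking is exactly the open question).

      # Sketch: tag a control-plane connection with DSCP CS6 ("network control").
      # DSCP occupies the top 6 bits of the legacy IP TOS byte.
      import socket

      DSCP_CS6 = 48
      TOS_VALUE = DSCP_CS6 << 2  # 0xC0

      sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
      sock.connect(("203.0.113.10", 9000))  # placeholder control-plane endpoint
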
  • by bovermyer on 2/1/21, 2:30 PM

    I really enjoy reading these write-ups, even if the causal incident is not something I enjoy.

  • by fullstop on 2/1/21, 2:07 PM

    I was kind of surprised to see that they are using Apache's threaded workers and not nginx.

  • by gscho on 2/1/21, 2:51 PM

    > our dashboarding and alerting service became unavailable.

    Sounds like the monitoring system needs a monitoring system.

  • by sargun on 2/1/21, 2:14 PM

    I wonder why Slack uses TGW instead of VPC peering.

  • by grumple on 2/2/21, 2:40 AM

    Automated scaling has been a persistent problem for me, especially if I try to scale on simple metrics, or even worse (in Slack's case) on metrics that could potentially compete. The situations in which multiple metrics could compete are sometimes difficult to conceive, but it will always happen if you aren't performing something more sophisticated than "up if metric > value, down if < value" for multiple metrics. I think you've got to combine these somehow into a custom metric and scale just on one metric. I'm totally unsurprised to see that autoscaling failed for both Slack and AWS in this case.

    I think you really have to look at metric-based autoscaling and say: is it worth the X% savings per month? Or would I rather avoid the occasional severe headaches caused by autoscaling messing up my day? Obviously this depends on company scale and how much your load varies. I'd rather have an excess of capacity than any impact on users.
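
    As an illustration of the single-custom-metric approach (the names and the max-of-signals choice below are my own assumptions, not Slack's setup), you can fold the competing signals into one series and let the scaling policy watch only that.

      # Sketch: publish one composite utilization series so scaling rules can't fight.
      # Taking the max of the normalized signals means "scale up if anything is hot".
      import boto3

      cloudwatch = boto3.client("cloudwatch")

      def composite_utilization(cpu_pct: float, conn_pct: float, queue_pct: float) -> float:
          return max(cpu_pct, conn_pct, queue_pct)

      cloudwatch.put_metric_data(
          Namespace="Custom/WebTier",  # illustrative namespace
          MetricData=[{
              "MetricName": "CompositeUtilization",
              "Value": composite_utilization(cpu_pct=62.0, conn_pct=81.5, queue_pct=40.0),
              "Unit": "Percent",
          }],
      )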

  • by tobobo on 2/1/21, 3:50 PM

    The big takeaway for me here is that this “provisioning service” had enough internal dependencies that they couldn’t bring up new nodes. Seems like the worst thing possible during a big traffic spike.

  • by nickthemagicman on 2/1/21, 2:11 PM

    Wow. The trail leads back to AWS. Weren't there a number of other companies down around that same time, or was that a different time?

  • by ianrw on 2/1/21, 3:18 PM

    I wonder if you can pre-warm TGWs like you can with ELBs? It would be annoying to have to have AWS prewarm a bunch of your stuff, but it's better than it going down.

  • by plaidfuji on 2/2/21, 2:03 AM

    Maybe I’m being naive, but what I’m curious about is: how did their whole team communicate through this triage? I assume not Slack?

  • by nhoughto on 2/1/21, 10:28 PM

    Never seen Transit Gateway before. I assume they wouldn't have this problem if it were just a single VPC or done via VPC peering?

  • by johnnymonster on 2/1/21, 6:47 PM

    TL;DR outage caused by traffic spike from people returning to work after the holiday.

  • by jeffrallen on 2/1/21, 4:41 PM

    tldr: "we added complexity into our system to make it safer and that complexity blew us up. Also: we cannot scale up our system without everything working well, so we couldn't fix our stuff. Also: we were flying blind, probably because of the failing complexity that was supposed to protect us."

    I am really not impressed... with the state of IT. I could not have done better, but isn't it too bad that we've built these towers of sand that keep knocking each other over?