from Hacker News

CircleCI says hackers stole encryption keys and customers’ source code

by kuter on 1/15/23, 1:57 AM with 147 comments

  • by ferminaut on 1/15/23, 4:28 AM

    There shouldn't be any coming back from this. There are failures on multiple levels here, and CircleCI has demonstrated that no one should keep any sensitive data with them.

    > Zuber said that while customer data was encrypted, the cybercriminals also obtained the encryption keys able to decrypt customer data.

    ...what? why did this engineer have access to everything? Does CircleCI know what minimum access policies are for?

  • by pcblues on 1/15/23, 3:00 AM

    "some" means "all" from Australia's recent intrusions (Optus, Telstra). They can say "some" when the people at the top responsible for reporting only want a sample. It's PR.
  • by TobyTheDog123 on 1/15/23, 4:10 AM

    I honestly can't think of a worse outcome of a hack of a CI/CD service than companies' source code being stolen. In my mind, this is akin to the Okta breach a while back: a ton of companies hit hard all at once through no fault of their own.

    I can appreciate the desire to diversify services so that secrets/env are kept separate from code, but I think I would honestly trust the behemoth that is GitHub with both.

    That being said, my company still uses Okta, so freebies and mulligans are certainly still tolerated when it comes to data breaches.

  • by alfalfasprout on 1/15/23, 7:24 AM

    Laptop security aside (this is a hard problem, and good solutions can often be detrimental in other ways), there should have been way, way more auditing around access to customer repos. The fact that it took so long both to mitigate further access and to understand the rough scope of the hack is concerning.

    More broadly... it shouldn't be that easy to get encryption keys to everyone's secret env variables used for CI jobs.

  • by fareesh on 1/15/23, 3:52 PM

    Joke's on them, my source code is terrible
  • by c3534l on 1/15/23, 3:51 PM

    Seems like everyone gets hacked eventually. Like, I'm sure CircleCI had security experts they hired. I don't doubt that they took things seriously and made sure they followed best practices. But that's not good enough. You will still get hacked. What do we do about this?
  • by foota on 1/15/23, 4:59 AM

    Would hardware security keys protect against this? If you already have a session token on a site (and that site doesn't somehow restrict the token to being used only on the machine that generated it, which AFAIK isn't possible), then it's too late, yes?
  • by fexecve on 1/15/23, 5:36 AM

    The sad part is, the damage this does to companies won't be felt for years (which is how long it'll take someone to take the stolen source code, analyze it, and make a convincingly-distinct clone), so companies will think that nothing came of this, and they'll keep using CircleCI (and other similar platforms which put everyone's eggs in the same basket, how appealing to hackers that must be).
  • by StopHammoTime on 1/15/23, 8:26 AM

    How could a stolen session token even be useful? I have to log into tools every day; if I change IPs at all I have to re-authenticate, and all prod access needs to be approved and has a finite lifespan.

    How could a CI company be that negligent? They should be leading on this stuff from a best-practice point of view.

  • by srazzaque on 1/15/23, 12:09 PM

    Slightly off-topic, but I'm curious whether they've published, or if anyone knows, what OS the compromised employee's machine was running?
  • by oxfordmale on 1/15/23, 10:22 AM

    I worked for an anti-virus company. There are tools that check whether your malware can avoid detection by the major virus scanners. As such, the recommendation is never to rely on a virus scanner alone to protect critical assets.
  • by bamboozled on 1/15/23, 11:46 PM

    I think the CTO has played it pretty well: his recent blog post is just "transparent" enough to sound like they care, but very quick to rush everyone back to normality with a "nothing to see here" attitude.

    "Thanks customers for the support" is almost a patronizing thing to say IMO. They should at least offer compensation financially for this and as others have said, his recent update has left more questions unanswered for me.

    The way I see it, I'm done as a customer, just need the time to migrate away.

  • by debarshri on 1/15/23, 9:34 AM

    What is interesting here is that CircleCI is SOC 2 Type 2 compliant. The whole narrative changes if CircleCI were only a self-hosted solution and the hack had happened through one of the customer's own employees; I'm sure no one would have blamed CircleCI. I don't know whether this employee had remote access to all the self-hosted enterprise customers too; if so, that's a true lapse on CircleCI's part.
  • by jay-barronville on 1/15/23, 7:02 AM

    I hate to be that guy but this news highlights some blatantly incompetent security protocols (especially key management) by a company that we should expect better from. Even something as simple as a Vault (HashiCorp) cluster with decentralized key shares would’ve prevented this. I’m really disappointed in CircleCI. There’s no way I’d trust them after this.
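    For anyone unfamiliar, the "decentralized key shares" idea is essentially Shamir secret sharing, which is what Vault's unseal process uses: the master key is split into n shares and any k of them are needed to reconstruct it, so no single employee or laptop ever holds the whole key. A toy sketch in Python, purely for illustration (use Vault itself or a vetted library for anything real):

      # Toy Shamir secret sharing over a prime field: split a key into n
      # shares; any k of them reconstruct it. Illustration only.
      import secrets

      PRIME = 2**127 - 1  # a Mersenne prime; the demo secret lives in this field

      def split(secret, n, k):
          # Random polynomial of degree k-1 with the secret as constant term.
          coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]

          def f(x):
              return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

          return [(x, f(x)) for x in range(1, n + 1)]

      def combine(shares):
          # Lagrange interpolation at x = 0 recovers the constant term.
          total = 0
          for i, (xi, yi) in enumerate(shares):
              num, den = 1, 1
              for j, (xj, _) in enumerate(shares):
                  if i != j:
                      num = num * (-xj) % PRIME
                      den = den * (xi - xj) % PRIME
              total = (total + yi * num * pow(den, -1, PRIME)) % PRIME  # modular inverse (Python 3.8+)
          return total

      key = secrets.randbelow(PRIME)
      shares = split(key, n=5, k=3)
      assert combine(shares[:3]) == key  # any 3 of the 5 shares suffice
      assert combine(shares[2:]) == key

    A compromised laptop then yields at most one share, which on its own reveals nothing about the key.
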
  • by heartbreak on 1/15/23, 3:12 AM

    There’s nothing in this article that says customer source code was accessed or stolen. Is that an error with the title?
  • by llIIllIIllIIl on 1/15/23, 6:27 AM

    Oh man, I feel so lucky that I switched exclusively to GitHub Actions in late 2020; no good news from CircleCI since then.
  • by komuW on 1/15/23, 9:36 AM

    From the circleCI blogpost[1]: "Our investigation indicates that the malware was able to execute session cookie theft, enabling them to impersonate the targeted employee in a remote location"

    I haven't seen much discussion on how this specific attacker entrypoint can be mitigated. So I'm going to make a naive attempt in this comment.

    How about storing the client's IP address in the session cookie? Then, whenever the server receives the cookie, it compares the client's IP address against the one stored in the cookie and denies the request if there's a mismatch. The cookie would of course have to be signed (HMAC, etc.) so that it is tamper-proof.

    One problem with this is that client IP addresses are easily spoofed[2].

    So, instead of storing the client's IP address, how about storing the client's SSL fingerprint[3][4]? I haven't looked much into the literature, but I think those fingerprints are harder to spoof. (A rough sketch of the signed-cookie idea follows the links below.)

    1. https://circleci.com/blog/jan-4-2023-incident-report/

    2. https://adam-p.ca/blog/2022/03/x-forwarded-for/

    3. https://github.com/salesforce/hassh

    4. https://github.com/salesforce/ja3
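
    Here is a rough sketch of the signed, fingerprint-bound cookie (Python standard library only; whether the fingerprint is an IP address or a JA3-style TLS hash is up to the server, and every name here is made up for illustration):

      # Sketch only: an HMAC-signed session cookie carrying a client
      # "fingerprint" (an IP address, a JA3/TLS hash, ...). The server rejects
      # the cookie if the signature, expiry, or fingerprint check fails.
      import base64, hashlib, hmac, json, time

      SERVER_KEY = b"server-side secret, never sent to clients"  # placeholder

      def issue_cookie(user_id, fingerprint, ttl=3600):
          claims = {"uid": user_id, "fp": fingerprint, "exp": int(time.time()) + ttl}
          payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
          sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
          return payload + "." + sig

      def verify_cookie(cookie, observed_fingerprint):
          payload, _, sig = cookie.rpartition(".")
          expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
          if not hmac.compare_digest(sig, expected):
              return None  # tampered with
          claims = json.loads(base64.urlsafe_b64decode(payload.encode()))
          if claims["exp"] < time.time():
              return None  # expired
          if claims["fp"] != observed_fingerprint:
              return None  # cookie replayed from a different client
          return claims

      c = issue_cookie("employee-42", fingerprint="ja3:dummy-fingerprint")
      assert verify_cookie(c, "ja3:dummy-fingerprint") is not None
      assert verify_cookie(c, "ja3:stolen-and-replayed") is None

    As [2] points out, client IPs are easy to forge behind proxies, so binding alone mainly raises the bar; short expiries and forced re-auth for sensitive actions (as others in the thread mention) still matter.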

  • by avereveard on 1/15/23, 7:08 AM

    So from exploit to reaching the news it took almost a month. That's a large window of opportunity.
  • by blntechie on 1/15/23, 8:01 AM

    This is a super critical hack and looks really bad for CircleCI, but I don't see much negative news elsewhere. They will move on.

    Also, maybe that localised build system running on an old server for each team isn't such a bad idea, to reduce the blast radius when a hack eventually happens. These providers are supposed to be the gatekeepers and experts one leaves the tedious and critical work to. If they are just a leaky cauldron, maybe it's not so bad to cook in my old pot at home.

  • by portoal on 1/15/23, 2:34 PM

    What operating system was running on that malware'd laptop? 90% chance it's Windows 10?
  • by marsupialtail_2 on 1/15/23, 4:11 AM

    If you make everything open source...
  • by athul_jayaram on 1/15/23, 9:49 AM

    A hack of a CI/CD (Continuous Integration/Continuous Deployment) service can have a significant impact on the companies that use it. In this scenario, an attacker would gain unauthorized access to the CI/CD service's servers, potentially stealing sensitive information such as source code for various companies' software projects. This type of incident is similar to the Okta breach.