from Hacker News

Speeding Up Our Build Pipelines

by pasxizeis on 8/23/19, 7:40 AM with 24 comments

  • by plicense on 8/25/19, 10:55 PM

    To summarize:

    1. Your build pipeline has a lot of hermetic actions.

    2. To speed it up, you execute these actions remotely on isolated environments, cache the results and reuse when possible.

    Pretty neat.

    You might want to look into https://goo.gl/TB49ED and https://console.cloud.google.com/marketplace/details/google/... if you need a managed service to do just that.

  • by peterwwillis on 8/26/19, 4:08 AM

    I suggest a documentation cleanup. The initial README should have blurbs about who should use it, what it's for, how it does it, and links to example use cases. A quick-start guide steps a user through accomplishing a simple task and links to extended documentation. Extended documentation is the reference guide to the latest code, and should be generated from the code. I would not suggest splitting documentation up into multiple places (a readme here, a lengthy blog post there, plus a discombobulated wiki); all documentation should be accessible from a single portal, with filtering capabilities (search is incredibly difficult to make accurate, whereas filtering is easy and effective).

    This whole solution seems like a very custom way to use docker. You can already create custom Docker images with specific content, use multi-stage builds to cache layers, split pipelines up into sections that generate static assets and pull the latest ones based on a checksum of their inputs, etc. I think the cost of maintaining this solution is going to far outweigh that of just using existing tooling differently.
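
    For the checksum part, a rough sketch of what that can look like in a CI script (the file names and the cache path are made up for illustration, not taken from the article):

      # hash everything that affects the static assets, and key the cache on it
      key=$(sha256sum package.json package-lock.json webpack.config.js | sha256sum | cut -d' ' -f1)
      cache="/ci-cache/assets-$key.tar.gz"
      if [ -f "$cache" ]; then
        tar xzf "$cache"            # inputs unchanged: reuse the cached bundle
      else
        npm ci && npm run build     # inputs changed: rebuild the assets
        tar czf "$cache" dist/
      fi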

  • by rossmohax on 8/25/19, 7:04 PM

    in Docker that would be something like

      # this layer (and the bundle install below) stays cached as long as the Gemfiles don't change
      COPY Gemfile Gemfile.lock /src/
      WORKDIR /src
      RUN bundle install
    
    even if they don't use docker to run the application in prod, it can be [ab]used to perform efficient build-layer (build-step) caching and distribution.
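
    To share those cached layers between CI workers, you can push the image to a registry and seed the next build from it; roughly (the image name here is hypothetical):

      docker pull registry.example.com/myapp/build-cache:latest || true
      docker build --cache-from registry.example.com/myapp/build-cache:latest \
        -t registry.example.com/myapp/build-cache:latest .
      docker push registry.example.com/myapp/build-cache:latest
      # with BuildKit, also pass --build-arg BUILDKIT_INLINE_CACHE=1 so the pushed
      # image carries the cache metadata that --cache-from needs
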
  • by nstart on 8/26/19, 4:35 AM

    Curious what the HN community feels is a "slow deploy". I scanned the article first to find the time reductions and still couldn't see how long the build actually takes at the end of it.

    11 minutes is a great time reduction. (11 minutes * 30 builds a day = 5.5 hours saved in total).

    But I'm still not sure what constitutes a slow build. I assume at some point there's an asymptotic curve of diminishing returns, where in order to shave off a minute the complexity of the pipeline increases dramatically (caching being a tricky example). So do y'all have any opinions on what makes a build slow for you?

  • by arenaninja on 8/26/19, 9:20 PM

    The first point isn't so much a change to the build pipeline as it is avoiding the build pipeline altogether and deploying prebuilt artifacts; I can't think of a reason to re-run your build for prod if you have already run it for another environment. In other words, it's recognizing that the build and deployment stages are different.
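
    For a containerised app that can be as simple as retagging the image that already went through the earlier environment instead of rebuilding it (the registry and tags are made up for illustration):

      # promote the exact artifact that was already built and tested
      docker pull registry.example.com/myapp:staging
      docker tag  registry.example.com/myapp:staging registry.example.com/myapp:prod
      docker push registry.example.com/myapp:prod
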
  • by siscia on 8/25/19, 8:42 PM

    It actually touches on a point very close to my work.

    We definitely need much more speed in running our pipeline.

    The software is mostly C/C++ with a lot of internal dependencies.

    Do you guys have any experience in that?

    What is worth the complexity and what is not?

  • by danielparks on 8/25/19, 10:15 PM

    This is basically parallel-make as a service.

    This has been an increasingly difficult problem as more and more pipelines move to containers for testing and building. What other solutions have folks come up with?

  • by deboflo on 8/26/19, 12:45 AM

    JavaScript bundles are often a bottleneck in web builds. I wish there were better ways to speed this up.