from Hacker News

Why Understanding Software Cycle Time Is Messy, Not Magic

by SiempreViernes on 6/7/25, 9:03 PM with 32 comments

  • by tmnvdb on 6/8/25, 3:42 AM

    I've never encountered cycle time recommended as a metric for evaluating individual developer productivity, making the central premise of this article rather misguided.

    The primary value of measuring cycle time is precisely that it captures end-to-end process inefficiencies, variability, and bottlenecks, rather than individual effort. This systemic perspective is fundamental in Kanban methodology, where cycle time and its variance are commonly used to forecast delivery timelines.
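A common Kanban-style forecast of the kind mentioned above quotes a high percentile of historical cycle times rather than the mean, so the estimate absorbs variability. A minimal sketch, with hypothetical cycle-time data:

```python
import statistics

# Hypothetical historical cycle times (days) for completed tickets.
cycle_times = [2, 3, 3, 4, 5, 5, 6, 8, 9, 14]

# Quote the 85th percentile as the forecast: "a new ticket will
# likely finish within this many days."
forecast = statistics.quantiles(cycle_times, n=100)[84]
print(forecast)  # 10.75
```

The percentile choice (85th here) is a service-level knob, not a fixed rule; teams pick it based on how much delivery risk they can tolerate.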

  • by dgfitz on 6/8/25, 5:46 AM

    My current org can have a cycle time on the order of a year. Embedded dev work on a limited release cadence, where the Jira (et al.) workflow is suboptimal and tickets don't get reassigned but only tested, destroys metrics of this nature.

    If this research is aimed at web-dev, sure I get it. I only read the intro. Software happens outside of webdev a lot, like a whole lot.

  • by resource_waste on 6/8/25, 10:46 AM

    A thank you to the HN commenter who told me to multiply my estimates by Pi.

    To be serious with the recipient, I actually multiply by 3.

    What I can't understand is why my intuitive guess is always wrong. Even when I break down the parts, GUI is 3 hours, algorithm is 20 hours, getting some important value is 5 hours... why does it end up taking 75 hours?

    Sometimes I finish within ~1.5x my original intuitive time, but that is rare.

    I even had a large project which I threw around the 3x number, not entirely being serious that it would take that long... and it did.
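The heuristic above can be sketched as a quick sanity check. The part names and hours come from the comment; the multipliers are the π and 3x rules it mentions:

```python
import math

# Part-wise estimates from the comment above (hours).
estimates = {"GUI": 3, "algorithm": 20, "important value": 5}

naive_total = sum(estimates.values())  # 28 hours
pi_adjusted = naive_total * math.pi    # ~88 hours
tripled = naive_total * 3              # 84 hours

print(naive_total, round(pi_adjusted), tripled)  # 28 88 84
```

Note that the observed 75 hours is about 2.7x the naive 28-hour sum, so both multipliers land in the right neighborhood.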

  • by SiempreViernes on 6/7/25, 9:04 PM

    > We analyze cycle time, a widely-used metric measuring time from ticket creation to completion, using a dataset of over 55,000 observations across 216 organizations. [...] We find precise but modest associations between cycle time and factors including coding days per week, number of merged pull requests, and degree of collaboration. However, these effects are set against considerable unexplained variation both between and within individuals.

  • by wry_durian on 6/8/25, 9:46 AM

    Cycle time is important, but there are three problems with it. First, it (like many other factors) is just a proxy variable in the total cost equation. Second, cycle time is a lagging indicator, so it gives you limited foresight into the systemic control levers at your disposal. And third, queue size plays a larger causal role in downstream economic problems with products. This is why you should always consider your queue size before your cycle time.

    I didn't see these talked about much in the paper at a glance. Highly recommend Reinertsen's The Principles of Product Development Flow here instead.
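The queue-size/cycle-time relationship the comment points at is captured by Little's law, which Reinertsen builds on: average cycle time equals average work-in-progress divided by average throughput. A minimal sketch with hypothetical numbers:

```python
# Little's law: avg cycle time = avg work-in-progress / avg throughput.
# The numbers below are hypothetical, for illustration only.
wip = 30          # tickets queued or in progress
throughput = 5.0  # tickets completed per week

cycle_time = wip / throughput  # weeks per ticket, on average
print(cycle_time)  # 6.0

# Halving the queue halves cycle time at the same throughput:
print((wip / 2) / throughput)  # 3.0
```

This is why queue size is the leading control lever: cycle time follows from it mechanically, which makes cycle time itself the lagging indicator.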

  • by duncanfwalker on 6/8/25, 4:22 PM

    > Comments per PR [...] served as a measure to gauge the depth of collaboration exhibited during the development and review process.

    That sounds like a particularly poor measure - it might even be negatively correlated. I've worked on teams that were highly aligned on principles, style, and understanding of the problem domain - they got there through deep collaboration - and had few comments on PRs. I've also seen junior devs go without support and be faced with a deluge of feedback come review time.

  • by tangotaylor on 6/9/25, 3:03 AM

    My favorite findings:

    * Fig 2b: the cycle time drops slightly around June and July. I have no idea why this is but it's amusing.

    * Fig 3: more coding days has very diminishing returns on cycle time. E.g. from eyeballing the graph, a 3x increase in the number of days per week spent coding (from 2 days to 6 days) only yields a ~25% improvement in cycle time.

    * Fig 7: more comments on a PR means vastly slower cycle time. I can personally attest to this as some controversial PRs that I've participated in triggered a chain reaction of meetings and soul searching.