from Hacker News

Ask HN: What is the most impressive test suite you've ever worked with?

by escot on 5/19/25, 11:30 AM with 3 comments

Did it have near 100% coverage or did it make tradeoffs on what to test?

Did it include UI tests, which are notoriously difficult, and if so, how did it handle issues like timeouts and the async nature of UI?

Did it have a rigid separation between concepts like unit vs. integration tests, or was it more fluid?

Could you refactor internal code without changing tests (the holy grail)?

  • by MoreQARespect on 5/19/25, 12:08 PM

    The test suite was 90% "end to end" unit tests: no real infrastructure was used; it was all faked. Only interactions with the outside world (web client, LLM, database) were tested, and all of those interactions were faked.

    (This is not feasible on every project, but it was on this one: database interactions were simple.)

    There were a small number (~5%) of slow tests that used a real LLM, database, infrastructure, etc., and a small number of very low-level unit tests (~5%) surrounding only complex stateless functions with simple interfaces.
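
    A minimal sketch of what one of those "end to end" unit tests can look like (the service, the fakes, and all names here are illustrative, not the project's actual code):

      import { test } from "node:test";
      import assert from "node:assert/strict";

      // The outside world is only ever reached through interfaces.
      interface Database {
        saveOrder(order: { id: string; total: number }): Promise<void>;
      }
      interface LlmClient {
        summarize(text: string): Promise<string>;
      }

      // Application code under test sees only the interfaces.
      class OrderService {
        constructor(private db: Database, private llm: LlmClient) {}
        async placeOrder(id: string, total: number): Promise<string> {
          await this.db.saveOrder({ id, total });
          return this.llm.summarize(`order ${id} for ${total}`);
        }
      }

      // In-memory fakes stand in for the real database and LLM.
      class FakeDatabase implements Database {
        orders: { id: string; total: number }[] = [];
        async saveOrder(order: { id: string; total: number }) {
          this.orders.push(order);
        }
      }
      class CannedLlm implements LlmClient {
        async summarize(text: string) {
          return `summary of: ${text}`;
        }
      }

      // The test drives the feature end to end without any real infrastructure,
      // asserting only on what crossed the faked boundaries.
      test("placing an order persists it and returns a summary", async () => {
        const db = new FakeDatabase();
        const service = new OrderService(db, new CannedLlm());
        const summary = await service.placeOrder("o-1", 42);
        assert.equal(db.orders.length, 1);
        assert.equal(summary, "summary of: order o-1 for 42");
      });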

    Refactoring could be done trivially without changing any test code 98% of the time.

    Additionally, the (YAML) tests could rewrite their expected responses based upon the actual outcome: e.g., when you added a new property to a REST API response, you just reran the test in update mode and eyeballed the updated test.
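
    A hand-rolled sketch of that update mode (the file layout, the expectedResponse field, and the UPDATE_TESTS switch are made up for illustration):

      import { readFileSync, writeFileSync } from "node:fs";
      import assert from "node:assert/strict";
      import yaml from "js-yaml";

      // Compare an actual API response against the expected response stored in
      // the YAML test; in update mode, rewrite the YAML from the actual response
      // so the change can be eyeballed in review rather than typed out by hand.
      function checkResponse(testFile: string, actual: unknown) {
        const spec = yaml.load(readFileSync(testFile, "utf8")) as {
          expectedResponse: unknown;
        };
        if (process.env.UPDATE_TESTS === "1") {
          writeFileSync(testFile, yaml.dump({ ...spec, expectedResponse: actual }));
          return;
        }
        assert.deepEqual(actual, spec.expectedResponse);
      }

      // A newly added "role" property fails the normal run; rerunning with
      // UPDATE_TESTS=1 rewrites the expectation so the diff can be reviewed.
      checkResponse("tests/create_user.yaml", { id: 1, name: "Ada", role: "admin" });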

    There was also a template used to generate how-to markdown docs from the YAML.
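
    The doc-generation step, very roughly (the YAML schema and output path here are invented):

      import { readFileSync, writeFileSync } from "node:fs";
      import yaml from "js-yaml";

      // Render a YAML test (name plus steps) into a how-to markdown document.
      interface YamlTest {
        name: string;
        steps: { description: string }[];
      }

      const spec = yaml.load(readFileSync("tests/create_user.yaml", "utf8")) as YamlTest;
      const doc = [
        `# How to: ${spec.name}`,
        ...spec.steps.map((step, i) => `${i + 1}. ${step.description}`),
      ].join("\n\n");
      writeFileSync("docs/create_user.md", doc);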

    Test coverage was probably 100%, but I never measured it. Writing all new features with TDD/documentation-driven development probably guaranteed it.

  • by tsm on 5/19/25, 11:56 AM

    I've found the Metabase test suite[0] to be very good considering it's real-world software written by a for-profit company. Coverage is good, and the correct tests usually break when doing a refactor (stuff like "Oh, I thought this change was harmless, but actually it breaks the permissions model"), etcetera. But the most important thing is that there's a strong team culture of a) demanding good tests on each PR and b) hunting down flaky tests and other sources of friction.

    Another neat thing was that there used to be a full-time SDET who spent a lot of time writing Cypress reproductions for known bugs. When you picked up the bug, you could un-skip the test that was right there waiting for you.
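
    A skipped Cypress reproduction of that kind might look roughly like this (the feature, selectors, and bug are hypothetical, not taken from Metabase's actual suite):

      // Checked-in reproduction for a known bug, skipped so CI stays green;
      // whoever picks up the bug un-skips it and makes it pass.
      // (Hypothetical feature and selectors, not Metabase's real test code.)
      describe("dashboard filters", () => {
        it.skip("date filter survives renaming the dashboard (known bug)", () => {
          cy.visit("/dashboard/1");
          cy.contains("Rename").click();
          cy.get("input[name=name]").clear().type("Renamed dashboard{enter}");
          cy.contains("Date filter").should("be.visible"); // currently fails
        });
      });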

    All that said, of course it's far from perfect!

    0: https://github.com/metabase/metabase/ (backend unit tests are in test/, frontend unit tests are in frontend/test, and end-to-end Cypress tests are in e2e)