from Hacker News

Why we shift testing left: A software dev cycle that doesn’t scale

by serverlessmom on 5/24/24, 2:54 PM with 51 comments

  • by dboreham on 5/24/24, 3:42 PM

    I'm not sure how this is different from TDD, but at the end it hits on a key point, something I've called "ROAC" (Runs On Any Computer):

    "there was often a way to run the whole system, or a facsimile, on your laptop with minimal friction"

    Any time the code you're developing can't be run without some special magic computer (that you don't have right now), special magic database (that isn't available right now), special magic workload (production load, yeah...), I predict the outcome is going to be crappy one way or another.

  • by MeetingsBrowser on 5/24/24, 3:34 PM

    > There’s a cynical read of “shift left” that is “make people do more work but pay them the same.”

    I haven't heard of this. I thought shifting left was the new hotness. I was under the impression that finding and fixing mistakes as soon as possible is considered best practice.

    > Shifting Testing Left Is a Return of an Older, Better System

    I don't really understand. I thought the "old way" was to wait until everything was done to check if it works, and the "new way" was to add tests for individual components as you go, making sure nothing breaks as new features are added.

  • by taneq on 5/24/24, 3:36 PM

    I'm confused by this idea that 'testing' is a single thing that you can 'shift'. Testing is part of each step of envisaging, specifying, designing, implementing, shipping, supporting, updating, maintaining (and I'm sure I've missed a few) a product.
  • by Kinrany on 5/24/24, 3:23 PM

    What does this have to do with microservices? The article is a mess.
  • by kelsey98765431 on 5/24/24, 4:01 PM

    The author poses a very real and valid question but offers no solution and frames this as a binary.

    The question is when to test.

    There are benefits and tradeoffs in deciding when and how to test your application. Sometimes you simply cannot test beforehand, such as when you are integrating with a live system where the cost of replicating the environment is very high, because it's a managed-solution contract with a large cough IBM cough vendor. In those cases the question is often when to test against mocks and when to test against the real thing. But that is still just how and when.

    With a solo developer, it makes plenty of sense to write the feature then test it. Writing it may be the way to actually discover the behavior you want. With a team this can still be the case. In a larger team you can pass this to QA engineers to take these rapid prototypes and find the issues with them for a different type of developer to fix bugs in.

    The reverse works the same way. You can discover your feature's behavior by experimenting with tests until the desired behavior is asserted to be occurring, using things like type validation and object patching for instrumentation, or whatever profiling and debugging tools and test suites you like.
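    As a minimal sketch of that test-first discovery loop (the `apply_discount` function and its behavior are hypothetical examples, not anything from the article):

```python
# Hedged sketch of discovering desired behavior by writing the
# assertion first. All names here are made up for illustration.

def apply_discount(price: float, pct: float) -> float:
    # Implementation written to satisfy the test below:
    # a discount can never push the price below zero.
    return max(0.0, price * (1 - pct / 100))

def test_discount_never_negative():
    # The test pins down the behavior we discovered we wanted.
    assert apply_discount(price=10.0, pct=150) == 0.0

test_discount_never_negative()
```

    The assertion is written before the implementation settles, and the implementation is then shaped until it passes.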

    The hard answer is that there is no one answer. Just as iterative development urges us to iterate on our features, we should iterate on our tests. Do not worship coverage; use it as a tool. Do not fear testing: it's just code after all, so you can actually build a full application that is just a series of tests (experiments). If everything passes, you've written the same thing as a bunch of conditional checks in your library code.

    Do what works for you, your team, the project you are on, and where you are at. If you're struggling, add some more tests. If you have so many failing tests you don't know what to do, write some more code and string your tests together with new functions. That gives you a nice way to say: OK, this is the next step of my iteration. Some like to start by testing, some like to finish with it; I do both. Whatever you do, put love into it. And if that's super-sturdy prod code with no tests that will never fail, because you are the safest data handler in the world with the most flexible business logic in the world, go for it. Just remember that if you someday forget how things work and don't want to read documentation or experiment with the live system, you can always write tests to remind you how it works.

    Tests are a form of documentation. They may not prove what you think they prove, but they document a fact about your code that is objective as determined by the code itself. This is a wonderful power, use it however you wish.
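    A small sketch of a test as documentation (`slugify` is a made-up helper; the point is that the assertion records an objective fact about what the code does):

```python
# Hedged sketch: the assertion below documents, and proves on every
# run, a fact about the code itself. Names are hypothetical.

def slugify(title: str) -> str:
    # Lowercase the title and join whitespace-separated words with hyphens.
    return "-".join(title.lower().split())

def test_slugify_collapses_whitespace():
    # Documents that runs of spaces collapse to a single hyphen.
    assert slugify("Hello   World") == "hello-world"

test_slugify_collapses_whitespace()
```

    Unlike prose documentation, this statement can't silently drift out of date: if the behavior changes, the test fails.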

  • by moomin on 5/24/24, 4:12 PM

    I feel like this is a case of one of those "movements" that is just "Yes, under certain circumstances this might be a good idea; and under others, it won't be". Consider that tests are literally executable specifications of how the system or component should behave (a formulation I associate with Dan North, but I'm sure the idea is far from original). Now the question becomes "should you write a spec every time before you start implementation"? It seems obvious that the answer to that rather depends on how well you actually understand the problem space you're trying to solve.

    And if all the work you're doing is well understood before you start, I suggest finding somewhere your manager will trust you with the hard stuff.
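    A minimal sketch of a test as an executable specification, in the spirit of the Dan North formulation above (the `Stack` class is a hypothetical component, not something from the thread):

```python
# Hedged sketch: the test states the component's contract as
# runnable code. All names are made up for illustration.

class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

def test_spec_items_pop_in_lifo_order():
    # Specification: the last item pushed is the first item popped.
    s = Stack()
    s.push("first")
    s.push("second")
    assert s.pop() == "second"
    assert s.pop() == "first"

test_spec_items_pop_in_lifo_order()
```

    Whether to write such a spec before the implementation then depends, as the comment says, on how well you understand the problem space up front.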

  • by dpflan on 5/24/24, 3:31 PM

    I'm interested in others' experiences with "shifting left": whether you are in the process of doing it, are currently doing it, are thinking about whether you should, or have done it and decided to "shift right" again, do you mind sharing here?
  • by jf22 on 5/24/24, 5:30 PM

    >This process then becomes more of a waterfall model where dev and QA are distinct, sequential phases.

    I think we overuse the term waterfall. Having a QA phase as part of a development cycle doesn't make something "waterfall."

  • by ssfrr on 5/24/24, 4:50 PM

    This is the first time I’ve heard the term “shifting left”. Maybe I missed it, but is this the same as TDD?