from Hacker News

Code and Trust: Vibrators to Pacemakers

by jackdoe on 7/6/25, 10:15 PM with 43 comments

  • by ramity on 7/10/25, 6:09 AM

    I'll provide a contrasting, pessimistic take.

    > How do you write programs when a bug can kill their user?

    You accept that you will have a hand in killing users, and you fight like hell to prove yourself wrong. Every code change, PR approval, process update, unit test, hell, even meetings all weigh heavier. You move slower, leaving no stone unturned. To touch on the pacemakers example, even buggy code that kills X% of users will keep Y% alive/improve QoL. Does the good outweigh the bad? Even small amounts of complexity can bubble up and lead to unintended behavior. In a corrected vibrator example, what if the frequency becomes so large it overflows and leads to burning the user? Youch.
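
    Something as small as a saturating guard would contain that particular failure. A toy sketch (the limit and names here are made up, not from the article's code):

        MAX_SAFE_HZ = 400   # assumed hardware limit, purely illustrative
        MIN_HZ = 0

        def safe_frequency(requested_hz: float) -> float:
            # Saturate instead of wrapping around or passing garbage to the driver.
            if requested_hz > MAX_SAFE_HZ:
                return MAX_SAFE_HZ
            if requested_hz < MIN_HZ or requested_hz != requested_hz:  # also catches NaN
                return MIN_HZ
            return requested_hz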

    The best insight I have to offer is that time is often overlooked and taken for granted. I'm talking Y2K data type, time drift, time skew, special relativity, precision, and more. Some of the most interesting and disturbing bugs I've come across all occurred because of time. "This program works perfectly fine, but after 24 hours it starts infinitely logging." If time is an input, do not underestimate time.
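
    To make the time point concrete, a toy Python sketch of one classic trap (hypothetical, not from any real device): timing an interval with the wall clock, which can jump, versus a monotonic clock, which cannot:

        import time

        start_wall = time.time()
        start_mono = time.monotonic()

        def expired_buggy(timeout_s: float) -> bool:
            # Wall clock: an NTP correction or DST change can make this fire
            # early, late, or report a negative elapsed time.
            return time.time() - start_wall >= timeout_s

        def expired(timeout_s: float) -> bool:
            # Monotonic clock only moves forward, so the elapsed time is sane.
            return time.monotonic() - start_mono >= timeout_s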

    > How do we get to a point to `trust` it?

    You traverse the entire input space to validate the output space. This is not always possible. In these cases, audit compliance can take the form of traversing a subset of the input space deemed "typical/expected" and moving forward with the knowledge that edge cases can exist. Even with fully audited software, oddities like a cosmic bit flip can occur. What then? At some point, in this beautifully imperfect world, one must settle for good enough over perfection.
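
    Something like this toy sketch is what I mean by traversing the input space; the spec, names, and 8-bit input range are all made up for illustration:

        VALID_INPUTS = range(256)     # assumed 8-bit sensor reading

        def pacing_delay_ms(sensor_value: int) -> int:
            # placeholder for the logic under audit
            return 1000 - 2 * sensor_value

        def exhaustive_audit() -> None:
            for value in VALID_INPUTS:          # the entire input space
                out = pacing_delay_ms(value)
                assert 0 < out <= 1000, f"unsafe output {out} for input {value}"

        exhaustive_audit()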

    Astute readers might be furiously pounding their keyboards to mention the halting problem: we can't even provably show that a particular input will produce an output at all, let alone verify an entire space.

    > I am convinced that open code, specs and (processes) must be requirement going forward.

    I completely agree, but I don't believe this will outright prevent user deaths. Having open code, specs, etc. aids accountability, transparency, and external verification. I must say I feel there are pressures against this, as there is monumental power in being the only party able to ascertain the facts.

  • by MangoToupe on 7/10/25, 3:43 AM

    > To be honest, my threshold is quite low, I think in 1 year we will be able to take any piece of code and audit it with `claude`, and I will put my life on the line using it.

    Bold! I wonder what leads to this sort of confidence.

    > Radical transparency is the only answer. I am convinced that open code, specs and procesees must be requirement going forward.

    Yes. Transparency is the only foundation of trust.

  • by tzs on 7/10/25, 4:30 AM

    > This program will vibrate with increase frequency using the Fibonacci numbers 2, 3, 5, 8, 13, 21, 34..

    No it won't. The sequence it follows is 2, 1, 1, 1, ...

    After spotting that I was curious if LLMs would also spot it. I asked Perplexity this and gave it the code:

    > This is Python code someone posted for a hypothetical vibrator. They said it will vibrate with increasing frequency following Fibonacci numbers: 2, 3, 5, 8, ...

    > Does the frequency really increase following that sequence?

    It correctly found the actual sequence, explained why the code fails, and offered a fix.
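
    The claim is also mechanically checkable. A toy harness along these lines (the generator below is a corrected stand-in, since the article's code isn't reproduced in this thread):

        from itertools import islice

        def freq_sequence():
            # a corrected Fibonacci-style update, standing in for the article's code
            a, b = 2, 3
            while True:
                yield a
                a, b = b, a + b

        def matches_claim(gen, expected=(2, 3, 5, 8, 13, 21, 34)):
            return tuple(islice(gen, len(expected))) == expected

        assert matches_claim(freq_sequence())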

  • by amaterasu on 7/10/25, 4:48 AM

    I'm trivialising, but a lot of software in medical devices is turning a GPIO pin on/off in response to another pin, then announcing that it did so. The piece missing from the article is that the assumed probability of software/firmware (or anything really) failing is 1.0. Everything is engineered around the assumption that things (_especially_ software) WILL fail, and around minimising the consequences when they do. LLMs writing the code will happen soon, it's GPIO pin control after all. LLMs proving the code is as safe as possible and that they have thought about the failure modes will be a while.
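
    To make the shape of that concrete, a toy Python sketch of the "assume it will fail" pattern; the pin hooks are hypothetical, not any real GPIO API:

        SAFE_STATE = 0

        def respond_to_input(read_input, write_output, read_output, log):
            # read_input/write_output/read_output are hypothetical hardware hooks
            try:
                level = 1 if read_input() else 0
                write_output(level)
                # announce what the hardware actually did, not what we asked for
                log(f"output now {read_output()}")
            except Exception:
                write_output(SAFE_STATE)   # any fault drives the known-safe state
                log("fault: output forced to safe state")
                raise
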
  • by mrheosuper on 7/10/25, 3:44 AM

    > How do we get to a point to `trust` it?

    You don't, just as you don't trust code written by a human. The code needs to be audited, line by line.

  • by guzik on 7/10/25, 11:42 AM

    Had an interesting encounter recently. Was coming back from a festival and got talking to this woman who mentioned she has to be careful about her health. Turns out she has a pacemaker. Since I'm in the medical device space (much lower risk category though), I was curious about her experience. What struck me was how much she knew about her device: the technical specs, failure modes, battery life, everything. Makes total sense when you think about it (if your life depends on a piece of tech sitting in your chest, you'd probably want to understand it inside out too).
  • by gnfargbl on 7/10/25, 9:36 AM

    I'm surprised not to see a mention of formal methods in the article. I know these are kind of the "nuclear fusion" of Computer Science, but so were neural nets until relatively recently.

    I would have guessed that AI ought to be pretty good at converting code into formally verifiable forms. Is anyone working on that?
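
    Even a lightweight SMT check goes some of the way. A toy sketch using Z3's Python bindings (assumes the z3-solver package; the clamp and limit are made up, not a real pacemaker property):

        from z3 import And, If, Int, Not, Solver, unsat

        MAX_HZ = 400                       # made-up limit
        x = Int("x")
        clamped = If(x > MAX_HZ, MAX_HZ, If(x < 0, 0, x))

        s = Solver()
        s.add(Not(And(clamped >= 0, clamped <= MAX_HZ)))   # search for a violation
        assert s.check() == unsat          # unsat: no input can break the bound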

  • by fjfaase on 7/10/25, 8:29 AM

    Vibrators might have killed more people than pacemakers, because whenever you use a vibrator it could introduce pathogens into places where they could lead to a fatal infection or cause cancer. There are many more people using vibrators than there are people with pacemakers, and vibrators face much weaker requirements around safety and proper use.
  • by jackdoe on 7/10/25, 7:09 AM

    an interesting article gibbitz linked in a dup post

    https://www.edn.com/toyotas-killer-firmware-bad-design-and-i...

    > Embedded software used to be low-level code we’d bang together using C or assembler. These days, even a relatively straightforward, albeit critical, task like throttle control is likely to use a sophisticated RTOS and tens of thousands of lines of code.

  • by ulf-77723 on 7/10/25, 11:27 AM

    Great article! I think it always depends on which kind of code is being used in which industry. Anything related to human life will need to be guarded by humans: critical infrastructure, medical devices.

    If I think about anything that might not directly impact human life, AI code is OK for me.

    Reaching the point where the majority trusts generated code the way we trust our e-mail spam filter may be difficult: "The machine will probably do the right thing, no need to interact."

  • by edwcross on 7/10/25, 7:29 AM

    > really enjoyed reading some real pacemaker code

    Where can one find such code? Is all of it locked under NDAs?

  • by cladopa on 7/10/25, 7:25 AM

    > How do you write programs when a bug can kill their user?

    Engineers have had this responsibility for a long time. The Code of Hammurabi covered what to do when a builder's house collapses: put the builder inside a house and make it collapse. Versions of this exist today that stop short of killing you but can still ruin the rest of your life: if you build a bridge and it collapses, a house that collapses or burns, a car or a motorway that causes accidents.

    A plane that falls out of the sky, a toxic detergent, a carcinogenic product, a nuclear plant meltdown...

    The case of a pacemaker is actually not that hard: it is a very simple device, and without it you will die for sure, so the first patients gladly accepted the risk of dying from a pacemaker malfunction over the certainty of dying without one.

    There is a methodology for creating things that can't fail, ranging from exhaustive testing to redundant elements. As an engineer I have studied and applied it. Sometimes testing can make things ten to twenty times more expensive, as in aviation or medical components.

    There is the option of failing often and learning from mistakes. It is by far the cheapest and fastest option. It was used by early aviators, most of whom died testing their machines, but we got amazing planes as a result. It was used in WWII by the Americans, producing in enormous quantities against much better German machines built in smaller numbers. It was used by Soviet Russia to compete against the West while spending way less money (and sacrificing people), and it is used today by Elon Musk building rockets. Thanks to automation we don't need to risk lives while testing, only equipment.

    Now some people criticise Elon for testing Autopilot with people: "It can kill people!!", they say. But not using it also kills people, when someone who hasn't slept well has to drive to a job meeting, falls asleep at the wheel, and kills herself and some other driver on the road.

  • by xvilka on 7/10/25, 9:44 AM

    You can use stricter languages for both, like Rust, for example, or even stricter ones like the SPARK dialect of Ada. If AI is able to produce code in these languages, the code will be way more trustworthy.
  • by pragmatic on 7/10/25, 3:59 AM

    Brakes!!

    AI can’t even spell/grammar check this article lol.

  • by imron on 7/10/25, 11:36 AM

    Vibe coding…