from Hacker News

Maybe we should be designing for machines too

by artski on 5/18/25, 8:32 AM with 1 comments

  • by artski on 5/18/25, 8:32 AM

    I’ve been thinking a lot about how new features and systems are built lately, especially with everything that’s happened over the past few years. It’s interesting how most of the AI stuff we see in products today is basically tacked on after the fact to chase the trend - some more valuable than others, depending on how forced it feels. You build your tool, your dashboard, your app, and then you try to layer in some sort of automation or “assistant” once it’s already working. And I get why - it makes sense when you’ve already got an established thing and you want to enhance it without breaking what people rely on. I did a longer write-up on Substack about it, but figured I'd expand the discussion here.

    But I wonder if we’re now at a point where that can’t really be the default anymore. If you’re building something new in 2025, whether it’s a product, internal tool, or even just a feature, maybe it should be designed from the ground up to be usable not just by a human clicking buttons, but by another system entirely. A model, a script, an orchestration layer - whatever you want to call it.

    It’s not about being “AI-first” in the marketing sense. It’s more about thinking: can this thing I’m building be used by something else without needing a human in the loop? Can it expose its core functions as callable actions? Can its state be inspected in a structured way? Can it be reasoned about or composed into a workflow? That kind of thinking will, I suspect, become the baseline expectation - not just a “nice to have.”
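
    To make that concrete, here’s a rough sketch in Python of what “callable actions plus inspectable state” could look like - ReportService and everything in it is invented for illustration, not from any real product:

        import json
        from dataclasses import dataclass, field

        @dataclass
        class ReportService:
            """A hypothetical feature exposed as callable actions, not just buttons."""
            jobs: dict = field(default_factory=dict)

            def generate_report(self, account_id: str, period: str) -> dict:
                # The core function is directly callable - a UI, a script,
                # or an agent can all invoke it the same way.
                job_id = f"job-{len(self.jobs) + 1}"
                self.jobs[job_id] = {
                    "account_id": account_id,
                    "period": period,
                    "status": "queued",
                }
                return {"job_id": job_id, "status": "queued"}

            def inspect_state(self) -> str:
                # State comes back as structured data, not buried in a rendered view.
                return json.dumps({"jobs": self.jobs}, indent=2)

        svc = ReportService()
        print(svc.generate_report("acct-42", "2025-Q1"))
        print(svc.inspect_state())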

    It’s also not really that complicated. Most of the time it just means thinking in terms of well-structured APIs, surfacing decisions and logs clearly, and not baking critical functionality too deeply into the front-end. But the shift is mental. You start designing features as tools - not just user flows - and that opens up all kinds of new possibilities. For example, someone might plug your service into a broader workflow and have it run unattended, or an LLM might be able to introspect your system state and take useful actions, or you can just let users automate things with much less effort.
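
    One way to picture the “features as tools” shift (again, just a sketch with invented names, not a prescription): instead of wiring logic into a click handler, register it once with a name and a description, and let the UI, a cron job, or an agent all hit the same entry point.

        from typing import Callable

        TOOLS: dict[str, dict] = {}

        def tool(name: str, description: str, params: dict):
            """Register a function as a named, described, callable action."""
            def wrap(fn: Callable) -> Callable:
                TOOLS[name] = {"fn": fn, "description": description, "params": params}
                return fn
            return wrap

        @tool("archive_project",
              description="Archive a project and return its final state",
              params={"project_id": "string"})
        def archive_project(project_id: str) -> dict:
            # The logic lives here, not in a front-end event handler.
            return {"project_id": project_id, "status": "archived"}

        # A human UI, a scheduled script, or an LLM-driven planner can all do this:
        action = TOOLS["archive_project"]
        print(action["description"])
        print(action["fn"](project_id="proj-7"))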

    There’s been some early but interesting work around formalising how systems expose their capabilities to automation layers. One effort I’ve been keeping an eye on is MCP (the Model Context Protocol). The quick summary: it aims to let a service describe what it can do - what functions it offers, what inputs it accepts, what guarantees or permissions it requires - in a way that downstream agents or orchestrators can understand without brittle hand-tuned wrappers. It’s still early days, but if this sort of approach gains traction, I can imagine a future where this kind of “self-describing system contract” becomes part of the baseline for interoperability. Kind of like how APIs used to be considered secondary, and now they are the product. It’s not there yet, but if autonomous coordination becomes more common, this may quietly become essential infrastructure.
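
    For a feel of what such a contract might look like, here’s roughly the shape of a self-describing tool entry - a name, a human-readable description, and a JSON Schema for inputs, which is the general pattern MCP uses (paraphrased from memory, so treat it as illustrative and check the actual spec):

        # A hypothetical self-describing capability entry.
        tool_manifest = {
            "name": "create_invoice",
            "description": "Create a draft invoice for a customer",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "customer_id": {"type": "string"},
                    "amount_cents": {"type": "integer", "minimum": 0},
                },
                "required": ["customer_id", "amount_cents"],
            },
        }

        # An orchestrator can read this, generate a valid call, and invoke
        # the service without a hand-written wrapper - which is the whole point.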

    I don’t know. Just a thought I’ve been chewing on. Curious what other people think. Is anyone building things with this mindset already or are there good examples out there of products or platforms that got this right from day one?