from Hacker News

Build or don't build a software feature?

by knadh on 3/6/22, 11:24 AM with 42 comments

  • by axg11 on 3/7/22, 2:55 PM

    Interesting visual tool and mental framework. Unfortunately, I think this would have "failed" in all the decision processes I have been a part of. Why? Each of the questions is valid, but bad decisions usually stem from incorrectly answering one of these questions.

    > How valuable is the feature to the target users, truly?

    Take this question for example. Accurately answering it is not always possible. A common mistake is to ask your users or take their word for how valuable they perceive a feature to be. That approach is better than nothing, but can often lead teams astray. This isn't because your users are stupid, it's because your users don't have the perspective that you have in terms of (a) what is possible, and (b) the knock-on effects of the feature on other aspects of the software value proposition.

    Note: the above is _not_ true about bugs. If a new feature is actually a bug/issue fix raised by your users, they are usually right.

    > What is the time, technical effort, and cost to build the feature?

    Estimating technical effort is so difficult that it is an entire field in itself. When working on complex systems, you also have to consider the future complexity introduced when building on top of the new feature (linked to the last question).

  • by chrismorgan on 3/7/22, 2:40 PM

    > This page requires Javascript to function.

    Missed an opportunity to present the “don’t build” reasoning! :-)

  • by mdeck_ on 3/7/22, 2:48 PM

    Sounds like a modified/more complex version of the well-known RICE framework.

    https://www.intercom.com/blog/rice-simple-prioritization-for...

  • by blowski on 3/7/22, 2:24 PM

    This is useful as a way of demonstrating "questions to ask yourself". But how do you decide whether something is a 3/10 or a 7/10 on any of these? What might be even better is being able to compare options against each other.

  • by lucideer on 3/7/22, 6:53 PM

    As someone who often advocates for not building features, after playing with this a little I think some of the constants* behind its formula are skewed toward the pessimistic/negative outcome.

    If a very complex feature is of truly high value to 90% of my users, it seems uncontroversially worthwhile; the tool gives me "No, but a close call (48%)".

    I'd suggest putting a little more weight on value & user importance and a little less weight on complexity/effort.

    Otherwise, GREAT tool. Even just as an aid to get across the idea that some features should not be built, which is often not understood.

    *for reference, weights are currently as follows:

    # users: 10

    effort: -15

    user importance: 20

    biz complexity: -15

    value: 20

    risk: -20
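The weights above suggest a simple linear score. Below is a rough Python sketch of how such a formula might work. The weights are the ones quoted in this comment; the normalization into the 5%-95% band observed elsewhere in the thread is a guess, not the tool's actual code.

```python
# Hypothetical reconstruction of the tool's scoring formula.
# Weights are the ones reported in this thread; the linear
# normalization and 5%-95% clamping are assumptions.

WEIGHTS = {
    "num_users": 10,
    "effort": -15,
    "user_importance": 20,
    "biz_complexity": -15,
    "value": 20,
    "risk": -20,
}

def build_score(sliders: dict[str, float]) -> float:
    """Map 1-10 slider values to a 5-95% 'build it' percentage."""
    raw = sum(WEIGHTS[k] * sliders[k] for k in WEIGHTS)
    # Worst/best possible raw scores given the 1-10 slider range.
    lo = sum(w * (10 if w < 0 else 1) for w in WEIGHTS.values())
    hi = sum(w * (1 if w < 0 else 10) for w in WEIGHTS.values())
    frac = (raw - lo) / (hi - lo)
    return 5 + 90 * frac  # squeeze into the observed 5%-95% band

# All sliders at the midpoint (5.5) land exactly at 50%.
mid = build_score({k: 5.5 for k in WEIGHTS})
```

Under this sketch, maxing out the positive sliders while minimizing the negative ones yields 95%, and the reverse yields 5%, matching the range other commenters noticed.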

  • by smoyer on 3/7/22, 3:54 PM

    This is a cool tool, but it should differentiate between "users" and "customers" so that the weighting reflects the potential for making paying users happy (or perhaps the word "user" should just be changed). It also appears that the sliders are equally weighted, but I find these factors are NOT equally weighted; it depends on the particular feature, the lifecycle of the product, and the lifecycle of the company.

  • by moasda on 3/6/22, 9:35 PM

    Interesting approach for a decision calculator.

    Why is the result always between 5% and 95%?

  • by laurent123456 on 3/7/22, 1:37 PM

    I'm not sure it would be useful to decide if something should be implemented or not (in the sense that you often already know), but it's a great visual tool regardless, and could be used to show users how development decisions are made.

    A nice additional feature would be a way to bookmark a set of slider values, so that it can be shared with others.

  • by kevsim on 3/7/22, 5:05 PM

    This is cool!

    It's kind of similar to what the RICE/ICE frameworks are trying to help achieve [0].

    We built some scoring of impact/effort into our tool Kitemaker [1] and allow teams to prioritize their work by these things. We ended up going with really simple scores like S/M/L since it's super hard to know the difference between a 6 and a 7 (and it probably doesn't really matter anyway).

    0: https://medium.com/glidr/how-opportunity-scoring-can-help-pr...

    1: https://kitemaker.co

  • by motohagiography on 3/7/22, 4:21 PM

    This is ideal as a teaching tool. Related to another thread (the dictator's handbook thread): I have used salience models to help make feature decisions; they are a lot like story-points poker without the overhead, and the effect is the same. The key challenge is getting honest assessments of the questions from people. Most people, when handed a model, will ask "sure, how do I get it to say what I want it to say?", and if it doesn't say that, they won't accept that their desire is thwarted by principle.

    Such a useful tool, and I foresee referring to it regularly.

  • by rhynlee on 3/7/22, 4:18 PM

    Interesting mental model you made here, I like it! Stuff like this definitely helps get people into system 2 thinking instead of going with intuition and in this case that's probably a good thing.

    Charts showing how the answer changes as each slider moves over a given range might help; as others have mentioned, it's not easy to answer the questions accurately. That could help handle uncertainty, since people would then be able to see the range of answers between their "best case" and "worst case".

  • by dudul on 3/7/22, 2:47 PM

    I like the idea, but unfortunately, with the exception of the first slider, they are all subjective and hard to quantify.

    What's a technical effort of 6 vs 4? What's a technical debt of 8 vs 6 or 7?

  • by RexM on 3/7/22, 5:19 PM

    Seems like it's missing a tick box that asks if it's already been promised to a customer. That would automatically override everything and make it say "Yes, build it."

  • by smoe on 3/7/22, 4:46 PM

    Just a bit of a nitpick (or maybe an issue for non-native speakers): the answers are confusing in relation to the title.

    Title: Don't build (or build) that feature

    Answer: Yes

    Given how I answered the questions (high impact, low effort), I think it should tell me to build. But as I read it, the tool either tells me not to build, or answers an OR question with yes or no.

  • by hardwaresofton on 3/6/22, 2:16 PM

    Love the stuff that Zerodha builds (I'm a heavy user of Listmonk [0]). Maybe this is the secret to how they ship high-impact features so consistently!

    [0]: https://github.com/knadh/listmonk

  • by tantalor on 3/7/22, 3:48 PM

    Besides the first, all the measures here are relative, so I suppose it is up to the user to calibrate.

    Take build cost. Suppose a project would take 2 engineers 4 weeks to build. A large team might call that a "2", but a small team would call it an "8".

  • by m3047 on 3/7/22, 5:26 PM

    Reminds me of https://weekdone.com/ for some reason. It should be a Jira plugin where the team can vote on each of the questions. =)

  • by metanonsense on 3/7/22, 3:59 PM

    I like the idea. Some corner cases yield odd results, however, e.g. 1, 10, 10, 1, 1.

  • by pete_nic on 3/7/22, 1:49 PM

    Great job on this decision-making tool. One thing that might be missing: the business viability of the feature. All else being equal, you prioritize features that add value to the company that builds them.

  • by Pxtl on 3/7/22, 5:50 PM

    Using decimals instead of a 0-10 scale where 5 could be the halfway mark (or 1-5 with 3 as halfway) is kind of an example of "don't build that feature".

  • by faeyanpiraat on 3/7/22, 7:03 PM

    I wouldn’t show the result in realtime, as it allows fine-tuning the parameters to reach the result I want, not the one I need.

  • by troebr on 3/7/22, 3:28 PM

    Maybe it should say "build" or "don't build" instead of yes/no.

  • by azhenley on 3/7/22, 5:15 PM

    The value seems to range from 5% to 95%, is that to leave room for uncertainty?

  • by p0nce on 3/7/22, 8:02 PM

    Features are a last resort proposition.

  • by nada_ss on 3/6/22, 11:37 AM

    this is a survey?