by knadh on 3/6/22, 11:24 AM with 42 comments
by axg11 on 3/7/22, 2:55 PM
> How valuable is the feature to the target users, truly?
Take this question for example. Accurately answering it is not always possible. A common mistake is to ask your users, or to take their word for how valuable they perceive a feature to be. That approach is better than nothing, but it can often lead teams astray. This isn't because your users are stupid; it's because they don't have the perspective you have in terms of (a) what is possible, and (b) the knock-on effects of the feature on other aspects of the software's value proposition.
Note: the above is _not_ true of bugs. If a new feature is actually a fix for a bug/issue raised by your users, they are usually right.
> What is the time, technical effort, and cost to build the feature?
Estimating technical effort is so difficult that it is an entire field in itself. When working on complex systems, you also have to consider the future complexity introduced when building on top of the new feature (linked to the last question).
by chrismorgan on 3/7/22, 2:40 PM
Missed an opportunity to present the “don’t build” reasoning! :-)
by mdeck_ on 3/7/22, 2:48 PM
https://www.intercom.com/blog/rice-simple-prioritization-for...
by blowski on 3/7/22, 2:24 PM
by lucideer on 3/7/22, 6:53 PM
If a very complex feature is genuinely of high value to 90% of my users, it seems uncontroversially worthwhile: yet the tool gives me "No, but a close call (48%)".
I'd suggest putting a little more weight on value & user importance and a little less weight on complexity/effort.
Otherwise, GREAT tool. Even just as an aid to get across the idea that some features should not be built, which is often not understood.
*for reference, weights are currently as follows:
# users: 10
effort: -15
user importance: 20
biz complexity: -15
value: 20
risk: -20
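For illustration, here is a minimal sketch of how a weighted score like this might be combined into a yes/no percentage. Only the six weights above come from the tool; the 0-10 slider range, the normalization, and the clamping are all assumptions, not the tool's actual code:

    # Hypothetical reconstruction: only the weights come from the list
    # above; the slider range and normalization are assumed for illustration.
    WEIGHTS = {
        "num_users": 10,        # how many users want it
        "effort": -15,          # technical effort (negative pulls toward "don't build")
        "user_importance": 20,  # how important the feature is to those users
        "biz_complexity": -15,  # business complexity it adds
        "value": 20,            # value it delivers
        "risk": -20,            # risk of building it
    }

    def build_score(sliders):
        """Map 0-10 slider values to a 'build it' percentage."""
        raw = sum(WEIGHTS[k] * sliders[k] for k in WEIGHTS)
        best = sum(w * 10 for w in WEIGHTS.values() if w > 0)   # +500
        worst = sum(w * 10 for w in WEIGHTS.values() if w < 0)  # -500
        pct = 100 * (raw - worst) / (best - worst)
        # Clamping like this would also explain why the tool never
        # reports below 5% or above 95%.
        return max(5.0, min(95.0, pct))

    # lucideer's scenario: very complex, but genuinely valuable to 90% of users.
    print(build_score({"num_users": 9, "effort": 10, "user_importance": 9,
                       "biz_complexity": 9, "value": 9, "risk": 9}))  # 48.5

Under this reconstruction, effort and risk can fully cancel value and importance, which is consistent with lucideer's point that a genuinely valuable feature can still land on "No".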
by smoyer on 3/7/22, 3:54 PM
by moasda on 3/6/22, 9:35 PM
Why is the result always between 5% and 95%?
by laurent123456 on 3/7/22, 1:37 PM
A nice additional feature would be a way to bookmark a set of slider values, so that it can be shared with others.
by kevsim on 3/7/22, 5:05 PM
It's kind of similar to what the RICE/ICE frameworks are trying to help achieve [0].
We built some scoring of impact/effort into our tool Kitemaker [1] and allow teams to prioritize their work by these things. We ended up going with really simple scores like S/M/L since it's super hard to know the difference between a 6 and a 7 (and it probably doesn't really matter anyway).
0: https://medium.com/glidr/how-opportunity-scoring-can-help-pr...
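For context, the canonical RICE score is reach × impact × confidence ÷ effort. A small sketch with an assumed S/M/L-to-number mapping in the spirit of what kevsim describes (the actual buckets Kitemaker uses aren't stated here):

    # The RICE formula itself is standard; the S/M/L bucket values below
    # are illustrative assumptions, not Kitemaker's actual mapping.
    SIZE = {"S": 1, "M": 2, "L": 3}

    def rice(reach, impact, confidence, effort):
        """reach: users per period; impact/effort: S/M/L; confidence: 0..1."""
        return reach * SIZE[impact] * confidence / SIZE[effort]

    # A cheap, medium-impact feature can outscore a risky big bet:
    print(rice(reach=500, impact="M", confidence=0.8, effort="S"))  # 800.0
    print(rice(reach=200, impact="L", confidence=0.5, effort="L"))  # 100.0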
by motohagiography on 3/7/22, 4:21 PM
Such a useful tool, and I foresee referring to it regularly.
by rhynlee on 3/7/22, 4:18 PM
I thought charts showing how the answer changes as each slider moves across a range of values might help; as others have mentioned, it's not easy to answer the questions accurately. It could also help handle uncertainty, since people would be able to see the range of answers between their "best case" and "worst case".
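A rough version of that is easy to prototype: sweep one slider across its full range while holding the others fixed, and chart the resulting scores. A sketch, reusing the same hypothetical scoring reconstruction from earlier in the thread:

    # Sensitivity sweep: vary one slider over its range while the others
    # stay fixed, exposing the spread between best and worst case.
    # build_score is the same hypothetical reconstruction sketched above.
    WEIGHTS = {"num_users": 10, "effort": -15, "user_importance": 20,
               "biz_complexity": -15, "value": 20, "risk": -20}

    def build_score(s):
        raw = sum(WEIGHTS[k] * s[k] for k in WEIGHTS)
        return max(5.0, min(95.0, 100 * (raw + 500) / 1000))

    base = {"num_users": 9, "effort": 5, "user_importance": 9,
            "biz_complexity": 5, "value": 9, "risk": 5}
    for v in range(11):  # sweep the "effort" slider from 0 to 10
        print(f"effort={v:2d} -> {build_score({**base, 'effort': v}):.0f}%")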
by dudul on 3/7/22, 2:47 PM
What's a technical effort of 6 vs 4? What's a technical debt of 8 vs 6 or 7?
by RexM on 3/7/22, 5:19 PM
by smoe on 3/7/22, 4:46 PM
Title: Don't build (or build) that feature
Answer: Yes
I think, given how I answered the questions (high impact, low effort), it should tell me to build. But as worded, the tool either tells me not to build, or it answers an either/or question with a yes or no.
by hardwaresofton on 3/6/22, 2:16 PM
by tantalor on 3/7/22, 3:48 PM
Take build cost. Suppose a project would take 2 engineers 4 weeks to build. A large team may call that a "2", but a small team would call it an "8".
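One way to make that concrete is to score effort as a share of team capacity rather than in absolute person-weeks. A sketch, where the team sizes and the planning horizon are assumptions chosen purely to reproduce the numbers in the comment:

    # Illustrative only: the same 8 person-weeks maps to a different 0-10
    # effort score depending on team capacity. The 5-week horizon and the
    # team sizes are picked just to reproduce the "2" vs "8" above.
    def effort_score(person_weeks, team_size, horizon_weeks=5):
        capacity = team_size * horizon_weeks  # person-weeks the team can spend
        return min(10.0, 10 * person_weeks / capacity)

    cost = 2 * 4  # 2 engineers for 4 weeks = 8 person-weeks
    print(effort_score(cost, team_size=8))  # large team: 2.0
    print(effort_score(cost, team_size=2))  # small team: 8.0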
by m3047 on 3/7/22, 5:26 PM
by metanonsense on 3/7/22, 3:59 PM
by pete_nic on 3/7/22, 1:49 PM
by Pxtl on 3/7/22, 5:50 PM
by faeyanpiraat on 3/7/22, 7:03 PM
by troebr on 3/7/22, 3:28 PM
by azhenley on 3/7/22, 5:15 PM
by p0nce on 3/7/22, 8:02 PM
by nada_ss on 3/6/22, 11:37 AM