from Hacker News

Ask HN: How to explain that ML approach is not 100% perfect?

by masterchief1s1k on 3/23/22, 4:25 AM with 4 comments

Hi, I am having trouble explaining the accuracy limits of my ML models to my team members and employer.

People in software companies tend to think that since machines perform logical and mathematical operations perfectly, anything running on them, including AI, will inherit that perfection.

Currently, no matter how hard I try to improve my model (generating more data, applying different augmentations, changing the model architecture, etc.), they keep trying to find input data that proves my model doesn't "generalize" well enough.

  • by simondebbarma on 3/23/22, 5:41 AM

    This is a very good visual introduction to Machine Learning created by the team at R2D3. It helps even non-technical people understand how ML works and that training a model means balancing bias and variance error rates. Part 1[0] goes over what decision trees are, and Part 2[1] goes over the Bias-Variance Tradeoff.

    [0] https://r2d3.us/visual-intro-to-machine-learning-part-1/

    [1] https://r2d3.us/visual-intro-to-machine-learning-part-2/
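The tradeoff that comment describes can be shown with a toy sketch (all names and data here are illustrative, not from the R2D3 articles): a high-variance model that memorizes its training set scores 100% on data it has seen, yet still misses on fresh data, and because 20% of the labels are noisy, even the true underlying rule cannot exceed roughly 80% accuracy.

```python
# Toy illustration of the bias-variance tradeoff and irreducible error.
# Hypothetical example: "memorize" is a high-variance model, "true_rule"
# is the best any model could do given the label noise.
import random

random.seed(0)

def make_data(n):
    # True rule: label = 1 if x > 0.5, but 20% of labels are flipped (noise).
    data = []
    for _ in range(n):
        x = random.random()
        label = 1 if x > 0.5 else 0
        if random.random() < 0.2:
            label = 1 - label
        data.append((x, label))
    return data

train, test = make_data(200), make_data(200)

# High-variance model: memorize training points, fall back to the
# nearest memorized x for anything unseen (a 1-nearest-neighbor lookup).
table = dict(train)

def memorize(x):
    if x in table:
        return table[x]
    nearest = min(table, key=lambda t: abs(t - x))
    return table[nearest]

# The true underlying rule, with no access to the noise.
def true_rule(x):
    return 1 if x > 0.5 else 0

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

print("memorizer on train:", accuracy(memorize, train))   # exact lookup: 1.0
print("memorizer on test: ", accuracy(memorize, test))
print("true rule on test: ", accuracy(true_rule, test))
```

The memorizer looks perfect on training data, which is exactly the trap the OP's colleagues are falling into: because of the label noise, no model, not even the true rule, can reach 100% on new inputs.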

  • by jstx1 on 3/23/22, 11:59 AM

    What's there to explain? Tell people that it isn't perfect; from your description, they seem to know this already, since they keep hunting for failing inputs.

    Propose to share a proportional number of correctly predicted examples for every incorrect example they come up with? (Okay, don't actually do this)

    Who is criticising your work? Manager/colleague/stakeholder/C-level executive? Does it matter if they are criticising it? What happens if you just shrug and keep doing what you're doing?

    The whole thing seems like a communication/political problem, not a technical one, and it's hard to give advice when we don't know the specifics.