by masterchief1s1k on 3/23/22, 4:25 AM with 4 comments
People in software companies tend to think that since machines perform logical and mathematical operations perfectly, anything running on them, including AI, will inherit that property.
Currently, no matter how hard I try to improve my AI model (generating more data, applying different augmentations, changing the model architecture, etc.), they will still try to find input data that proves my model is not "generalized" enough.
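As a side note on the augmentation the parent mentions, a minimal sketch of what image-style data augmentation typically looks like (this is a generic NumPy illustration, not the poster's actual pipeline; the flip probability and noise scale are made-up values):

```python
import numpy as np


def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply two simple augmentations to a float image in [0, 1]:
    a random horizontal flip and small Gaussian pixel noise."""
    out = image.copy()
    if rng.random() < 0.5:          # flip half the time (arbitrary choice)
        out = out[:, ::-1]          # horizontal flip
    noise = rng.normal(0.0, 0.02, size=out.shape)  # mild noise (arbitrary scale)
    return np.clip(out + noise, 0.0, 1.0)


rng = np.random.default_rng(0)
img = rng.random((8, 8))            # toy 8x8 grayscale "image" in [0, 1]
aug = augment(img, rng)
print(aug.shape)                    # same shape as the input
```

The point of augmentation is to expose the model to label-preserving variations of the training data, which is exactly the kind of effort the parent describes making before the "not generalized enough" objection comes back anyway.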
by simondebbarma on 3/23/22, 5:41 AM
[0] https://r2d3.us/visual-intro-to-machine-learning-part-1/
[1] https://r2d3.us/visual-intro-to-machine-learning-part-2/
by jstx1 on 3/23/22, 11:59 AM
Propose sharing a proportional number of correctly predicted examples for every incorrect example they come up with? (Okay, don't actually do this.)
Who is criticising your work? Manager/colleague/stakeholder/C-level executive? Does it matter if they are criticising it? What happens if you just shrug and keep doing what you're doing?
The whole thing seems like a communication/political problem, not a technical one, and it's hard to give advice when we don't know the specifics.