by mongodude on 1/10/18, 9:09 PM with 1 comments
by PaulHoule on 1/11/18, 1:48 PM
As far as accuracy expectations go, those have to be managed. There are real requirements for particular applications, and you have to do "well enough". Sometimes the status quo is pretty bad and easy to improve on; sometimes there are regulatory requirements or political concerns (within the organization) that will make A.I. a tough sell.
BTW, any commercially useful A.I. is "superhuman" in some way, in the same sense that a pocket calculator or bulldozer is superhuman. If a human is "better" at doing something than an A.I., then there is no point in developing the A.I.