by seagullz on 12/24/20, 6:47 PM with 120 comments
by klenwell on 12/24/20, 7:58 PM
I'm not a data scientist and I've never encountered the term "provenance" before, but I've encountered the problem he talks about in the wild here and there and have searched for a good way to describe it. His ultrasound example is a great, chilling illustration of it.
I also like the term "Intelligence Augmentation" (IA). I've worked for a couple companies who liberally sprinkled the term AI in their marketing content. I always rolled my eyes when I came across it or it came up in say a job interview. What we were really doing, more practically and valuably, was this: IA through II (Intelligent Infrastructure), where the Intelligent Infrastructure was little more than a web view on a database that was previously obscured or somewhat arbitrarily constrained to one or two users.
by nicholast on 12/25/20, 2:49 AM
by boltzmannbrain on 12/24/20, 9:24 PM
by joe_the_user on 12/24/20, 10:05 PM
"Adaptive Intelligence" might be described as the ability to be given a few instructions, gather some information and take actions that accomplish the instructions. It's what "underlings", "minions" do.
But if we look at deep learning, it's almost the opposite of this. Deep learning begins with an existing stream of data, a huge stream, large enough that the system can just extrapolate what's in the data, including which data leads to which judgements. And that works for categorization, and for decision making that duplicates the decisions humans make, or even duplicates what works, what wins, in a complex interaction process. But none of that involves any amount of adaptive intelligence. It "generalizes" something, but our data scientists have no idea exactly what.
The article proposes an "engineering" paradigm as an alternative to the present "intelligence" paradigm. That seems more sensible, yes. But I'm doubtful this could be accepted. Neural network AI seems like a supplement to the ideology of unlimited data collection. If you put a limit on what "AI" should do, you'll put a limit on the benefits of "big data".
by nextos on 12/24/20, 7:48 PM
by bob1029 on 12/24/20, 11:01 PM
The AI revolution is very likely something that will require a fundamental reset of our understanding of the problem domain. We need to find a way to attack the problem such that we can incrementally scale all aspects of intelligence.
The only paradigm that I am aware of which seems to hint at parts of the incremental intelligence concept would be the relational calculus (aka SQL). If you think very abstractly about what a relational modeling paradigm accomplishes, it might be able to provide the foundation for a very powerful artificial intelligence. Assuming your domain data is perfectly normalized, SQL is capable of exploring the global space of functions as they pertain to the types. This declarative+functional+relational interface into arbitrary datasets would be an excellent "lower brain", providing a persistence & functional layer. Then you could throw a neural network on top of this to provide DSP capabilities in and out (ML is just fancy multidimensional DSP).
If you know SQL you can do a lot of damage. Even if you aren't a data scientist and don't have a farm of Nvidia GPUs, you can still write ridiculously powerful queries against domain data and receive powerful output almost instantaneously. The devil is in the modeling details: you need to normalize everything very strictly. A go/no-go decision derived from 20-30 dimensions of data can be written in about the same number of lines of SQL if the schema is good. How hard would this be on the best-case ML setup? Why can't we just make the ML write the SQL? How hard would it be for this arrangement to alter its own schema over time autonomously?
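To make this concrete, here is a minimal sketch of the kind of declarative "lower brain" described above: a strictly normalized schema plus a single query that folds several dimensions into a go/no-go decision. The applicant/observation tables, the dimension names, and the thresholds are all hypothetical stand-ins invented purely for illustration, not anything from the article.

    import sqlite3

    # Hypothetical, strictly normalized schema: entities in one table,
    # one row per (entity, dimension) observation in another.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE applicant (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE observation (
        applicant_id INTEGER REFERENCES applicant(id),
        dimension    TEXT,   -- e.g. 'income', 'debt', 'account_age_years'
        value        REAL
    );
    """)
    conn.executemany("INSERT INTO applicant VALUES (?, ?)",
                     [(1, "alice"), (2, "bob")])
    conn.executemany("INSERT INTO observation VALUES (?, ?, ?)", [
        (1, "income", 85000), (1, "debt", 12000), (1, "account_age_years", 7),
        (2, "income", 30000), (2, "debt", 45000), (2, "account_age_years", 1),
    ])

    # One declarative query folds the dimensions into a go/no-go decision.
    # The thresholds are arbitrary stand-ins for real domain rules.
    decision_sql = """
    SELECT a.name,
           CASE WHEN SUM(CASE WHEN o.dimension = 'income' THEN o.value END)
                     > 2 * SUM(CASE WHEN o.dimension = 'debt' THEN o.value END)
                 AND SUM(CASE WHEN o.dimension = 'account_age_years' THEN o.value END) >= 2
                THEN 'go' ELSE 'no-go' END AS decision
    FROM applicant a
    JOIN observation o ON o.applicant_id = a.id
    GROUP BY a.id, a.name
    ORDER BY a.id;
    """
    for row in conn.execute(decision_sql):
        print(row)   # ('alice', 'go') then ('bob', 'no-go')

The "make the ML write the SQL" question would then amount to having a model generate or mutate decision_sql (and eventually the schema itself) against this relational layer; that part remains speculation rather than anything this sketch demonstrates.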
by MichaelRazum on 12/24/20, 7:55 PM
by xmo on 12/24/20, 8:03 PM
by lifeisstillgood on 12/24/20, 10:05 PM
Hang on - uptick in diagnoses (i.e. post-amniocentesis) or uptick in indicators? One indicates unnecessary procedures, the other indicates a large population of previously undiagnosed Down's ....
One assumes the indicator - and I greatly hope there is improved detection, as I had at least one of these scares with my own kids.
by cosmodisk on 12/25/20, 1:38 AM
by dhairya on 12/24/20, 10:45 PM
I've been on both sides of the table (started in industry developing AI solutions and am now in academia pursuing a PhD in AI). When I was on the industry side, where the information and infrastructure were there to build such a system, you had to deal with the bureaucracy and institutional politics.
In academia, the incentives are aligned for individual production of knowledge (publishing). The academic work focuses on small defined end-to-end problems that are amenable to deep learning and machine learning. The types of AI models that emerge are specific models solving specific problems (NLP, vision, play go, etc).
It seems that to move towards developing large AI systems we need a new model of collaboration. There are existing models in the world of astrophysics and medical research that we can look to for inspiration. Granted, they have their own issues of politics, but it's interesting that projects of similar scope haven't emerged on the AI side yet.
by ridgeflex on 12/25/20, 2:07 AM
However, achieving near-human level accuracy on tasks such as classifying images of cars or road signs would be immensely useful to the proposed II-type system that handles large-scale self-driving transportation (individual cars would conceivably need the ability to understand their local environments and communicate this to the overall network).
I agree with his argument that there should be a shift in the way we think about problems in "AI", but I don't think we should necessarily think that progress in human-imitative AI problems and IA/II problems are mutually exclusive.
by spicyramen on 12/25/20, 1:52 AM
by esc_colon_q on 12/25/20, 6:07 AM
I broadly agree with what this article says, but depending on how you define "foreseeable future" I find this to be a dangerously naive viewpoint that just assumes nothing will change quickly.
I'm not stupid enough to say abstract reasoning about the real world is a simple problem or right around the corner, but there's no evidence so far to indicate it's much further off than, say, object recognition was when Minsky (or more likely Papert, apparently?) assigned it as an undergrad project. We pour exponentially more money into research each year, and have more and better hardware to run it on. We're going to hit the ceiling soon re: power consumption, sure, but some libraries are starting to take spiking hardware seriously which will open things up a few orders of magnitude. There are dozens of proposed neural architectures which could do the trick theoretically, they're just way too small right now (similar to how useless backprop was when it was invented).
Are we a Manhattan Project or three away from it? Sure. That's not nothing, but we're also pouring so much money into the boring and immediately commercializable parts of the field (all the narrow perception-level and GAN work that NeurIPS is gunked up with) that if any meaningful part of that shifted to the bigger problems, we'd see much faster progress. That will happen in a massive way once someone does for reasoning what transformers did for text prediction: just show that it's tractable.
by gandutraveler on 12/25/20, 1:33 PM
by dang on 12/25/20, 1:33 AM
by ipnon on 12/24/20, 9:09 PM
AI engineering?
Cybernetic engineering?
Data engineering?
by mark_l_watson on 12/25/20, 2:02 PM
The multidisciplinary conversations during The Great AI Debate #2 two nights ago were certainly entertaining, but also laid out good ideas about tech approaches and also the desires of AI researchers - what they hope AIs will be like. Good job by Gary Marcus.
I work for a medical AI company and we are focused on benefits to humans. While in the past I have been a fan of AI technologies from Google, FB, etc., now I believe that both consumers and governments must fight back hard against business processes that do not in general benefit society. Start by reading Zuboff’s The Age of Surveillance Capitalism, and the just-published book Power of Privacy.
by bitL on 12/25/20, 12:15 AM
by xiphias2 on 12/24/20, 11:45 PM
Classifying images was always considered a problem that can't be solved with statistical analysis. Deep learning layers are beyond human understanding, so in my view artificial intelligence happened, even though it's not yet as intelligent as humans.
by wildermuthn on 12/24/20, 11:01 PM
But the distinction he makes between ML and AI is crucial. What he’s really talking about is AGI - general intelligence. And he’s right - we don’t have a single example of AGI to date (few- or single-shot models notwithstanding, as they are only so for narrow tasks).
The majority mindset in AI research seems to be (and I could be wrong here, in that I only read many ML papers) that the difference between narrow AI and general AI is simply one of magnitude - that GPT-3, given enough data and compute, would pass the Turing test, ace the SAT, drive our cars, and tell really good jokes.
But this belief that the difference between narrow and general intelligence is one of degree rather than kind, may be rooted in what this article points out: in the historical baggage of AI almost always signifying “human imitative”.
But there is no reason that AGI must be super intelligent, or human-level intelligent, or even dog-level intelligent.
If narrow intelligence is not really intelligence at all (but more akin to instinct), then the dumbest mouse is more intelligent than AlphaGo and GPT-3, because although the mouse has exceedingly low General Intelligence, AlphaGo and GPT-3 have none at all.
There is absolutely nothing stopping researchers from focusing on mouse-level AGI. Moreover, it seems likely that going from zero intelligence to infinitesimal intelligence is a harder problem than going from infinitesimal intelligence to super-intelligence. The latter may merely be an exercise in scale, while the former requires a breakthrough of thought that asks why a mouse is intelligent but an ant is not.
The only thing stopping researchers is that the answer to this question is really uncomfortable, outside their area of expertise, and loaded with weighty historical baggage. It takes the courage of researchers like Yoshua Bengio to utter the word “consciousness”, although he does a great job by reframing it with Thinking, Fast and Slow’s System 1/System 2 vocabulary. Still, the hard problem of consciousness, and the baggage of millennia of soul/spirit as an answer to that hard problem, makes it exceedingly difficult for well-trained scientists to contemplate the rather obvious connection between general intelligence and conscious reasoning.
It’s ironic that those who seek to use their own conscious reasoning to create AGI are in denial that conscious reasoning is essential to AGI. But even if consciousness and qualia are a “hard” problem that we cannot solve, there’s no reason to shelve the creation of consciousness as also “hard”. In fact, we know (from our own experience) that the material universe is quite capable of accidentally creating consciousness (and thus, General Intelligence). If we can train a model to summarize Shakespeare, surely we can train a model to be as conscious, and as intelligent, as a mouse.
We’re only one smart team of focused AI researchers away from Low-AGI. My bet is on David Ha. I eagerly await his next paper.
by ksec on 12/24/20, 10:11 PM
And AI is like.... Fusion? We are always another 50 years away.
by soupson on 12/25/20, 7:35 AM
by reshie on 12/25/20, 9:33 AM
by fuckminster_b on 12/25/20, 5:59 AM
Then I learned about Bayesian statistics and watched a talk by a senior LLNL statistician who is actually marketing 'AI' products/services as a side gig.
When I realized what 'deep learning' actually is I was disappointed, unsure if I had mistakenly oversimplified the subject matter - until said senior statistician spelled out loud what I was thinking, in her talk: the 'understanding' a machine can currently attain of its input is quite like the understanding a pocket calculator can achieve of maths.
Guess humanity is off the hook for now. Phew.
I have doubts whether 'strong AI' is even technologically possible, since even if we could accurately simulate a human mind, that simulation would necessarily be constrained to run orders of magnitude slower than the reality it is designed to model.
'Training' it with data so as to allow it the opportunity to reason, and thereby synthesize a conclusion not already contained in the data fed to it, might take longer than a researcher has in a lifetime.
When was the last time a generation-spanning endeavour worked out as planned for (the West)?
I wish people would stop referring to what currently passes for 'Machine Learning' as 'AI'. It's literally the same level of 'intelligence' we already had in the 80s; as far as I recall, we called it 'Fuzzy Logic' then.
Secretly, an admission that Hollywood basically licensed the narrative of imminent runaway artificial consciousness back to science would make me give it one final chance to prove its aptitude at high-level human reasoning and get square with reality.
I'm not holding my breath.
by yalogin on 12/24/20, 10:00 PM
by drevil-v2 on 12/24/20, 10:04 PM
You have companies like Uber/Lyft/Tesla (and presumably the rest of the gig economy mob) waiting to put the AI into bonded/slave labor driving customers around 24/7/365.
If it truly is a Human level intelligence, then it will have values and goals and aspirations. It will have exploratory impulses. How can we square that with the purely commercial tasks and arbitrary goals that we want it to perform?
Either we humans want slaves that will do what we tell them to, or we treat them like children who may or may not end up as the adults their parents think/hope they will become. I doubt it is the latter, because why else would billions of dollars of investment be pumped into AI? They want slaves.