by interconnector on 7/24/17, 6:26 PM with 58 comments
by interfixus on 7/25/17, 5:26 AM
A completely unfounded supposition, as so often appears to be the case when some human monopoly is claimed. We didn't magically sprout whole new categories of ability during a measly few million years of evolution.
Anecdotally, I see crows getting out of the way of my car. Not confused and haphazard, as many birds are, but in calculated, deliberate, unhurried steps to somewhere just outside my trajectory - steps which clearly take into account such elements as my speed and the state of other traffic on the road. Furthermore, when walnuts and the like are in season, they'll calmly drop their haul on the asphalt, expecting my tyres to crush it for them. This - in my rural bit of Northern Europe - appears to be a recent import or invention; I never saw it done until two years ago.
And there's The Case of the Dog and the Peanut Butter Jars. My dog, my peanut butter jars, and they were empty, but not cleaned. Alone at home, she found them one day, and had clearly experimented on the first one, which had bitemarks aplenty on the lid. The rest she managed to unscrew without damage. Having licked the jars clean, she apparently got to thinking of the grumpy guy who would eventually be coming home. I can think of no other explanation for why I found the entire stash of licked-clean jars hidden - although not successfully - under a rug.
Tell me again about imagination and its distinctly human nature.
by ansgri on 7/24/17, 9:23 PM
Of course imagining possible outcomes before executing is useful! And it has many uses outside deep learning. No reason to invent new words, really. At least not without referring to the established ones.
Maybe there is a serious novel idea, but I've missed it.
Basically, if you need to control a complex process (i.e. bring some future outcome in accordance with your plan), you can build a forward model of the system under control (which is simpler than an inverse model), and employ some optimization technique (combinatorial, e.g. graph search; numeric derivative-free, e.g. pattern search; or differential) to find the optimal current action.
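A minimal sketch of that recipe, with a toy 1-D system and a coarse grid standing in for a proper derivative-free optimizer (all names and numbers here are illustrative, not from the papers):

```python
def forward_model(state, action):
    # Toy 1-D plant: each step the state drifts by the action.
    # Stands in for whatever forward model you have of the real system.
    return state + action

def plan_action(state, target, horizon=5):
    """Derivative-free optimization over the forward model: try
    constant-action sequences from a coarse grid, roll each one out
    through the model, and return the first action of the best."""
    best_a, best_cost = 0.0, float("inf")
    for i in range(-10, 11):
        a = i / 10.0
        s = state
        for _ in range(horizon):
            s = forward_model(s, a)
        c = abs(s - target)  # how far the imagined outcome is from the plan
        if c < best_cost:
            best_cost, best_a = c, a
    return best_a

print(plan_action(0.0, 3.0))  # best constant action toward target 3.0
```

The point is just the structure: simulate forward, score outcomes, act; any real controller would swap in a learned model and a better optimizer.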
by gradstudent on 7/25/17, 1:27 AM
Looking at the first paper (https://arxiv.org/pdf/1707.06170.pdf), it seems surprisingly shallow and light on details. So they have a learning system for continuous planning. So what? The AI Planning community has been doing this for ages with MDPs and POMDPs, solving problems where the planning domain has some discrete variables and some continuous variables. Here's a summary tutorial from Scott Sanner at ICAPS 2012: http://icaps12.icaps-conference.org/planningschool/slides-Sa...
Speaking of ICAPS: this conference is the primary venue for disseminating scientific results to researchers in the area. Yet the authors here cite exactly one ICAPS paper. WTF?
My bullshit detector is blaring.
by aqsalose on 7/24/17, 10:05 PM
As far as I understand the papers, their model builds (in an unsupervised fashion, which sounds very cool) an internal simulation of the agent's environment and runs it to evaluate different actions. So I can see why they'd call it imagination / planning: that's the obvious inspiration for the model, and it sort of fits. But in common parlance, "imagination" [1] also means something that relatively conscious agents do, often with originality, and it does not seem that their models are that advanced yet.
I'm tempted to compare the choice of terminology to DeepDream, which is not exactly a replication of the mental states associated with human sleep, either.
by jtraffic on 7/25/17, 1:15 AM
In the past, when I've posted exact duplicates, HN has redirected me and automatically upvoted the original instead. I wonder why this doesn't always happen. (I'm not bothered, just curious.)
Double off topic: it's very interesting to see how much difference timing makes. My original got a single upvote, and this one hit the front page.
by seanwilson on 7/25/17, 12:31 AM
by GuiA on 7/24/17, 9:29 PM
Shouldn't imagination and planning be observed spontaneously as emergent properties of a sufficiently complex neural network? Conversely, if we have to explicitly account for these properties and come up with specific designs to emulate them, how do we know that we are on the right track to beyond human levels of cognition, and not just building "one-trick networks"?
by deepnet on 7/25/17, 10:06 AM
I was under the impression that AlphaGo makes no plan, but responds to the current board state with expert move probabilities that prune the MCTS random playouts.
There is no plan (AFAIK) or strategy in the AlphaGo papers, so I find the statement that AlphaGo is an imaginative planner quite curious.
Perhaps someone can reconcile these statements or correct my understanding of AlphaGo?
Very interesting papers; it will be nice to see the imagination encoder methods applied to highly stochastic environments, or indeed to a robot in the real world.
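For context, the "planning" in AlphaGo is implicit in the tree search: the policy network's move probabilities act as priors in the node-selection rule, which is what prunes the random playouts. A simplified sketch of that PUCT-style rule (the constant and numbers are illustrative, not the papers' exact formulation):

```python
import math

def puct_score(total_value, visits, prior, parent_visits, c_puct=1.0):
    """PUCT-style selection: an exploitation term (mean playout value)
    plus an exploration term scaled by the policy network's prior, so
    low-prior moves are rarely expanded -- soft pruning, not an
    explicit plan."""
    q = total_value / visits if visits else 0.0
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + visits)
    return q + u

# Two unvisited moves: the one the policy network favours scores higher,
# so the search expands it first.
likely = puct_score(0.0, 0, prior=0.6, parent_visits=100)
unlikely = puct_score(0.0, 0, prior=0.05, parent_visits=100)
```

Whether biasing a search this way counts as "imagining" outcomes is exactly the terminology question being debated here.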
by mehh on 7/26/17, 8:11 AM
I'm sure the people who wrote this are smart enough to know it's not imagination (arguably a small subset of the attributes that contribute to what we know as imagination, but not imagination itself).
Which leads me to assume the hyperbole is there purely for the benefit of PR and the stock price.
by ww520 on 7/24/17, 11:55 PM
by miguelrochefort on 7/25/17, 1:16 AM
Are they introducing something new, or is it just gimmick and buzzwords?
by thinkloop on 7/25/17, 2:17 AM
Right...