by wei_jok on 10/29/19, 2:18 AM with 36 comments
by etbebl on 10/30/19, 5:32 AM
The fact is, there have been neuroscientists working with neural network models of greater and lesser complexity than DNNs for decades. They've been put to great profit outside of neuroscience lately, but that doesn't mean they aren't an abstraction of some aspects of cortical computation.
We don't quite understand how brains could perform or approximate backprop yet, but it's the only training algorithm that has been remotely successful at training networks deep enough to do human-like visual recognition. So, many people take that as a big clue as to what we should be looking for in the brain to explain its great performance and ability to learn, rather than as a reason to disqualify DNNs entirely.
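To make the point concrete, here is a minimal sketch of backprop on a small two-layer network, using only NumPy. The random "images", layer sizes, and learning rate are placeholders rather than anything from the article; the backward pass through the weights is exactly the step whose cortical analogue is the open question.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "dataset": 256 fake images of 64 pixels each, 10 classes.
    X = rng.standard_normal((256, 64))
    y = rng.integers(0, 10, size=256)

    # Two weight matrices: input -> hidden, hidden -> class scores.
    W1 = rng.standard_normal((64, 128)) * 0.1
    W2 = rng.standard_normal((128, 10)) * 0.1
    lr = 0.1

    for step in range(200):
        # Forward pass: linear, ReLU, linear, softmax cross-entropy.
        h_pre = X @ W1
        h = np.maximum(h_pre, 0.0)
        scores = h @ W2
        scores -= scores.max(axis=1, keepdims=True)
        probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
        loss = -np.log(probs[np.arange(len(y)), y]).mean()

        # Backward pass: gradient of the loss w.r.t. the class scores.
        dscores = probs.copy()
        dscores[np.arange(len(y)), y] -= 1.0
        dscores /= len(y)
        dW2 = h.T @ dscores

        # The error signal is carried backward through the same weights W2 --
        # the step whose biological implementation is the open question above.
        dh = dscores @ W2.T
        dh[h_pre <= 0.0] = 0.0      # ReLU gradient: zero where the unit was off
        dW1 = X.T @ dh

        # Plain gradient descent update.
        W1 -= lr * dW1
        W2 -= lr * dW2

    print(f"final training loss: {loss:.3f}")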
There's plenty of modeling work going on with more traditional biophysical models, such as those that include spiking, interneuron compartments, attractor dynamics, etc. This is just an attempt to also come at the problem from the other direction, starting from something that we know works well (for vision) and trying to figure out how to ground it in biophysical reality.
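For contrast, a toy example of that more traditional biophysical style: a single leaky integrate-and-fire neuron driven by a constant current. The parameters are generic textbook-style defaults, not values taken from any of the work referenced here.

    import numpy as np

    dt = 0.1          # ms, integration time step
    T = 200.0         # ms, total simulated time
    tau = 10.0        # ms, membrane time constant
    v_rest = -65.0    # mV, resting potential
    v_thresh = -50.0  # mV, spike threshold
    v_reset = -70.0   # mV, reset potential after a spike
    R = 10.0          # MOhm, membrane resistance
    I = 2.0           # nA, constant input current

    steps = int(T / dt)
    v = v_rest
    spikes = []

    for i in range(steps):
        # Leaky integration: dv/dt = (-(v - v_rest) + R*I) / tau
        v += dt * (-(v - v_rest) + R * I) / tau
        if v >= v_thresh:
            spikes.append(i * dt)   # record spike time in ms
            v = v_reset             # reset membrane potential

    print(f"{len(spikes)} spikes in {T:.0f} ms "
          f"(~{1000 * len(spikes) / T:.0f} Hz)")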
by bra-ket on 10/29/19, 4:13 PM
by endswapper on 10/29/19, 7:56 PM
by dr_dshiv on 10/29/19, 10:54 AM
by coward12345678 on 10/29/19, 3:42 PM
Looking at this article, I wonder if we'll ever be able to figure any of this out. I feel pretty hopeless about the entire situation.
by endswapper on 10/31/19, 9:02 PM
https://singularityhub.com/2019/10/03/deep-learning-networks...
by tudorw on 10/29/19, 11:43 AM