from Hacker News

How deep is the brain? The shallow brain hypothesis

by vapemaster on 10/30/23, 12:28 AM with 178 comments

  • by audunw on 10/30/23, 11:06 AM

    I seem to remember research stating that an individual neuron has very complex behaviour that requires several ML “neurons” / nodes to simulate. So if you do a comparison, perhaps the brain is deeper than you’d think by just looking at the graph of neurons and their synapses.

    Could we construct a neural net from nodes with more complex behaviour? Probably, but in computing we’ve generally found that it’s best to build up a system from simple building blocks. So what if it takes many ML nodes to simulate a neuron? That’s probably an efficient way to do it. Especially in the early phase where we’re not quite sure which architecture is best. It’s easier to experiment with various neural net architectures when the building blocks are simple.
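
    A minimal sketch of what that could look like, assuming PyTorch and a made-up stand-in for the neuron's dendritic nonlinearity (nothing here is taken from the actual research): a small stack of simple nodes is trained to approximate one "complex" neuron.

      # Toy stand-in for a biological neuron's nonlinear dendritic
      # integration: two "branches" are thresholded separately before
      # being combined at the "soma". Purely illustrative.
      import torch
      import torch.nn as nn

      def toy_complex_neuron(x):
          branch1 = torch.relu(x[:, :8].sum(dim=1) - 1.0)
          branch2 = torch.tanh(x[:, 8:].sum(dim=1))
          return (branch1 * branch2).unsqueeze(1)

      # Several simple ML nodes, arranged in layers, stand in for the
      # single complex neuron.
      surrogate = nn.Sequential(
          nn.Linear(16, 32), nn.ReLU(),
          nn.Linear(32, 32), nn.ReLU(),
          nn.Linear(32, 1),
      )

      opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
      for step in range(2000):
          x = torch.randn(256, 16)
          loss = nn.functional.mse_loss(surrogate(x), toy_complex_neuron(x))
          opt.zero_grad()
          loss.backward()
          opt.step()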

  • by chriskanan on 10/30/23, 12:00 PM

    The brain has a lot of skip connections and is massively recurrent. In a sense, the brain can be thought of as having infinite depth due to recurrent thalamo-cortical loops. They do mention thalamo-cortical loops in the paper, so I think a more concrete definition of what is meant by "depth" would be helpful.
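
    A quick illustrative sketch of why recurrence reads as depth (the setup is mine, not the paper's): one layer unrolled over T timesteps acts like a T-layer network with tied weights.

      import torch
      import torch.nn as nn

      W = nn.Linear(64, 64)  # one physical layer, reused every timestep

      def run_loop(x, timesteps):
          h = torch.zeros_like(x)
          for _ in range(timesteps):    # unrolling in time...
              h = torch.tanh(W(x + h))  # ...adds effective depth
          return h

      x = torch.randn(1, 64)
      shallow = run_loop(x, timesteps=2)   # shallow in time
      deep = run_loop(x, timesteps=50)     # "deep" with no extra layers
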
  • by sheeshkebab on 10/30/23, 1:26 AM

    It’s indeed odd that current DNNs require massive amounts of energy to retrain and lack any kind of practical continuous adaptation and learning.

  • by hliyan on 10/30/23, 5:05 AM

    "brain seems shallow and neural networks are deep, ergo neural networks are doing it wrong"

    Please don't claim things the author didn't. What I read was "ergo (artificial) neural networks may be missing a trick"

  • by rsrsrs86 on 10/30/23, 3:09 PM

    Beyond the mere topological metaphor of neural networks, there is almost nothing in common between brains and digital computation. This is a widespread category error.

  • by lawrenceyan on 10/30/23, 3:18 PM

    We have skip connections and recurrent neural networks at home.

  • by jakobson14 on 10/30/23, 1:36 AM

    If I had a nickel for every time some neurologist tried to compare brains to neural networks. It's a surefire way to tell someone is either desperate for grant money or has been smoking crack. (previously: comparing brains and "electronic computers")

    Their entire article hinges on the complaint "brain seems shallow and neural networks are deep, ergo neural networks are doing it wrong."

    Neurologists seem to have a really hard time comprehending that researchers working on neural networks aren't as clueless about computers as neurology is about the brain. They also vastly overestimate how much engineers working on neural networks even care about how biological brains work.

    Virtually every attempt at making neural networks mimic biological neurons has been a miserable failure. Neural networks, despite their name, don't work anything like biological neurons and their development is guided by a combination of

    A) practical experimentation and refinement, and

    B) real, actual understanding about how they work.

    The concept of resnets didn't come from biology. It came from observations about the flow of gradients between nodes in the computational graph. The concept of CNNs didn't come from biology, it came from old knowledge of convolutional filters. The current form and function of neural networks is grounded in repeated practical experimentation, not an attempt to mimic the slabs of meat that we place on pedestals. Neural networks are deep because it turns out hierarchical feature detectors work really well, and it doesn't really matter if the brain doesn't do things that way.
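
    To illustrate that gradient-flow point with a minimal, hypothetical sketch in PyTorch (not the actual ResNet architecture):

      import torch
      import torch.nn as nn

      class ResidualBlock(nn.Module):
          def __init__(self, dim):
              super().__init__()
              self.body = nn.Sequential(
                  nn.Linear(dim, dim), nn.ReLU(),
                  nn.Linear(dim, dim),
              )

          def forward(self, x):
              # y = x + F(x): the identity path contributes an identity
              # Jacobian, so gradients reach earlier layers even when
              # F's own gradients shrink.
              return x + self.body(x)

      x = torch.randn(8, 64)
      out = ResidualBlock(64)(x)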

    And then you have the nitwits searching the brain for transformer networks. Might as well look for mercury delay line memory while you're at it. Quantum entanglement too.

  • by phlogisticfugu on 10/30/23, 5:12 AM

    deep learning models have already been permitting "shallow signals" for a while. see "skip connections"

    https://theaisummer.com/skip-connections/

  • by spacetimeuser5 on 10/30/23, 8:34 PM

    In the end, who cares how exactly an ANN matches a human brain? Is such an ANN smarter than ChatGPT?

    It is more useful to use AI to develop more ecologically valid measurement methods for biology.

  • by MagicMoonlight on 10/30/23, 5:55 AM

    If it were shallow, it wouldn’t take 25 years for a human brain to fully train. The fact that some parts of it need that much data means they must be way up the hierarchy.

  • by Salgat on 10/30/23, 2:52 AM

    The brain communicates with itself, so deep layers are equivalent to sections of the brain talking to each other. The only relevance white matter depth has is with regard to how it's trained, and since the brain doesn't use gradient descent, it's irrelevant to neural networks in that regard.

  • by lawlessone on 10/30/23, 3:51 PM

    So does this mean DNNs are in some ways deeper than human brains?

  • by bjornsing on 10/30/23, 9:24 AM

    > This shallow architecture exploits the computational capacity of cortical microcircuits and thalamo-cortical loops that are not included in typical hierarchical deep learning and predictive coding networks.

    As I understand it the thalamus is basically a giant switchboard though. I see no reason to believe that it never connects the output of one cortical area to the input of another, thus doubling the effective depth of the neural network. (I haven’t read this paper though, as it was behind a paywall.)
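
    A toy sketch of that switchboard intuition in PyTorch (all names are mine; again, I haven't read the paper):

      import torch
      import torch.nn as nn

      area_a = nn.Sequential(nn.Linear(32, 32), nn.ReLU())  # shallow cortical area
      area_b = nn.Sequential(nn.Linear(32, 32), nn.ReLU())  # shallow cortical area

      def thalamic_relay(x, route_through_b):
          h = area_a(x)
          if route_through_b:  # switchboard feeds A's output into B
              h = area_b(h)    # effective depth doubles
          return h

      x = torch.randn(1, 32)
      y = thalamic_relay(x, route_through_b=True)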

  • by Simon_ORourke on 10/30/23, 7:10 AM

    Judging by some of the levels of driving around these parts, the brain may be very shallow indeed.

  • by low_tech_punk on 10/30/23, 5:44 AM

    Replay of Jeff Hawkins group’s A Thousand Brains theory?