by vapemaster on 10/30/23, 12:28 AM with 178 comments
by audunw on 10/30/23, 11:06 AM
Could we construct a neural net from nodes with more complex behaviour? Probably, but in computing we’ve generally found that it’s best to build up a system from simple building blocks. So what if it takes many ML nodes to simulate a neuron? That’s probably an efficient way to do it, especially in the early phase where we’re not quite sure which architecture is best. It’s easier to experiment with various neural net architectures when the building blocks are simple.
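To make the "many simple nodes per neuron" idea concrete, here is a minimal sketch (my own illustration, assuming PyTorch; it is not code from the comment, and the names are hypothetical): a small stack of plain ReLU units that could be trained to stand in for a single neuron with more complex behaviour.

```python
import torch
import torch.nn as nn

# Hypothetical surrogate: several simple ReLU layers standing in for
# one "complex" neuron. Simple blocks like these are cheap to rearrange,
# which is the experimentation advantage the comment describes.
neuron_surrogate = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

x = torch.randn(4, 8)             # a batch of 4 inputs to the "neuron"
print(neuron_surrogate(x).shape)  # torch.Size([4, 1])
```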
by chriskanan on 10/30/23, 12:00 PM
by sheeshkebab on 10/30/23, 1:26 AM
by beaugunderson on 10/30/23, 4:11 AM
by hliyan on 10/30/23, 5:05 AM
Please don't claim things the author didn't say. What I read was "ergo (artificial) neural networks may be missing a trick".
by rsrsrs86 on 10/30/23, 3:09 PM
by lawrenceyan on 10/30/23, 3:18 PM
by jakobson14 on 10/30/23, 1:36 AM
The entire article hinges on the complaint "the brain seems shallow and neural networks are deep, ergo neural networks are doing it wrong."
Neurologists seem to have a really hard time comprehending that researchers working on neural networks aren't as clueless about computers as neurologists are about the brain. They also vastly overestimate how much engineers working on neural networks even care about how biological brains work.
Virtually every attempt at making neural networks mimic biological neurons has been a miserable failure. Neural networks, despite their name, don't work anything like biological neurons, and their development is guided by a combination of:
A) practical experimentation and refinement, and
B) real, actual understanding about how they work.
The concept of ResNets didn't come from biology; it came from observations about the flow of gradients between nodes in the computational graph. The concept of CNNs didn't come from biology either; it came from old knowledge of convolutional filters. The current form and function of neural networks is grounded in repeated practical experimentation, not in an attempt to mimic the slabs of meat we place on pedestals. Neural networks are deep because it turns out that hierarchical feature detectors work really well, and it doesn't really matter if the brain doesn't do things that way.
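For readers unfamiliar with the gradient-flow argument, here is a minimal sketch (assuming PyTorch; my illustration, not the commenter's): the identity skip connection in a residual block gives gradients a direct path around the stacked layers, which is the observation the comment attributes to ResNets.

```python
import torch
import torch.nn as nn

# A minimal residual block. The "out + x" identity shortcut lets
# gradients flow directly past conv1/conv2 during backpropagation,
# which is what makes very deep stacks trainable.
class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)  # skip connection: F(x) + x
```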
And then you have the nitwits searching the brain for transformer networks. Might as well look for mercury delay line memory while you're at it. Quantum entanglement too.
by phlogisticfugu on 10/30/23, 5:12 AM
by spacetimeuser5 on 10/30/23, 8:34 PM
It is more useful to use AI to develop more ecologically valid measurement methods for biology.
by MagicMoonlight on 10/30/23, 5:55 AM
by Salgat on 10/30/23, 2:52 AM
by lawlessone on 10/30/23, 3:51 PM
by bjornsing on 10/30/23, 9:24 AM
As I understand it, the thalamus is basically a giant switchboard though. I see no reason to believe that it never connects the output of one cortical area to the input of another, thus doubling the effective depth of the neural network. (I haven’t read this paper though, as it was behind a paywall.)
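A toy sketch of that routing idea (my own illustration, assuming PyTorch; the "cortical area" modules and the routing function are made-up names, not anything from the paper): composing two shallow modules through a switchboard yields a pathway twice as deep as either module alone.

```python
import torch
import torch.nn as nn

# Two shallow "cortical area" modules, each only a few layers deep.
cortical_area_a = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))
cortical_area_b = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))

def thalamus_route(x: torch.Tensor) -> torch.Tensor:
    # If the switchboard feeds A's output into B's input, the effective
    # computation is their composition, i.e. twice the depth.
    return cortical_area_b(cortical_area_a(x))

print(thalamus_route(torch.randn(2, 16)).shape)  # torch.Size([2, 16])
```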
by Simon_ORourke on 10/30/23, 7:10 AM
by low_tech_punk on 10/30/23, 5:44 AM