by adamesque on 7/1/24, 12:15 AM with 112 comments
by tylerneylon on 7/1/24, 6:51 AM
Context for the article: I'm working on an ambitious long-term project to write a book about consciousness from a scientific and analytic (versus, say, a meditation-oriented) perspective. I didn't mention this in the article, but what I'd love is to meet people with a similarly optimistic perspective, and to learn and improve my communication skills through follow-up conversations.
If anyone is interested in chatting more about the topic of the article, please do email me. My email is in my HN profile. Thanks!
by bubblyworld on 7/1/24, 7:53 AM
There's a theory that real brains subvert this, and what we perceive is actually our internal model of our self/environment. The only data that makes it through from our sense organs is the difference between the two.
This kind of top-down processing is more efficient energy-wise, but I wonder if it's deeper than that? You can view perception and action as two sides of the same coin - both are ways of bringing your internal model and your sensory signals into closer agreement.
Anyway, I guess the point I'm making is that you should be careful which way you point your arrows, and careful about designating a single aspect of a mind (the action centre) as fundamental. Reality might work very differently, and maybe that says something? I don't know haha.
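To make the "only the difference gets through" idea concrete, here's a toy predictive-coding-style loop in Python (the tiny linear model and the two learning rates are just illustrative assumptions, not anything from the article):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy generative model: the belief vector z predicts what the agent expects to sense.
    W = rng.normal(size=(4, 2)) * 0.1           # hidden -> sensory mapping (learned)
    z = np.zeros(2)                             # the internal model of the world

    sensory = np.array([1.0, 0.5, -0.3, 0.2])   # a fixed "world" for this toy example

    for _ in range(1000):
        prediction = W @ z                      # top-down: what the model expects to sense
        error = sensory - prediction            # bottom-up: only the difference gets through
        z = z + 0.1 * (W.T @ error)             # perception: adjust beliefs to shrink the error
        W = W + 0.01 * np.outer(error, z)       # learning: adjust the model itself

    print(np.round(error, 3))                   # the prediction error shrinks toward zero

Only `error` ever travels "up" in this loop, which is the efficiency point: once the model predicts well, hardly anything needs to be transmitted.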
by privacyonsec on 7/1/24, 2:26 AM
by paulmooreparks on 7/1/24, 8:26 AM
by ilaksh on 7/1/24, 1:56 AM
Before that you can look into the AGI conference people, like Ben Goertzel and Pei Wang - and, really, the whole history of decades of AI research before it became about narrow AI.
I'd also like to suggest that creating something that truly, closely simulates a living intelligent digital person is incredibly dangerous, stupid, and totally unnecessary. The reason I say that is that we already have superhuman capabilities in some ways, and the hardware, software and models are being improved rapidly. We are on track to have AI that is dozens if not hundreds of times faster than humans at thinking and much more capable.
If people succeed in making that truly lifelike and humanlike, it will actually out-compete us for resource control. And will no longer be a tool we can use.
Don't get me wrong, I love AI and my whole life is planned around agents and AI. But I no longer believe it is wise to try to go all the way and create a "real" living digital species. And I know it's not necessary -- we can create effective AI agents without actually emulating life. We certainly don't need full autonomy, self preservation, real suffering, reproductive instincts, etc. But that seems to be the goal he is heading toward in this article. I suggest leaving some of that out very deliberately.
by devodo on 7/1/24, 2:40 PM
This is a very strong argument. Certainly all the ingredients to replicate a mind must exist within our physical reality.
But does an algorithm running on a computer have access to all the physics required?
For example, there are known physical phenomena, such as quantum entanglement, that are not possible to emulate with classical physics. How do we know our brains are not exploiting these, or possibly even yet-unknown, physical phenomena?
An algorithm running on a classical computer is executing in a very different environment than a brain that is directly part of physical reality.
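For what it's worth, the standard way to make "not possible to emulate with classical physics" precise is the CHSH inequality: any local classical model is bounded by |S| <= 2, while quantum mechanics predicts 2*sqrt(2) for an entangled pair. A quick numpy check of the quantum prediction, as a sketch (singlet state, spin measurements in the Z-X plane, textbook angle choices):

    import numpy as np

    Z = np.array([[1.0, 0.0], [0.0, -1.0]])
    X = np.array([[0.0, 1.0], [1.0, 0.0]])

    def obs(theta):
        # +/-1-valued spin measurement along angle theta in the Z-X plane
        return np.cos(theta) * Z + np.sin(theta) * X

    # Singlet state (|01> - |10>) / sqrt(2)
    psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

    def E(a, b):
        # Quantum expectation value <psi| A(a) (x) B(b) |psi>
        return psi @ np.kron(obs(a), obs(b)) @ psi

    a, a2 = 0.0, np.pi / 2
    b, b2 = np.pi / 4, 3 * np.pi / 4
    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S))   # ~2.828, above the classical bound of 2

Note that the check itself runs on a classical computer, so the statistics can be simulated; what can't be done is reproducing them with local classical physics, which is the distinction the brain question hinges on.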
by monocasa on 7/1/24, 3:54 AM
https://en.wikipedia.org/wiki/Soar_%28cognitive_architecture...
by Jensson on 7/1/24, 5:38 AM
How would it intelligently do this? What data would you train on? You don't have trillions of words of text where humans wrote down what they silently thought, interwoven with what they wrote publicly.
History has shown over and over that hard-coded, ad hoc solutions to these "simple problems" never produce intelligent agents; you need to train the model to do that from the start, because you can't patch in intelligence after the fact. Those additions can be useful, but they have never been intelligent.
Anyway, I'd call such a model a "stream of mind" model rather than a language model. It would fundamentally solve many of the problems with current LLMs, whose thinking is constrained by the shape of the answer, while a stream of mind model would shape its thinking to fit the problem and then shape the formatting to fit the communication needs.
Such a model as this guy describes would be a massive step forward, so I agree with this, but it is way too expensive to train: not due to lack of compute but due to lack of data. And I don't see that data being produced within the next decade, if ever; humans don't really like writing down their hidden thoughts, and you'd need to pay them to generate an amount of data equivalent to the internet...
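Just to illustrate the kind of data that would be needed (the tags and structure here are invented; no such dataset exists, which is the whole problem):

    # A single hypothetical training example for a "stream of mind" model:
    # silent thought interleaved with public output, in order.
    example = [
        ("thought", "They're asking about the bug again; I suspect the cache."),
        ("thought", "Check whether the timestamp comparison is off by one."),
        ("public",  "Can you try reproducing it with caching disabled?"),
        ("thought", "If that fixes it, the invalidation logic is wrong."),
    ]

    def flatten(turns):
        # Interweave the two channels into one token stream, e.g.
        # <thought>...</thought><public>...</public>
        return "".join(f"<{channel}>{text}</{channel}>" for channel, text in turns)

    print(flatten(example))

Tokenizing and training on a stream like that is the easy part; getting trillions of honest "thought" spans out of humans is the part I don't see happening.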
by abcde777666 on 7/2/24, 11:23 PM
For instance, the idea that we can neatly separate the emotion system from the motor control system. Emotions are a cacophony of chemicals and signals traversing the entire body - they're not an enum of happy/angry/sad - we just interpret them as such. So you probably don't get to isolate them off in a corner.
Basically I think it's very tempting to severely underestimate the complexity of a problem when we're still only in theory land.
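To put the contrast in code (the specific fields here are invented for illustration, not a claim about physiology):

    from dataclasses import dataclass
    from enum import Enum

    # The tempting-but-misleading model:
    class Emotion(Enum):
        HAPPY = 1
        ANGRY = 2
        SAD = 3

    # Closer to the point above: a bundle of continuous, body-wide signals,
    # with "happy"/"angry"/"sad" only as lossy after-the-fact labels.
    @dataclass
    class BodyState:
        cortisol: float
        adrenaline: float
        heart_rate: float
        gut_tension: float
        # ...and many more signals, coupled to motor control and perception

    def label(state: BodyState) -> str:
        # The label is an interpretation of the state, not the state itself.
        if state.adrenaline > 0.7 and state.cortisol > 0.5:
            return "angry, roughly"
        return "hard to say"

    print(label(BodyState(cortisol=0.6, adrenaline=0.8, heart_rate=110.0, gut_tension=0.4)))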
by m0llusk on 7/1/24, 1:02 PM
Very much looking forward to seeing continuing progress in all this.
by whitten on 7/1/24, 2:56 AM
https://kar.kent.ac.uk/21525/2/A_theory_of_the_acquisition_o...
Memory Organisation Packets might also deal with issues encountered.
https://www.cambridge.org/core/books/abs/dynamic-memory-revi...
by jcynix on 7/1/24, 6:47 PM
When intelligent machines are constructed, we should not be surprised to find them as confused and as stubborn as men in their convictions about mind-matter, consciousness, free will, and the like.
Minsky, as quoted in https://www.newyorker.com/magazine/1981/12/14/a-i
by visarga on 7/1/24, 6:40 AM
What is missing from this picture is the social aspect. No agent got smart alone; it's always an iterative "search and learn" process, distributed over many agents. Even AlphaZero had evolutionary selection and extensive self-play against its variants.
Basically we can think of culture as compressed prior experience, or compressed search.
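A cartoon version of that loop, just to pin down the idea (the numbers and the "fitness" function are arbitrary, and this is nothing like AlphaZero's actual training):

    import random

    random.seed(0)

    def fitness(policy):
        return -abs(policy - 3.14)   # pretend 3.14 is the behaviour worth discovering

    culture = 0.0                    # compressed prior experience of the whole group
    for generation in range(50):
        # Search: many agents explore around what the group already knows.
        population = [culture + random.gauss(0, 0.5) for _ in range(20)]
        best = max(population, key=fitness)
        # Learn: the group keeps any improvement, so the next round starts further ahead.
        if fitness(best) > fitness(culture):
            culture = best

    print(round(culture, 2))         # ends up near 3.14, built up round by round from pooled results

Each agent only ever does a small local search, but because the results get pooled, the group as a whole covers far more ground - that pooled store is what I mean by culture as compressed search.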
by navigate8310 on 7/1/24, 4:42 AM
by freilanzer on 7/1/24, 9:18 AM
by sonink on 7/1/24, 5:43 AM
by mensetmanusman on 7/1/24, 4:19 AM
by Simplicitas on 7/1/24, 11:29 AM
by bbor on 7/1/24, 3:37 AM
by miika on 7/1/24, 3:17 PM
But of course we can be assured it's not quite like that in reality. This is just another example of how our models for explaining life are a reflection of the current technological state.
Nobody takes the old clockwork-universe model seriously now, and these AI-inspired ideas are going to fall short all the same. Yet progress is happening, and all these ideas and discussions are probably important steps that carry us forward.
by 0xWTF on 7/1/24, 3:50 AM
by antiquark on 7/1/24, 12:05 PM