by imaurer on 11/20/21, 4:24 PM with 116 comments
by a-dub on 11/20/21, 7:20 PM
this is the thing i've always sort of loved about philosophy. they just kinda make shit up, provide their own definitions rooted in a bit of bamboozlement by flowery language, and then once they've stated all their definitions with their conclusions baked in, they hop, skip and jump down the path which now obviously leads to the conclusion they started with.
it's kind of like a form of mathematics where they define their own first principles in each argument with the express purpose of trying to build the most beautiful path to their conclusions. it really is a beautiful form of art, like architecture for ideas.
by dsr_ on 11/20/21, 5:27 PM
In particular, it is equally incredible that intelligent life should evolve from a single-cell organism. But we have that as a counter-argument.
It is entirely reasonable to suspect that none of the current approaches will yield success, but claiming that no machine intelligences can possibly arise is... incredible.
by _aavaa_ on 11/20/21, 7:03 PM
by R0b0t1 on 11/20/21, 5:10 PM
Speaking specifically of neural networks as they exist now, the answer is no, because there is no obvious way for them to learn.
by doganulus on 11/20/21, 9:23 PM
by jmull on 11/20/21, 5:47 PM
In the part I read it claims we can’t develop AI because we can’t accurately model full reality. There’s no argument for what the connection is; it’s just stated.
Kind of obviously, if we assume engaging with reality is necessary to develop intelligence, an artificial intelligence could do so in a similar way to how we non-artificial ones do, right?
by snek_case on 11/20/21, 5:54 PM
AI hasn't mastered common-sense reasoning yet. That's likely going to come last, but the range of things AI can understand is only going to expand, IMO.
by erdewit on 11/20/21, 8:33 PM
by visarga on 11/20/21, 8:12 PM
by Traubenfuchs on 11/20/21, 7:54 PM
There is absolutely no reason why this shouldn't be possible. Actually, we could already do it if we understood the brain well enough and could model it well enough, even if the emulation might not run in real time.
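To make "not real time" concrete: even today we can simulate single neurons at whatever speed the hardware allows. Here is a minimal sketch in Python of a leaky integrate-and-fire neuron (the parameters are illustrative placeholders, not biological fits):

    # Toy leaky integrate-and-fire neuron; runs as fast or as slow
    # as the hardware allows, which is the point about real time.
    dt = 0.1                                 # ms per simulation step
    tau = 10.0                               # membrane time constant (ms)
    v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0
    i_input = 20.0                           # constant drive (arbitrary units)

    v = v_rest
    spikes = []
    for step in range(10_000):               # 1 second of simulated time
        v += (dt / tau) * (v_rest - v + i_input)  # leaky integration
        if v >= v_thresh:                    # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset
    print(f"{len(spikes)} spikes in 1 s of simulated time")

Whether that one second of simulated time takes a millisecond or an hour of wall-clock time is irrelevant to the emulation; scaling this up to roughly 86 billion interacting neurons, with the right parameters, is where the "understood the brain enough" caveat does all the work.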
by go_elmo on 11/20/21, 9:49 PM
by mcguire on 11/20/21, 8:16 PM
Ok, I don't like the mathematical definitions of intelligence either (although I might be convincible, and they do have some advantages over other definitions I've seen), but this refutation seems to be a prime example of proof-by-assertion.
"Brooks defines an AI agent, again, as an artefact that is able ‘to move around in dynamic environments, sensing the surroundings to a degree sufficient to achieve the necessary maintenance of life and reproduction'."
And this definition implies that many things we know to be intelligent (e.g. people) are not. So there's that.
"There are three additional properties of logic systems of importance for our argument here: 1. Their phase space is fixed. 2. Their behaviour is ergodic with regard to their main functional properties. 3. Their behavior is to a large extent context-independent."
Aaaaand here we go...
"As we learn from we standard mathematical theory of complex systems [23], all such systems, including the systems of complex systems resulting from their interaction, 1. have a variable phase space, 2. are non-ergodic, and 3. are context-dependent."
Ok, to the extent that the first statement is true about "logic systems", it is also true about any physically realizable, material system. On the other hand, the "complex system", to that same extent, is not physically realizable. (Consider "a variable phase space means that the variables which define the elements of a complex system can change over time" or "a non-ergodic system produces erratic distributions of its elements. No matter how long the system is observed, no laws can be deduced from observing its elements." and question how much information is required for this in the authors' sense.)
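For anyone without the background: "ergodic" roughly means that watching one copy of the system for a long time reveals the same statistics as sampling many copies at once. A minimal sketch of the distinction (my toy example, not the paper's):

    import random

    steps = 100_000

    # Ergodic: a fresh +/-1 coin flip every step; the time average of one
    # long run converges to the ensemble average (0).
    ergodic = [random.choice([-1, 1]) for _ in range(steps)]
    print("ergodic time average:", sum(ergodic) / steps)          # ~0.0

    # Non-ergodic: one +/-1 flip at t=0, frozen forever; any single run
    # averages to +1 or -1 and never reveals the ensemble law.
    frozen = random.choice([-1, 1])
    print("non-ergodic time average:", sum([frozen] * steps) / steps)

Note that the frozen process is trivially physically realizable, and that plenty of "logic systems" (say, a program that seeds its state from a coin flip at startup) are non-ergodic in exactly this sense, which is why the dichotomy doesn't do the work the authors want it to.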
And there we have the intrusion of the immortal soul into the argument that artificial intelligence is impossible.
by natch on 11/20/21, 9:23 PM
“Department of Philosophy”
hmm
by SubiculumCode on 11/20/21, 6:29 PM
by mensetmanusman on 11/20/21, 11:18 PM
by tehchromic on 11/20/21, 8:53 PM
I'll put my argument out there and let the flames come as they will.
Strong AI is about as likely to emerge from our current state-of-the-art AI machinery as it is to emerge suddenly out of moon rocks. That's to say the fear of machines becoming self-conscious and posing an existential threat to us, especially replacing us in the evolutionary sense, is completely unfounded.
This isn't to say that building machines capable of doing exactly that isn't possible - we and all living things are proof that it's possible - it's to say that achieving this level of engineering is on par with intergalactic mass transit or Dyson spheres - way out of our league for the foreseeable future. And, even if we had the technology, it would be so entirely foolish to undertake that no sentient species would do it.
That said, there's a substantial argument to make that we will augment ourselves with our own machinery so thoroughly that we will become unrecognizable and, in effect, accomplish the same task by merging with the machine. This is likely, but it would be nothing like the singularity scenario in which all of humanity is suddenly arrested and deposed by autonomous AI.
An interesting scenario in this vein is if a few powerful individuals can wield autonomous systems, modify themselves, and simply wipe out all the competition; then, in effect, the rest of us wouldn't know the difference. This outcome is actually, I think, on the more likely side, albeit a good ways away in the future.
Less likely, but still totally legitimate as a concern, is the idea that AI could be very easily weaponized. This is a real problem and is, I think, behind the more substantive warnings by good thinkers on the topic. Like bioweapons, we might be wiped out by a machine that's been intentionally programmed and mechanically empowered to cause real harm. This kind of danger could also be emergent, in that a machine might be capable of deciding that it ought to take certain actions as well as having the capacity to take them, and then, voila, mass murder.
However, it seems unlikely that such a mistake would be made, or that a bad actor would be capable of committing such an intentional crime. I think this is on par with nuclear MAD: even total madmen dictators hit pause on the push-the-button instinct. And an AI MAD or similar would surely take as many resources to produce as a nuclear arsenal. In other words, the resources required to build such machinery are on the order of a nation-state's, and perhaps harder to achieve than a nuclear arsenal, so the effort is probably more likely to be stopped or to fail in-process than to succeed.
So there are dangers from AI, but I would say they are lesser than the accumulated danger of industrial society rendering the planet uninhabitable, which should of course occupy our primary concern these days.
The idea that the biological evolutionary 'machine', whose motive for existence has accumulated over billions of years of entropic adaptation, can be out-engineered, or accidentally replicated, by modern computational AI is silly - the two aren't in the same league, and it's hubris to suppose otherwise. There's more intelligence in the toe of a ladybug than in all the computing power ever made.
In sum, the danger from emergent AI is overstated; however, the concern is most welcome to the extent that it informs wisdom and care in considering our techno-industrial impact on the biosphere.
by Borrible on 11/21/21, 8:27 AM
You may call it Borrible's first tautology. Or perhaps bias. Yes, Borrible's bias sounds clever. At least to me. And that is what counts, doesn't it?
I don't really know what that fucking world really is, but nonetheless it exists.
With temporarily stable local dynamics, some parts of the world began to copy themselves. Albeit with errors and quirks. The recurring processes of the surroundings built the mold for the debris that collects in the swirls.
Some of those copies developed representations of their surroundings. First in the form of simple notes stuck on themselves, being themselves. Which was an advantage when they bumped into one another. They could navigate that thing I called world. Which made their copying process stable.
With a lot of time and trial and error, some parts of those parts of parts of that thing I called world even developed some really fancy little dollhouse worlds in the part of the world that would later call itself the brain.
And the most advanced ham actors in that dollhouse put more tiny little dolls in that house, the most precious one being ego. It represented the part of the world that started the whole shebang, the body. And it equipped that tiny little dollhouse with a lot of wondrous and a lot of silly things, some animated, some not. And it took great delight in it; it even fancied itself a god and pushed the tiny little ego around to do its bidding.
But for the most part it just tried to please itself and learn about the world and itself, based on all that input it somehow got from the world. And the drama that ham actor and his Muppet Friends acted out.
Exactly like all those good little boys and girls have done on their playgrounds since time immemorial.
When I was young, something happened. My dolls started to become 'Little Computer People'.
And people of my generation and the one before developed fancy models about this Matryoshka Doll World, about Worlds in Worlds in Worlds in Worlds. Infinite regress, sometimes recursive, sometimes not. A kaleidoscopic mirror, sometimes dark, sometimes shiny.
Simulacron 1, 2, 3 and so on, until there is no energetic process left in that thing I call world that can be harvested.
And every time those models became more complex, they gave more agency to that part of the world that is now mumbling about building a new Ghost in the Machine.
Apart from the possibly insurmountable practical problems, I see no reason in principle why it shouldn't become more complex in the form of artificial intelligence.
As an aside, it's great to be that part of the world, but beware. It may all end the moment that ham actor in the dollhouse cuts the strings to the world he is living off.
A risk deeply embedded in this structure: that of an agent acting in a model of the world.
The agent is subject to the risk of striving to make itself independent of the world.