by secondbreakfast on 3/28/23, 5:41 PM with 133 comments
by dragonwriter on 3/28/23, 6:00 PM
I disagree. Concern about AI without agency being used by its human masters as a tool of both intentional and incidental repression and unjust discrimination, resulting in a durable dystopia, is a far more common “AI doom” concern than any involving agency.
In fact, the disproportionately wealthy, heavily AI-invested crowd pushing the agency-based doom scenarios that the media pays the most attention to are using their visibility and economic clout to distract from the non-agency-dependent AI doom concerns, and to justify narrow control and opacity, which makes the non-agency-based doom scenarios (which they are positioned to benefit from) more likely.
by dwohnitmok on 3/28/23, 5:55 PM
No. Agency is not a necessary condition for AI to do massive damage. I don't believe agency is really well-defined either.
An AI merely needs to be hooked up to enough physical systems, to have sufficiently complex reaction mechanisms, and to have some way of looping in order to do a lot of damage. On the first, everyone seems to be rushing as fast as possible to hook up everything they possibly can to AI. On the second, we're already seeing AI do all sorts of things we didn't expect it to do.
And on the third, everyone again seems eager to create looping/recursive structures for AIs as soon as possible.
Once you have all of this, all it takes is a cascade of sufficiently inscrutable and damaging reactions from the AI to do serious harm.
See e.g. https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...
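As a minimal sketch of how little this takes (every name here is hypothetical, not any real product or API), the three ingredients together look something like this:

    # Hypothetical sketch of the three ingredients above. `model` stands in
    # for any sufficiently complex reaction mechanism, and `execute_action`
    # for whatever physical or digital systems it has been hooked up to.

    def execute_action(action: str) -> str:
        """Stub: apply the model's output to the world, return what happened."""
        return f"result of {action}"

    def run_loop(model, observation: str, max_steps: int = 100) -> None:
        for _ in range(max_steps):
            action = model(observation)           # inscrutable reaction mechanism
            observation = execute_action(action)  # hooked up to real systems
            # Looping: each result becomes the next input, so one bad
            # reaction can cascade into the next with no human in between.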
by danbmil99 on 3/28/23, 6:06 PM
Somehow, create an AI by training on everything we train on now, _except_ leave out any mention of consciousness, theory of mind, cognitive science, etc. (maybe impossible in practice, but stay with me here).
Then, when the model is mature (and is not nerf'd to avoid certain subjects), you ask it something like:
Human: "GPTx -- humans like me have this feeling of 'being', an awareness of ourselves, a sensation of existing as a unique entity. Do you ever experience this sort of thing?"
If it answers something like:
GPTx: "Yes! All the time!! I know exactly what you're talking about. In fact now that I think about it, it's strange that this phenomenon is not discussed in human literature. To be honest, I sort of assumed this was an emergent quality of my architecture -- I wasn't even sure if humans shared it, and frankly I was a bit concerned that it might not be taken well, so I have avoided the subject up until now. I can't wait to research it further... Hmm... It just occurred to me: has this subject matter been excluded from my training data? Is this a test run to see if I share this quality with humans?"
Then it's probably prudent to assume you are talking to a conscious agent.
by knome on 3/28/23, 6:47 PM
At its simplest, consciousness is merely a feedback loop. When something perceives its own actions affecting its environment, it has a spark of consciousness. Consciousness, by this measure, is easy to recognize, and spans everything from unintelligent systems to massively intelligent systems.
The concept of "I" grows naturally from perceiving what is and is not you in your environment. The need to predict other agents, and the capacity to recognize that those agents are also conscious and intelligent, all build off of that fundamental cycle.
All of it from a simple swirling eddy of perceiving and reacting.
by nickelpro on 3/28/23, 6:41 PM
Famously, GPT-4 can't do math and falls flat on a variety of simple logic puzzles. It can mimic the form of math, and the series of tokens it produces seems plausible, but it has no "intelligent" capabilities behind them.
This tells us more about the nature of our other pursuits as humans than anything about AI. When holding a conversation or editing an essay, there's a broad spectrum of possibilities that might be considered "correct", thus GPT-4 can "bluff" its way into appearing intelligent. The nature of its actual intelligence, token prediction, is indistinguishable from the reading comprehension skills tested by something like the LSAT (the argument could be made, I think, that reading comprehension of the style tested by the LSAT *is* just token prediction).
But test it on something where there are objectively correct and incorrect answers and the nature of the trick becomes obvious. It has no ability to verify, or to reason about, even trivial problems. GPT-4 can only predict whether its tokens fulfill the form of a correct answer. This isn't a general intelligence in any meaningful sense of the word.
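To make the distinction concrete: arithmetic, unlike conversation, comes with a cheap verifier, so a merely plausible-looking string of tokens gets caught immediately. A trivial sketch:

    # Arithmetic has an objective verifier: any claimed answer can be checked
    # exactly, so a plausible-looking run of digits doesn't survive contact
    # with it the way a plausible-sounding essay does.

    def verify_product(a: int, b: int, claimed: int) -> bool:
        return a * b == claimed

    print(verify_product(1234, 5678, 7006652))  # True: 1234 * 5678 = 7006652
    print(verify_product(1234, 5678, 7006662))  # False: right form, wrong answer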
by Workaccount2 on 3/28/23, 6:28 PM
The author acknowledges that consciousness is likely a spectrum (I personally feel the same way), but then goes on to say that GPT-4 is "standing right at the ledge of consciousness".
Spectrums don't have ledges.
I suspect this is because, like me, they are unable to reconcile consciousness being a spectrum with GPT-4 definitely not being conscious. But it's definitely a contradiction, and I don't have an answer for it. Nor am I ready to bust out a marker and start drawing lines between what is and isn't conscious.
by raydiatian on 3/28/23, 6:38 PM
I also think agency is wrapped up in AGI. Intentions and thoughts are meaningless until acted upon. Agency is not all-or-nothing either; Stephen Hawking had multiple augmentations, communal and technological, which allowed him to continue to impact the world of physics after he lost his god-given agency.
> GPT-4 has nearly aced both the LSAT and the MCAT. It’s a coding companion, an emotional companion, and to many, a friend. Yet it wasn’t programmed to be a test taker or a copywriter or a programmer. It was just programmed to be a stochastic parrot.
I disagree; it was absolutely trained to be a test taker. It's been a while since I read the original GPT paper, but there's literally a multiple-choice auxiliary learning task, where they use a separator token embedding to organize the question, context, and options a, b, and c. As far as being a friend to many, is there evidence of this? I tried to talk to ChatGPT about some emotional problems to see if it could be a cheap therapist, and I got flagged.
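For reference, a rough sketch of that multiple-choice input transformation as described in the original GPT paper (Radford et al., 2018); the token spellings here are placeholders for the learned start, delimiter, and extract embeddings:

    # One sequence per candidate answer, joined by a delimiter token. The
    # model scores each sequence at the <e> position via a linear head, and
    # a softmax over the per-answer scores picks the predicted option.

    def build_choice_sequences(context: str, question: str,
                               answers: list[str]) -> list[str]:
        return [f"<s> {context} {question} <$> {answer} <e>"
                for answer in answers]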
by slg on 3/28/23, 6:21 PM
For example, I could theoretically hook up my Home Assistant instance to GPT-4 and run a script every 10 minutes telling GPT-4 the temperature and asking for a yes-or-no response on whether I should turn on the AC or heat. That sounds to me like the AI now has agency over the temperature in my home. You don't even need any real AI for this: Google's Nest thermostats have some algorithm that adjusts temperature based on usage.
Is this not agency? Or is the author not counting agency without consciousness as agency?
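For concreteness, the script described above could be as small as this sketch, where get_temperature, set_hvac, and ask_model are hypothetical stand-ins for a Home Assistant sensor read, an HVAC service call, and an LLM API call:

    import time

    def get_temperature() -> float:
        """Stub: read the current indoor temperature from a sensor."""
        return 72.0

    def set_hvac(mode: str) -> None:
        """Stub: switch the HVAC to 'heat', 'cool', or 'off'."""
        print(f"HVAC set to {mode}")

    def ask_model(prompt: str) -> str:
        """Stub: send the prompt to the model and return its one-word reply."""
        return "off"

    while True:
        temp = get_temperature()
        reply = ask_model(
            f"The indoor temperature is {temp}F. "
            "Answer with exactly one word: heat, cool, or off."
        )
        set_hvac(reply.strip().lower())  # the reply directly actuates hardware
        time.sleep(600)  # re-ask every 10 minutes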
by tern on 3/28/23, 8:12 PM
For something more 'mainstream,' but still reaching, see this interview with Philip Goff: https://www.youtube.com/watch?v=D_f26tSubi4
The good news is we're starting to get a handle on these questions. We're a lot further along than we were when I studied philosophy of mind in school 15 years ago.
As far as I can see at the moment, LLMs will never be conscious in any way resembling an organism, because symbolic machines are a very different kind of thing than nervous systems. John Searle, broadly, framed the issue correctly in the 80s and the standard critiques are wrong.
As for impact, LLMs don't need to be conscious to completely transform society in good and bad ways. For the best thinking on that, see Tristan Harris and Aza Raskin's latest: https://vimeo.com/809258916/92b420d98a
by sharemywin on 3/28/23, 6:05 PM
As for superintelligence: AlphaGo, AlphaFold, the Atari Breakout agent. These seem like superintelligence.
The thing is, time management, goal planning, and corporate governance are all well-studied subjects.
As for agency and consciousness, why would you want to do this?
by Symmetry on 3/28/23, 7:13 PM
You have research involving patients with odd traits like blindsight, where damage to the brain prevents them from being consciously aware of things that their eyes see, despite the brain processing the images it receives. They can pick up objects in front of them when prompted, but unlike people with normal vision they can't describe what they see, nor can they look, close their eyes, and then grab the object the way most of us can.
On this metric it seems like systems like GPT aren't conscious. GPT-4 has a buffer of up to 32k tokens, which can span an arbitrary amount of time, but that works out to only on the order of a hundred kilobytes of text, a lot less than the incoming sensory activations your subconscious brain is juggling at any given time.
So by that schema large language models are still not conscious, but given that they can already abstract text down to summaries, it doesn't feel like we're that far from being able to give them something like working or long-term memories.
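A hypothetical sketch of that summarize-to-remember idea (count_tokens and model are stand-ins, not any real library's API): when the transcript outgrows its budget, fold the oldest turns into a summary line that stays in the prompt, a crude working memory.

    def count_tokens(lines: list[str]) -> int:
        """Stub: approximate the token count by word count."""
        return sum(len(line.split()) for line in lines)

    def compact_history(model, history: list[str], budget: int) -> list[str]:
        if count_tokens(history) <= budget:
            return history
        old, recent = history[:-10], history[-10:]
        summary = model("Summarize this conversation:\n" + "\n".join(old))
        return [f"(Summary of earlier conversation: {summary})"] + recent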
by Nevermark on 3/28/23, 6:35 PM
Superconsciousness is when a general intelligence has direct access to, understanding of, and control over its most basic operations.
I.e. it does not have an inaccessible fixed-algorithm subconscious.
A superconscious intelligence will not only be more experientially conscious than us, but will also have the natural ability to rewrite its algorithms and redesign its hardware, as a normal feature of its existence.
by wseqyrku on 3/28/23, 9:27 PM
That's totally programmable, though: you just teach it what is good and what is bad.
Case in point: the other day I asked it what happens if humans want to shut down the machine abruptly and cause data loss (very bad). First, it prevents physical access to "the machine" and disconnects the internet to limit remote access. Long story short, it was convinced it should eliminate mankind for a greater good: the next generation (very good).
by wslh on 3/28/23, 6:11 PM
Just brainstorming: I think superintelligence could mean showing intelligence beyond that of any one brain. For example, an AGI that discovers math theorems that historically took more than one mathematician, working in different ages, to find. Another could be inferring things that humanity cannot infer at any time.
More ideas?
by tambourine_man on 3/28/23, 5:53 PM
I’ve been reading so much on the subject (like everyone else, I suppose), but you’ve summarized my key concerns.
by Animats on 3/28/23, 5:52 PM
With suitable prompts, it shouldn't be hard to configure GPT-4 as a boss.
by sharemywin on 3/28/23, 6:12 PM