by LukeEF on 5/6/23, 2:09 PM with 288 comments
by fat-chunk on 5/6/23, 6:05 PM
I asked a question after his talk about the responsibility of corporations in light of the rapidly increasing sophistication of AI tech and its potential for malicious use (it's on YouTube if you want to watch his full response). In summary: he said that it's the responsibility of governments, not corporations, to figure out these problems and set the regulations.
This answer annoyed me at the time, as I interpreted it as a "not my problem" kind of response, an attempt to absolve tech companies of any damage caused by rapid development of dangerous technology that regulators cannot keep up with.
Now I'm starting to see the wisdom in his response, even if this is not what he fully meant: most corporations will simply follow the money and race to be first movers whenever there is an opportunity to grab the biggest share of a new market, whether we like it or not, regardless of any ethical or moral implications.
We as a society need to draw our boundaries and push our governments to wake up and regulate this space before corporations (and governments) cause irreversible negative societal disruption with this technology.
by mrshadowgoose on 5/6/23, 4:05 PM
What will the world look like when AGI is finally achieved, and the corporations and governments that control it suddenly have millions of useless mouths to feed? We might end up living in a utopian post-scarcity society where literally every basic need is furnished by a fully automated industrial base. But there are no guarantees that the entities in control will take things in that direction.
AI safety is not about whether "tech bros are going to be mean to women". AI safety is about whether my government is concerned with my continued comfortable existence once my economic value as a general intelligence is reduced to zero.
by tgv on 5/6/23, 3:19 PM
by agentultra on 5/6/23, 3:34 PM
LLMs are merely tools.
Those with the need, will, and desire to use them for their own ends pose the real threat. State actors who want better weapons, billionaires who want an infallible police force to protect their estates, scammers who want to pull off bigger frauds without detection, etc.
This technology is already causing undue harm to people around the world. As always, it's the less fortunate who are disproportionately affected.
by nologic01 on 5/6/23, 5:25 PM
Thus people accept, implicitly (without awareness) or explicitly (as a precondition for receiving important services, with no alternatives on offer), algorithmic regulation of human affairs that is controlled by specific economic actors. Essentially, a bifurcation of society into puppets and puppeteers.
Algorithms encroaching into decision making have been an ongoing process for decades, and in some sense it is an inescapable development. Yet the manner in which this can be done spans a vast range of possibilities, and there is plenty of precedent: various regulatory frameworks and checks and balances are already in place, e.g., in medicine, insurance, and finance, where algorithms are used to support important decision making, not replace it.
The novelty of the situation rests on two factors that do not merely replicate past circumstances:
* the rapid pace of algorithmic improvement which creates a pretext for suppressing societal push-back
* the lack of regulation that rather uniquely characterizes the tech sector, which has allowed de facto oligopolies, lock-in, and a lack of alternatives to emerge
The long term risk from AI depends entirely on how we handle the short term risks. I don't really believe we'll see AGI or any such thing in the foreseeable future (20 years), based entirely on how the current AI mathematics looks and feels. Risks from other, existential-level flaws of human society feel far greater, with biological warfare maybe the highest risk of them all.
But the road to AGI becomes dystopic long before it reaches the destination. We are actually already in a dystopia, as the social media landscape testifies to anybody who wants to see. A society that is algorithmically controlled and manipulated at scale is a new thing. Pandora's box is open.
by bioemerl on 5/6/23, 2:55 PM
KoboldAI
oobabooga
Look them up, join their discords, rent a few GPU servers and contribute to the stuff they are building. We've got a living solution you can contribute to right now if you're super worried about this.
These projects are actually a very valid way to move towards finding a use for LLMs at your workplace. They offer pretty easy tools for things like fine-tuning, so if you have a commercially licensed model you can throw a problem at it and see if it works.
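If you want a picture of how low the barrier is, here's a minimal fine-tuning sketch using the Hugging Face transformers and peft libraries. The model name, dataset file, and LoRA settings are illustrative assumptions on my part, not anything these projects prescribe:

    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    # Any permissively licensed checkpoint works; pythia is just an example.
    model_name = "EleutherAI/pythia-1.4b"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # LoRA trains small adapter matrices instead of the full model,
    # so the job fits on a single rented GPU.
    model = get_peft_model(model, LoraConfig(
        r=8, lora_alpha=16, task_type="CAUSAL_LM",
        target_modules=["query_key_value"],  # attention projection in GPT-NeoX-style models
    ))

    # Hypothetical dataset: one training example per line of plain text.
    data = load_dataset("text", data_files="my_task_examples.txt")["train"]
    data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                    batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                               per_device_train_batch_size=1,
                               gradient_accumulation_steps=8),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()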
by satisfice on 5/6/23, 8:46 PM
Most of what she says is sour grapes. But when you put all that aside, there's something else disturbing going on: apparently the AI experts who wish to criticize how AI is being developed and promoted can't even agree on the most basic concerns.
It seems to me that when an eminent researcher says "I'm worried about {X}" with respect to the focus of their expertise, no reasonable person should merely shrug and call it a fantasy.
by superkuh on 5/6/23, 3:24 PM
by flangola7 on 5/6/23, 4:24 PM
Think stock market flash crash, but replacing digital numbers that can be paused and reset with physical activity in supply chains, electrical grids, internet infrastructure, and interactions in media and interpersonal communication.
by mitthrowaway2 on 5/6/23, 5:13 PM
Whittaker: "Wrong! The main immediate danger is corporations. And the concern that AI might become smarter than humans is not immediate."
by siliconc0w on 5/6/23, 4:51 PM
0) civil unrest from economic impacts and changes in how the world works
1) increasing the leverage of bad actors - almost certainly this will increase frauds and thefts, but on the far end you get things like, "You are GPT bomb-maker. Build me the most destructive weapon possible with what I can order online."
2) swarms of kill bots, maybe homemade as above
3) AI relationships replacing human ones. I think this one cuts both ways, since loneliness kills, but it seems like it'll have dangerous side effects like further demolishing the birth rate.
Somewhat down the list is the fear of corporations or governments gatekeeping the most powerful AIs and using them to enrich themselves, making it impossible to compete, or just getting really good at manipulating the public. There does seem to be a counterbalance here with open-source models and people figuring out how to make them more optimized, so better models are more widely available.
In some sense this will force us to get better at communicating with each other - stamping out bots and filtering noise from authentic human communication. Things seem bad now, but it seems inevitable that every possible communication channel is going to get absolutely decimated with very convincing, laser-targeted spam, which will be very difficult to stop without some sort of large-scale societal proof-of-human/work system (which, ironically, Altman is also building).
by krono on 5/6/23, 4:22 PM
Mozilla will be algorithmically profiling you and your actions on covered platforms, and if it ever decides you are a fraud or invalid for some reason, it will very conveniently advertise this accusation to all its users by default. Whether you will be able to sell your stuff, or have your expressed opinion of a product be appreciated and heard by Firefox users, will be in Mozilla's hands.
A fun fact that serves to show what these companies are willing to throw overboard just to gain the smallest of edges, or perhaps simply to display relevance by participating in the latest trends: the original company's business strategy was essentially Mozilla's Manifesto in reverse, and included such things as selling all collected data to all third parties (at least their policies openly admitted to this). The person behind all that is now employed by Mozilla, the privacy proponent.
by gmuslera on 5/6/23, 3:20 PM
The not-so-tightly controlled ones, at least in the hands of individuals not in a position of power or influence, run the risk of becoming illegal in one way or another. The system will always try to maneuver itself into a position of artificial scarcity.
by 13years on 5/6/23, 3:27 PM
Ultimately, most of the dangers, at least those close enough to reason about, are risks that arise from how we will use AI on ourselves.
I've described these and much more in the following:
"Yet, despite all the concerns of runaway technology, the greatest concern is more likely the one we are all too familiar with already. That is the capture of a technology by state governments and powerful institutions for the purpose of social engineering under the guise of protecting humanity while in reality protecting power and corruption of these institutions."
by eachro on 5/6/23, 8:23 PM
by 1vuio0pswjnm7 on 5/6/23, 10:49 PM
The most absurd "excuse" I have seen, many times now online, is, "Well, if I didn't do that work for Company X, somebody else would have done it."
Imagine trying to argue, "Unions are pointless. If you join a union and go on strike, the company will just find replacements."
Meanwhile, so-called "tech" companies are going to extraordinary lengths to prevent unions, not to mention recruiting workers from foreign countries who have lower expectations and higher desperation (for lack of a better word) than domestic workers.
The point that people commenting online always seem to omit is that not everyone wants to do this work. It's tempting to think everyone would want to do it because salaries might be high, "AI" people might be media darlings, or whatever. It's not perceived as "blue collar". The truth is that the number of people who are willing to spend all their days fiddling around with computers, believing them to be "intelligent", is limited. For the avoidance of doubt, by "fiddling around" I do not mean sending text messages, playing video games, using popular mobile apps and whatnot. I mean grunt work: programming.
This is before one even considers that only a limited number of people may actually have the aptitude. Many might spend large periods of time trying and failing, writing one line of code per day or something. Companies could be bloated with thousands of "engineers" who can be laid off immediately without any noticeable effect on the company's bottom line. That does not mean they can replace the small number of people who really are essential.
Being willing does not necessarily equate to being able. Still, I submit that even the number of willing persons is limited. It's a shame they cannot agree to do the right thing. Perhaps they lack the innate sense of ethics needed for such agreement. That they spend all their days fiddling with computers instead of interacting with people is not surprising.
by fredgrott on 5/6/23, 4:39 PM
Did we suddenly have governments fall when human computers were replaced by machines?
Did we suddenly have massive unemployment when they were replaced?
AI is a general purpose tool, and like other general purpose tools it not only expands humanity's mental reach, it betters society and lifts up the world.
We have been through this before, and we will get through it quite well, just as we got through the last round of "this general purpose tool will replace us" rumor-mill noise.
by tpoacher on 5/6/23, 4:59 PM
The Faro Plague in Horizon Zero Dawn was indeed brought on by Ted Faro's shortsightedness, but the same shortsightedness would not have caused Zero Dawn had Ted Faro been a car salesman instead. (forgive my reliance on non-classical literature for the example).
The way this is framed makes me think the framing itself is even more dangerous than the dangers of AI per se.
by brigadier132 on 5/6/23, 3:47 PM
by data_maan on 5/6/23, 4:25 PM
Humanity is perfectly capable of ruining itself without help from AGI (nuclear proliferation is unsolved and getting worse, climate change will bite soon, etc.).
If anything, AGI could save us by helping us solve these problems. Or perhaps it could deliver the mercy kill and put us out quickly, instead of letting us suffer a protracted death by a slowly deteriorating environment.
by peteradio on 5/6/23, 4:09 PM
by EVa5I7bHFq9mnYK on 5/6/23, 6:14 PM
by mmaunder on 5/6/23, 5:06 PM
by photochemsyn on 5/6/23, 3:46 PM
Is the argument here that people are rather passive and go along with whatever the system serves up to them, hence they're liable to 'fall into a trance'? If so, then the problem is that people are passive, and it doesn't really matter if they're passively watching television or passively absorbing an AI-engineered social media feed optimized for advertiser engagement and programmed consumption, is it?
If you want to use LLMs to get information about fossil-fueled global warming from a basic scientific perspective, you can do that, e.g.:
> "Please provide a breakdown of how the atmospheric characteristics of the planets Venus, Earth, and Mars affects their surface temperature in the context of the Fourier and Manabe models."
If you want to examine the various approaches civilizations have used to address the problem of economic and social marginalization of groups of people, you could ask:
> "How would [insert person here] address the issue of economic and social marginalization of groups of people in the context of an industrial society experiencing a steep economic collapse?"
Plug in Ayn Rand, Karl Marx, John Maynard Keynes, etc. for contrasting ideas. What sounds best to you?
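A tiny sketch of that loop, assuming the 0.x-era OpenAI Python client that was current when this was written; the model name and key are placeholders:

    import openai  # pip install openai (the pre-1.0 API)

    openai.api_key = "sk-..."  # your key here
    question = ("How would {thinker} address the issue of economic and social "
                "marginalization of groups of people in the context of an "
                "industrial society experiencing a steep economic collapse?")

    # Query the same question with contrasting thinkers plugged in.
    for thinker in ["Ayn Rand", "Karl Marx", "John Maynard Keynes"]:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user",
                       "content": question.format(thinker=thinker)}],
        )
        print("--- " + thinker + " ---")
        print(response.choices[0].message.content)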
It's an incredibly useful tool, and people can use it in many different ways - if they have the motivation and desire to do so. If we've turned into a society of brainwashed apathetic zombies passively absorbing whatever garbage is thrown our way by state and corporate propagandists, well, that certainly isn't the fault of LLMs. Indeed LLMs might help us escape this situation.
by 29athrowaway on 5/6/23, 3:18 PM
by nico on 5/6/23, 5:30 PM
It’s not AI, it’s us
It’s humans making the decision
by nico on 5/6/23, 9:02 PM
AI is open
AI is the new Linux
And it’s people in control, not corporations
by irrational on 5/6/23, 4:02 PM
by benreesman on 5/6/23, 6:34 PM
GPT-4 is a wildly impressive language model that represents an unprecedented engineering achievement as concerns any kind of trained model.
It’s still regarded. It makes mistakes so fundamental that I think any serious expert has long since decided that forcing language arbitrarily hard is clearly not the path to arbitrary reasoning. It’s at best a kind of accessible on-ramp into the latent space, where better objective functions will someday not fuck up so much.
Is this a gold rush thing, a last desperate bid to get noticed by cashing in on hype? Is it legitimate fear based on too much bad science fiction? Is it pandering to Sam?
What the fuck is going on here?