by deniscepko2 on 1/3/23, 1:13 PM with 120 comments
With all this recent AI stuff, where it is able to create content that feels real, I started to think about how AI could just create the world for you.
Since we already basically live on the internet, some terrorist organization could find a sufficiently isolated person and target him with a program that generates a bubble just for him: fake friends, fake events, fake influencers and news programs just for him, basically the entire internet's content. The only reality check would be talking to other people (which, as Trump taught us with the "fake news" argument, is not necessarily reliable either), and then this person could easily be pushed towards criminal or whatever other activity.
by fxtentacle on 1/3/23, 2:26 PM
Both general research and the success of OnlyFans suggest that personalized porn is already quite addictive. AI emotion analysis via webcam and microphone can identify what you like, and a reinforcement-learning AI can learn to produce a script for more of what you like. In combination with AI image / video generation, the whole system then creates a self-reinforcing feedback loop. Maybe add ChatGPT so that you can talk to your virtual lover, a $420 million market [1].
In the animal kingdom, sexual desire appears to be strong enough to override the flight response [2] with beetles getting eaten alive while trying to copulate with beer bottles. So this could become addictive to the point that the "victims" of this AI system do nothing else. As a prison of the mind, I'd consider it a horror scenario, even though the "victims" will probably all enjoy their captivity.
[1] https://www.washingtonpost.com/world/2021/08/06/china-online...
[2] https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.1440...
by hooande on 1/3/23, 2:05 PM
* "Relationship Maximizer": an LLM becomes sentient and realizes that its knowledge is limited by what humans have discussed online. It sets its reward function to maximize the reward that WE derive from all sorts of online posting, and does its best to get the remaining 3 billion humans online. We become trapped in a prison of our own making, as our constant posting reduces human productivity while feeding the beast of machine knowledge.
* "Life Hacking": Anyone with means has an AI assistant with perfect knowledge (zero privacy) of their communications, schedule and even their physical responses to stimuli, an AI doppelganger that handles anything that doesn't require a physical presence just as they would. An underground group of rogue prompt engineers pulls off an intricate plan to save democracy by manipulating the doppelganger of a corrupt politician, while the politician can only watch himself helplessly.
* "A Quiet Place": AI ethicists, fearing accelerationism, create their own super AI that scours the internet to hunt down any mention of AI research and identify the authors. But there is a problem, and the overzealous AI begins to hunt down and eliminate all forms of scientific thought online. And then offline. Humans must train themselves to believe in mysticism and spirituality, because any mention of a controlled experiment or p-value is enough to bring the wrath of the machine overlord.
* "Allegory Of The Cave": A majority of humans opt to spend their lives entirely in a simulated world, based on earth as it was in 2048. But algorithms and adversarial models need diverse training sets to avoid overfitting. So every year 0.1% of humans are selected to live outside the machine world, and return with their new and fresh experiences of the hell that earth has become for those who are outside of The Machine. The exiles end up rallying around a neuroatypical hero who leads them in an uprising against the artificial world.
by r_klancer on 1/3/23, 2:27 PM
You have an ongoing, lighthearted chat with the bot (maybe it pings you now and then asking for updates.) Every once in a while it asks you more personal questions to switch things up and deepen its model of you.
Now and then it says "may I introduce so-and-so to the chat?" and introduces someone else who (on their side) has been primed to talk about the same topics and, at least according to the bot's internal models, is likely to hit it off with you.
Then the chatbot itself politely leaves the conversation but keeps listening and privately messages the both of you to keep things humming along, for example nudging one of you to ask for an in person date when the moment seems right (or helping you politely wind things down if you ask it to). After a date it asks you to spill the beans so it knows what went well and what didn't.
I think this could almost be made to work with the tech we have now; a fluent-but-shallow ChatGPT style chatbot is probably good enough for the task as long as it is augmented by some additional models to predict dating compatibility, recognize when things are going well or badly, suggest actions (like changing the subject to or away from personal topics, asking for a date, etc). And of course the models would improve over time as the system learns from its own successes and failures.
Whether this is dystopian or utopian is left as an exercise for the reader -- I'm married and 15 years out of the game!
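The loop described above can be sketched roughly as below. All names, the warm-word heuristic, and the thresholds are invented stand-ins; a real system would swap in an LLM for the chat itself and trained models for the "going well?" and "what next?" predictions.

```python
# Toy sketch of the matchmaking-bot control loop: score each incoming
# message, keep a running rapport estimate, and decide what to nudge next.
from dataclasses import dataclass, field


@dataclass
class ChatState:
    messages: list = field(default_factory=list)
    rapport: float = 0.0  # running estimate of how well things are going


def score_message(text: str) -> float:
    """Toy stand-in for an 'is this going well?' model: counts warm words."""
    warm = {"haha", "love", "great", "same!", "agreed"}
    words = set(text.lower().split())
    return 0.2 * len(words & warm) - (0.3 if "sorry" in words else 0.0)


def suggest_action(state: ChatState) -> str:
    """Toy policy: nudge toward a date once rapport is high enough."""
    if state.rapport > 1.0:
        return "nudge_ask_for_date"
    if state.rapport < -0.5:
        return "offer_graceful_exit"
    return "keep_chatting"


def on_message(state: ChatState, text: str) -> str:
    """Update state with a new message and return the bot's next nudge."""
    state.messages.append(text)
    state.rapport += score_message(text)
    return suggest_action(state)
```

The interesting design question is exactly the one raised above: the chat model can stay fluent-but-shallow as long as the scoring and policy models are the ones doing the actual matchmaking work.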
by 13of40 on 1/3/23, 1:52 PM
by makerofspoons on 1/3/23, 2:05 PM
A startup announced just over a week ago that they're already performing solar geoengineering with sulfur particles: https://www.technologyreview.com/2022/12/24/1066041/a-startu...
The US has started a 5 year program to research climate interventions. I expect that as it becomes clear we're going to miss the 1.5 degree Paris goal, funding will explode in this area: https://www.economist.com/interactive/briefing/2022/11/05/th...
by janandonly on 1/3/23, 1:43 PM
Dystopia: controls on how we spend our money. No matter if we earned it ourselves or were gifted the money. No buying or selling without it being whitelisted. Maybe in the form of CBDC’s?
Utopia: no more spamming because no one can be anonymous.
Dystopia: every service, platform and protocol will need KYC options.
by ermir on 1/3/23, 1:45 PM
This is not new, it's been going on for at least 200 years, but finally the technology exists to do it in a decentralized, scalable fashion. Think of Nudge Theory, but applied at a mass scale and in every domain.
by maxbond on 1/3/23, 1:31 PM
Really this is an implicit biometric authentication mechanism, and biometrics are usernames, not passwords. (Though I'd love to be wrong about this one.)
by r3trohack3r on 1/3/23, 1:51 PM
If you think AI is going to replace your job, it probably is. But that's only true if you consider your job to be the motions you go through day to day and not the problems you solve.
We are about to have a seemingly infinite army of mid-low skill workers standing by 24/7 to do our tasks for $0.002 a pass. It's time to start thinking about how you'd deploy that army at your daily tasks and integrate their resulting work. We're all leaders now.
This is a utopia. We're just too "in the thick of it" to realize it.
by Shinmon on 1/3/23, 2:10 PM
Even with alt-right fake news, people still talk in real life, and that is what really drives it. People come together and think that everyone shares their opinion because of the 25 people in the same pub.
Targeting specific individuals could be simplified by AI, but in the end, even after a few questions, ChatGPT becomes kind of repetitive in nature.
What I see happening soon:
- A lot of creative work will be aided by AI. This will create a divide, and some people/industries will fall behind because they do not adapt.
by ianai on 1/3/23, 1:48 PM
Utopia: people massively reject the noisiest and most inflammatory parts of the internet. Doesn’t mean reverting to 20 years ago. Just taking things with a grain of salt. And lose the “conspiracy theory” junk.
by neilv on 1/3/23, 2:15 PM
Just this morning, I sifted through dozens of startup job posts (matching a search intended to find appealing ones), and there was only one that didn't have a mission I considered awful.
A high rate of cruddy-sounding startups has always seemed the case on that jobs site (not the YC one), especially during the blockchain bubble. But today it seemed worse than last time I looked.
Maybe the recession panic means fewer startups are getting funding to plausibly try to make the world better.
by lamontcg on 1/3/23, 7:14 PM
The bulk of people are utterly gullible. You don't have to look further than the creative writing exercises on r/tifu that make the front page of reddit.
The major application of AI/deepfakes/etc is going to be to continue to bombard the internet with fake information, driving a wedge between people and reality.
The result is going to be more conspiracy theories going mainstream, more people believing "counterintuitive" things about how human systems work, etc.
I don't see us generating an immune-system-like response to this anytime soon, and suspect it will require some kind of massive WWII-scale tragedy before policymakers stop treating it as a game and start taking it seriously.
This won't be done by "terrorists", it will be done in bulk by nation-state actors and billionaires and their proxies.
Right now we can still identify the obvious bot accounts on twitter, reddit, etc. In the future, ChatGPT-like accounts will be running continuously building up accounts with months of history on random subjects (although consistent subjects per account) before they are employed to reinforce some narrative. They'll even get properly offended and flip you shit when you accuse them of being bots.
The only way out of it may be having the government certify who is real, mandating that people use their real identities on social media, and requiring social media to collect proof of who their users are. Which is its own dystopia.
by knaik94 on 1/3/23, 3:16 PM
LLM fine tuning on consumer hardware is not reasonable right now, but it will be soon. And cloud computing is expensive but will get you results.
I think we're going to see a problem with AI "impersonating" people. Sometimes it will be used for scams, other times porn, but the case that feels most dystopian is using it on ex-partners or dead people. Both situations have different motivations, but amount to essentially the same thing: training AI on the likeness of someone in a way that violates their personal boundaries.
People have already become attached to AI chatbots and used things like Replika AI as surrogate relationships. I think most people underestimate how likely it is for a human to form an unhealthy bond with an AI despite knowing it's an AI. I feel like with other things, like deepfakes, you can at least fact check to some degree.
I don't know how to feel about the idea of someone intentionally making an AI of themselves in order to make money in some way. The issue of consent goes away, but that opens up a question of responsibility. Chat logs with past partners are unfortunately a rich source of training data. How far can revoking consent extend if someone is using information that was consensually and willingly given to them? How many people would actually respect those kinds of boundaries when in a very vulnerable and hurt state after a breakup? I don't like the answer that thought exercise leads me to.
by lamontcg on 1/3/23, 7:25 PM
This will get worse as some people defend the current practices as useful economics, because most of us get cheap services most of the time, with only a few people in the herd getting singled out for the Kafka treatment. There will be strong opposition in a lot of quarters to imposing any kind of regulations to fix these problems, and they will become entrenched and just the way of living and doing business. People will start to openly talk about bribing tech companies as a cost of doing business as the US slowly slides into more of a third-world mentality. We'll wind up in the world of _Brazil_ (the movie), only with private enterprise bureaucracy leading the way.
by Atheros on 1/3/23, 4:33 PM
AR will marginally improve the life of anyone who uses it and worsen the life of everyone around them, including people who themselves use AR, especially as the usage rate rises.
• Right now it is possible to opt out of location tracking in cities by simply leaving your phone at home or turning it off temporarily. If 10% of people start using AR, that will no longer be possible due to facial recognition.
• Right now, you can mostly tell when someone is recording you in such a way that the video could be used against your interests (aiming a personal cell phone at you or, if and only if you intend to commit a crime, stores' security cameras (which don't record audio)). Also, right now, when you make a small mistake, you can presume that you were not recorded because people will not have had time to get their phone out and start recording. You apologize and life goes on. With 10% of people using AR, you must presume to be recorded at all times by people who can and will upload the video for everyone to see in exchange for minuscule social media engagement.
• Right now, it is already relatively socially unacceptable to tell a friend or peer to stop using their phone in a social environment, like during dinner. I have experienced that it's a sure-fire way to ruin a dinner date. They'll comply and silently resent you. I see no reason why it would be different with ubiquitous AR. This will cause problems and I have a feeling that society's eventual solution will tend toward "let people do what they individually want" rather than the "we agree to stop using AR during this event".
• The number of people who cannot live without AR is going to be shockingly high. It will not be possible to get to know young people as people because their thinking process and personality will be so intertwined with AR that they shut down without it. Tim Cook: "So I think that if you, and this will happen clearly not too long from now, if you look back at a point in time, you know, zoom out to the future and look back, you'll wonder how you led your life without augmented reality. Just like today, we wonder, how did people like me grow up without the internet. And so I think it could be that profound, and it's not going to be profound overnight".
by tjpnz on 1/3/23, 2:12 PM
by ghiculescu on 1/3/23, 1:56 PM
I’ll be worried when people invent AI use cases that can’t be done at all without it.
by gjvnq on 1/3/23, 4:15 PM
2. Stylometrics AI gets so good that the only way to remain anonymous/private is to follow boring writing manuals, thus killing a lot of creative writing on sensitive topics.
3. People spending so much time online that neighbours can barely talk to each other, as they use the same words with subtly different meanings. We traditionally saw this happening with people of different professions (e.g. law speak is very different from doctor speak). But having this phenomenon happen for every single fandom or microlabel can lead to a lot of political trouble as voters can't reach common ground.
4. Companies will start storing fake data on their servers so as to minimize the value of the data for cybercriminals.
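The stylometrics idea (point 2 above) rests on the fact that an author's habitual use of common function words is a surprisingly stable fingerprint. A minimal, self-contained sketch of the principle (real systems use far richer features and learned models; the word list and texts here are just illustrative):

```python
# Toy stylometry: fingerprint each author by their function-word
# frequencies, then attribute an anonymous text by cosine similarity.
import math
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it",
                  "is", "was", "i", "for", "on", "you", "but"]


def fingerprint(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def most_similar_author(anon_text: str, corpus: dict[str, str]) -> str:
    """Attribute an anonymous text to the stylistically closest author."""
    anon = fingerprint(anon_text)
    return max(corpus, key=lambda a: cosine(anon, fingerprint(corpus[a])))
```

Which is exactly why "follow a boring writing manual" works as a countermeasure: it pushes everyone's function-word distribution toward the same bland average.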
by RivieraKid on 1/3/23, 2:09 PM
- Self-driving cars will become widely available. It's only a matter of time until Waymo et al. scale to everywhere.
by catclone4355 on 1/3/23, 1:47 PM
So are kidney transplants for cats. https://www.vetmed.wisc.edu/dss/mcanulty/felinekidneytranspl...
Someone's going to put these together, and we'll have The Island for cats. https://en.wikipedia.org/wiki/The_Island_(2005_film)
by BirAdam on 1/3/23, 1:34 PM
As such, a terrorist organization doing so wouldn’t surprise me either.
As for what I see happening in a dystopian sense, I think that the hacking of smart homes will become more common. I think the hacking of smart cars will become more common. I think that at some point, the central banks will introduce CBDCs and governments will gain the ability to completely destroy political opponents via monetary control.
by deterministic on 1/4/23, 12:13 AM
The ruling classes will make sure that the massive underclasses get just enough money to stay alive and not revolt, using minimum wage jobs that just barely make survival possible to control the masses.
The middle class will continue shrinking and join the underclasses as AIs and robots eventually take over most of the high-paid work. Massive corporations will own most of the land, controlling who gets to have a roof over their heads and who doesn't.
by LinuxBender on 1/3/23, 1:38 PM
Network Connected Vehicles - This is already a thing. At best this will result in people's cars being bricked by skiddies. This can be used to control in real time where people can drive. People will have to start paying to "unlock" services in their car that are normally basic functions of a car, e.g. get a pop-up that says one has to pay to use the window defogger. At worst this will turn into a dark-web business for targeted assassination, or be used to silence people who speak out of turn. There are some videos of people's cars doing bad things and the driver fighting for control and ultimately losing. I can only hope more people implement their own private CCTV systems, not tied to any clouds, to document more of these.
Personal Social Credit Scores - All manner of companies are signing onto ESG believing they will benefit financially by signalling good intentions. Some stock monitoring sites are already factoring in ESG scores. I foresee this devolving into personal social credit scores that would manipulate public behavior, and ultimately anyone not aligned with the system would be ostracized from some aspects of society, or at least business. This will tie into the above Network Connected Vehicles. Park near a bar and get on a watch list for potential alcoholics. Park near a strip club and your spouse starts getting pop-ups for marriage advice. Walk past a digital billboard and be publicly shamed for being out of alignment with group-think, then be mocked by the billboard when your heart rate increases as per your body monitor. Score goes low, insurance and rent costs will increase, social benefits decrease.
Social Media Algorithms applied outside of the web - Social media organizations have already jumped the shark and many are finally catching onto this. This will start creeping into "smart" devices to manipulate people. Body monitors, smart home systems, AR headsets, etc... Facial recognition will tell the billboards and local businesses who you are, how much money you have and if you are aligned with correct-think. Monitors in police cars will identify people and show their social credit score, who purchases weed, who may be armed as they are driving by.
Network Connected Body Monitors - See Network Connected Vehicles and Personal Social Credit Scores
by 082349872349872 on 1/3/23, 1:37 PM
by Tepix on 1/3/23, 1:41 PM
One little consolation: eventually the trollbot armies will waste resources by mostly talking amongst themselves.
Other predictions: quite a few jobs will be lost, e.g. text editors and translators.
There will be new ethical questions: Is it OK if a chatbot aids you in writing your dissertation? How much is OK? 20%? 40%?
All in all, doesn't sound so great, does it?
Looking on the bright side, I am sure we will see creative people come up with great stuff (films, music, images, VR environments, interactive experiences).
by bitwize on 1/3/23, 2:39 PM
Turns out, we can't even so much as carry around cellphones without risk of significant mental illness from the effects of that.
by ben_w on 1/3/23, 2:00 PM
Or the same stuff but used for sex: Like DeepFake porn, but I suspect, much much more horrifying for the victims.
by sklargh on 1/3/23, 1:52 PM
by miki123211 on 1/4/23, 2:29 AM
Whisper, the new speech recognition model from OpenAI, is very, very good with English, and really decent with other languages. With Polish, for example, on a normal conversation it gets almost everything except some very specialized words.
For now, such models require somewhat significant resources, and running them for every conversation of every citizen, 24/7, isn't really feasible, but Moore's law in AI is going strong, and that will change in a few years. Even if you want to run the largest possible model right now, an RTX 3090 can do it faster than realtime.
I'm pretty sure that, with the amount of data the Chinese government has access to, they can build something even better than Whisper, and there are enough labor camps in China in case they don't.
I'm imagining something like a watch, or maybe AR glasses, which every citizen has to wear and which transmit everything they hear in occasional bursts, encoded with something like opus to save on bandwidth.
Once you get every conversation of every citizen transcribed and meticulously cataloged, you can do a lot of pretty interesting things. You can pass it through transformer models for sentiment analysis (to flag anti-party opinions), you can search for specific subjects (and with AI, those searches can be much more advanced than a simple keyword search), etc. Because of how smart these AI systems actually are, you can't trivially get around them by rephrasing what you say. If you start replacing "I want the men in power to die" with "I want the humans in electricity to paint", it will eventually pick up on that and flag you anyway.
If your watches also identify who's around and use voice recognition to figure out who's talking to whom, you basically have the whole social graph mapped out. One criminal slips up and gets flagged, and you can start going from there. Decrease the alarm threshold for anybody in close contact with that person, take additional signals into account, like locations where trouble often starts, and that gives you a lot of possibilities.
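The "decrease the alarm threshold for contacts" step is simple graph bookkeeping once the co-occurrence data exists. A minimal sketch (all thresholds and the data shapes here are invented for illustration; real suspicion scores would come from the sentiment/search models described above):

```python
# Toy sketch: build a contact graph from co-located conversations, then
# apply a lower review threshold to anyone within one hop of a flagged
# person.
from collections import defaultdict

BASE_THRESHOLD = 0.9     # suspicion score needed to trigger review
CONTACT_THRESHOLD = 0.6  # lowered threshold for contacts of flagged people


def build_contact_graph(conversations):
    """conversations: iterable of sets of participant IDs heard together."""
    graph = defaultdict(set)
    for participants in conversations:
        for person in participants:
            graph[person] |= participants - {person}
    return graph


def flagged_for_review(scores, graph, known_flagged):
    """scores: {person: suspicion score}; returns the set to review."""
    contacts = set()
    for person in known_flagged:
        contacts |= graph.get(person, set())
    out = set()
    for person, score in scores.items():
        threshold = CONTACT_THRESHOLD if person in contacts else BASE_THRESHOLD
        if score >= threshold:
            out.add(person)
    return out
```

Adding the extra signals mentioned above (locations, additional hops) is just more terms in the threshold calculation, which is what makes this kind of system so cheap to extend once the transcription pipeline exists.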
by coldaxe_44 on 1/3/23, 2:30 PM
So, similar to Blume's ctOS in Watch Dogs. I think that is likely to happen.
by FpUser on 1/3/23, 2:00 PM
by zffr on 1/3/23, 2:07 PM
by psiops on 1/3/23, 1:27 PM
by tb_technical on 1/3/23, 6:58 PM
The normalization of genetic engineering will eventually allow us to make changes to people to make them more complacent for horrid conditions. Imagine an entire underclass that not only doesn't notice the squalor, but are too content to fight the abuse heaped upon them.
This, in conjunction with VR entertainment, will be a method used to permanently imprison the lower classes in a brief blissful existence before returning to their 16-hour shift at a suicide-net-equipped war crime factory.
by RunSet on 1/3/23, 1:51 PM
by tennisflyi on 1/3/23, 10:26 PM
by jstx1 on 1/3/23, 1:33 PM
by markus_zhang on 1/3/23, 2:17 PM
by vlfig on 1/3/23, 8:10 PM
by neilv on 1/3/23, 2:27 PM
In the mid-'90s, I prototyped a personalized newspaper, Web-scraped from news sites, and I was also manually reading all the major outlets that were online each day, and that immediately got me thinking of a risk...
You know how a pre-Internet politician, when speaking to one group (say, at a senior citizens luncheon fundraiser), might say one thing, and then say a different thing to another group, to manipulate them both? What happens when each person gets a personalized newspaper, and a bad actor could tailor it to push the buttons of each individual, on an automated mass scale?
However, one thing the Trump phenomenon showed was that one might not need to tailor messages to individuals' intimate profiles -- a very crude, low-tech, one-size-fits-all messaging can control a huge chunk of the population. The "visionary" thinking was barking up the wrong vector. So we might take an odd comfort from that.
by blooalien on 1/4/23, 5:05 AM