by firechickenbird on 2/27/23, 10:44 PM with 19 comments
by notShabu on 2/28/23, 12:26 AM
This artificially ages everyone, so now younger and younger people have to learn to cross-reference research papers, manage tribal alliances, make Bayesian inferences, deal with existential depression, navigate social hierarchies, etc...
Generative AI furthers this trend. Now individuals cannot simply BE in a reality but have to be responsible for constantly managing and curating their own realities while having a rough idea of how others are managing their own realities to work together, have relationships, etc...
I feel this is the root cause of people feeling overwhelmed. It's the responsibility of being constantly alert and ready to enforce boundaries. Like a soldier at war, fighting for the peace of a stable shared reality.
by swatcoder on 2/28/23, 12:34 AM
If so, that's not an apocalypse or the end of global communication, SaaSes, or computers -- it would be an adjustment after the last 30 years of forums, blogs, social networks, influencers, and other internet folk content, where we felt like we were connecting with real people all across the world.
But based on how these latter phenomena seem to have affected mental health and physical activity, that might be a good thing. It could be that the Social Media Age was an unsustainable cultural flash and that a different kind of lifestyle experience and internet culture will emerge in its place.
by Minor49er on 2/27/23, 11:05 PM
https://web.archive.org/web/20180703132951/https://automated...
Not to mention that bots have been flooding inboxes, message boards, chat rooms, and communities for about as long as anyone can remember
I am also tired of the fakeness, but I think that with enough exposure, the hype will die down and we'll start noticing more patterns that tip us off as to whether or not something is real (at least in areas where it matters)
by dstala on 2/28/23, 7:33 AM
If put to the right use, I am sure it will help. But I do agree: putting GPT into pretty much everything is not going to help.
by muzani on 2/28/23, 2:26 AM
Just like how you now get a bad feeling when you see the headline "10 Reasons Nirvana Changed Music Forever", you'll start steering around the AI-generated stuff.
by loveparade on 2/27/23, 11:58 PM
That's just going to be the new normal. It means there will be more services that rely on reputation and trust, e.g. verified accounts and credentials. This isn't great news for HN. Unless there are some big changes, I think HN is going to drown in generated content that's indistinguishable from low to medium quality human-written content.
by kelseyfrog on 2/27/23, 11:48 PM
by rchaud on 2/28/23, 2:49 AM
by brandon272 on 2/27/23, 11:28 PM
by noloblo on 2/27/23, 11:16 PM
>> But for a few thousand people, the mental health support they received wasn't entirely human. Instead, it was augmented by robots.
>> "People who saw the co-written GPT-3 responses rated them significantly higher than the ones that were written purely by a human. That was a fascinating observation," he said.
by skytreader on 2/28/23, 12:47 AM
I make this claim with the following observations:
- As with other aspects of life, the conservative forecasts are often closer to the truth than extreme optimism/pessimism, especially in the long run; there is almost always a regression to the mean, so to speak. The technologists fervently burning with an almost religious zeal for all these AI models are making some egregious claims like "Stable Diffusion/ChatGPT IS (human?) learning", "There is no difference between a human learning from an artist's corpus and a neural net training from said corpus", on which they build extremely shaky forecasts.
- Tech and philosophy aside, there are a lot of other hurdles AI must clear for it to be the disruptive technology it is portrayed to be. Just my least controversial example: people ask if LLMs will replace search engines anytime soon but, notably, ChatGPT can't fetch you the news, and "conversing" with it is an amusing but cumbersome interface. In the future, I think search engines and LLMs will share utility; based on my own experience using ChatGPT for programming, it is great for greenfield projects but horrible for working with pre-existing code. (Also, by saying "AI is not yet disruptive", I don't mean to imply its economic impact will be negligible---some people will definitely be affected, but not the people, nor in the ways, we currently expect. Also, my money is on the domain experts being mostly safe from this impact.)
- I've seen this hype cycle before: a new conceptual/business/technological framework shows an amusing/promising use case, which is then exploited to death by a wealth of startups and existing products pivoting to make it a "core" value prop. Remember when Google became a mobile-first company, so everyone prepared to be mobile-first too? When everything was social? That led to toothbrushes and culinary products having APIs so you could build mobile apps for them. Remember VR? Blockchain? Maybe I'm too jaded, but maybe I'm correct. Time will tell.
There's this tongue-in-cheek saying that advanced AI is whatever current AI can't do; in other words, researchers will never achieve "advanced AI" because the goal posts will keep moving. As a tech enthusiast who even did undergraduate AI research a decade ago, I feel bad harboring such negative prognoses about progress that is, hands down, impressive. But the way I like to think of it is not that AI progress is failing or underwhelming, but that AI progress is helping us understand more of our humanity; when we move the goal posts, it's because we see AI (not) doing something that we take for granted when dealing with actual humans.
by tonguetrainer on 2/28/23, 3:14 AM
I just don't see a problem with this. We are living in exciting times.
by ihucos on 2/28/23, 3:26 AM
by darth_aardvark on 2/28/23, 12:57 AM