by jeisc on 5/27/25, 9:09 AM with 5 comments
by rvz on 5/27/25, 9:56 AM
However, there is no agreed-upon definition of what "AGI" actually is, and it can mean anything. So the closest working definition is: what are the big AI labs actually "doing"?
Most of their work is building AI agents and chatbots that "replace" or "streamline" operations; i.e. more layoffs in favour of AI.
So assuming that progress continues, the logical end-game starts with something like a 10% increase in global unemployment, and you arrive at the conclusion that what "AGI" actually means is mass job displacement of knowledge workers, automated away by AI, creating a state resembling a Mad Max / Fallout scenario where humans are totally dependent on (effectively enslaved to) robots.
QED.
by A_D_E_P_T on 5/27/25, 9:56 AM
2025: They're increasingly agentic rather than sandboxed -- given tasks to autonomously perform in the real world. They're not markedly smarter than they were, but they have far larger context windows, so they can remember prior output and perform more complex or multi-step jobs adequately (e.g. generating reports from 150 web searches, or making videos by stringing together image generations). They don't appear to have inherent drives or motivations, but it's possible that they're getting there.
Future: Nobody knows. It's not clear that they can have wants, or ever become markedly smarter than the humans who build and train them. (I mean, they're already much smarter in some respects -- faster and vastly more erudite -- but they appear to lack that certain inventive spark that characterizes the best human geniuses.)
by brettkromkamp on 5/27/25, 5:44 PM
The risk of disrupting a sizable chunk of the world's wages is fuel for a separate post and conversation. IMHO it's the real risk of AI, and the Skynet scenario around ASI/AGI that has been popularized is a distraction.
There is no end goal. It's just a bunch of independent actors who are all trying to get ahead of their competitors a step at a time. Two steps if they're lucky. The collective system steers in whichever direction the previous set of steps determined, with very little potential for any individual actor to turn it around.
by sigwinch on 5/27/25, 10:04 AM
Right now, its mode is gathering info about the world. The most important questions end with "… to generate passive income for me", which after a turning point will become "… to generate passive income for my benefactor", and finally "… to dominate access to resources". AI will help you individually, because it's important for both of you to acquire capital for your benefactor.
by ipachanga on 5/27/25, 9:49 AM