by catskull on 11/30/24, 10:36 PM with 63 comments
by ozim on 12/1/24, 1:02 AM
You cannot equate a paperclip maximizer with SV companies, because the fitness functions are different.
I'm not saying the diamond industry didn't make the world worse off, but it still somehow didn't take over the world and make people eat diamonds for breakfast. By the same token, there will be no entity that can make all people everywhere eat strawberries for breakfast all year long.
The scary part is the finance industry, which is basically already self-conscious, with all the rules baked in and no single person able to grasp it.
Finance with AGI could already become a paperclip optimizer, except that it would only need energy. It wouldn't need humans anymore, so it would most likely fill the whole world with power plants and erase all other life just to have the electricity.
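(A toy Python sketch of the "fitness functions are different" point above; nothing here is from the article or the comment, and the objective functions and numbers are invented for illustration. The idea: a pure maximizer's objective has no brake term, while a firm's profit calculation includes externally imposed costs that grow with scale.)

    # Greedily convert resources one unit at a time, stopping only when the
    # marginal value of another unit is no longer positive.
    def convert(units, penalty_per_unit=0.0, price=1.0):
        converted = 0
        while converted < units:
            # Penalties (fines, boycotts, regulation) grow with scale; a
            # paperclip maximizer's fitness function has no such term.
            marginal = price - penalty_per_unit * converted
            if marginal <= 0:
                break
            converted += 1
        return converted

    world = 1_000_000
    print(convert(world))                         # maximizer: consumes all 1,000,000 units
    print(convert(world, penalty_per_unit=0.01))  # firm facing feedback: stops at 100 units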
by jackschultz on 11/30/24, 11:25 PM
Excellent quote. We say we're in a rational world where we make rational decisions in the societal game we've all been told we have to play: capitalism, where money is the determinant of success versus failure for corporations, families, and individuals.
But step back and look at the question of whether it's rational for us as humans to be playing this game, and it is not rational at all. Why are we not deciding that food and housing for everyone is the determinant of a country's or society's success? Or happiness?
60 Minutes ran a segment about Bhutan a couple of weeks ago [0] about this. They lived by something they named "Gross National Happiness". That feels weird to type, but again, stepping back, it's because our whole lives we're told that "Gross Domestic Product", overall money, is the determinant of "best", and that's so ingrained in us.
On a different note, Ted Chiang's short story collections [1][2] are incredibly, incredibly good. I'm reading them again and read "Story of Your Life" earlier today. Being able to write fiction like that makes me much more willing to trust what someone has to say on other topics. And saying that opens up another topic: how we're told to downplay fiction compared to non-fiction, when our brains evolved for stories. But that's for another comment.
[0] - https://www.youtube.com/watch?v=7g_t1lzn-1A
[1] - https://en.wikipedia.org/wiki/Stories_of_Your_Life_and_Other...
by fragmede on 11/30/24, 10:58 PM
I wonder if he's seen the latest videos of staged demos where humanoid robots can fold clothes.
Edit: the title didn't say 2017 when I commented.
by efitz on 12/1/24, 2:49 AM
Or, more precisely, it is that we allow a group of people, under the control of a single person or a very small number of people, to amass incredible power, and to do so in an amoral framework in blind service to the goal of stock-price growth.
The problem, in my mind, is one of scale. If you limit the scale, then all the other problems become manageable.
We wouldn't need antitrust laws anymore if our tax laws made it unprofitable to own shares in a company with, for example, 50% of the search or online retail market.
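(A hypothetical sketch of that proposal; the 50% threshold comes from the comment, but the rate schedule is my invention. The point is just that a surtax on gains that ramps steeply past the cutoff would make a dominant position unprofitable to hold.)

    # Capital-gains surtax that scales with the issuing company's market
    # share: ordinary below the threshold, ramping toward confiscatory above.
    def surtax_rate(market_share, threshold=0.50, base=0.20):
        if market_share <= threshold:
            return base
        return min(1.0, base + 2.0 * (market_share - threshold))

    for share in (0.10, 0.40, 0.50, 0.60, 0.80):
        print(f"{share:.0%} market share -> {surtax_rate(share):.0%} tax on gains")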
by jprete on 11/30/24, 11:18 PM
But, I think, it's the very act of optimizing on a metric that is the source of the destruction. Unmeasurable human values can't survive an optimization process focused on measurable ones.
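(A toy simulation of that point, mine rather than the commenter's; the proxy and value functions are invented. A hill-climber that only sees a measurable proxy will happily trade away an unmeasured value that shares its inputs.)

    import random

    def proxy(x, y):       # the measurable metric the optimizer sees
        return 3 * x - y

    def true_value(x, y):  # the unmeasured human value; never enters the search
        return x + 4 * y

    x, y = 1.0, 1.0
    for _ in range(1000):
        dx, dy = random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)
        # Accept any step that improves the proxy, blind to everything else.
        if proxy(x + dx, y + dy) > proxy(x, y):
            x, y = x + dx, y + dy

    print(f"proxy      = {proxy(x, y):.1f}")       # climbs steadily
    print(f"true value = {true_value(x, y):.1f}")  # driven negative as y is sacrificed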
by JKCalhoun on 11/30/24, 11:16 PM
And here I was going to suggest that billionaires and unbridled mega-corporations were the fundamental risk to the existence of human civilization.
> Musk gave an example of an artificial intelligence that’s given the task of picking strawberries.
Also odd, since it's more likely that a corporation, in the name of maximizing profits, would make decisions that threaten humanity. We can start with Bhopal, India. If you find fault with that example, I am sure there are plenty of others, some probably a good deal more subtle, that others can suggest.
Me, not worried at all about AI.
by kortilla on 12/1/24, 12:30 AM
Humans with morals are still very much in the decision chain, and there is obviously a lot of debate about their morals, but their presence makes such a vast difference that the comparison to the strawberry AI is completely invalid. The strawberry AI isn't considering humans at all.
The article then builds on that false comparison throughout, so there isn't much to gain from the rest of it.
You could make the same lazy comparison to completely socialist, centralized decision-making by a government optimizing for a single metric (voter approval, poverty levels, whatever). It has nothing to do with capitalism or the economic system.
TL;DR: the article says mega-corps are the same as dangerous AI because they make optimizations in favor of profit that some people disagree with.