from Hacker News

You shouldn't build your career around existential risk

by vrnvu on 12/25/24, 11:21 AM with 2 comments

  • by Fricken on 12/25/24, 11:43 AM

    >I mean, what happens to Eliezer Yudkowsky's -- the biggest advocate of stopping all AI research due to AI existential risk -- career if it turns out that AI risk is simply not an existential concern?

    Either AGI arrives and kills us all, or it arrives and automates all our jobs, or it doesn't arrive and Yudkowsky can carry on with his career. Am I missing something?

  • by vouaobrasil on 12/25/24, 12:14 PM

    > I mean, what happens to Eliezer Yudkowsky's -- the biggest advocate of stopping all AI research due to AI existential risk -- career if it turns out that AI risk is simply not an existential concern? Would anyone care about him at all?

    I think the post misses the fact that risk isn't binary: even if AI is not a literal existential threat, it is still an immense risk, so working to stop it remains a worthy activity that can have many positive results for society.