Admin Posted June 11, 2022

"Ethereum creator Vitalik Buterin believes that unfriendly artificial intelligence poses the biggest risk to humanity..." reports a recent article from Benzinga:

[In a tweet] Buterin shared a paper by AI theorist and writer Eliezer Yudkowsky that made a case for why the current research community isn't doing enough to prevent a potential future catastrophe at the hands of artificially generated intelligence. [The paper's title? "AGI Ruin: A List of Lethalities."]

When one of Buterin's Twitter followers suggested that World War 3 is likely a bigger risk at the moment, the Ethereum co-founder disagreed. "Nah, WW3 may kill 1-2b (mostly from food supply chain disruption) if it's really bad, it won't kill off humanity. A bad AI could truly kill off humanity for good."

Read more of this story at Slashdot.