(cont'd from yesterday's post)
Yudkowsky (here are his credentials; he doesn't seem to be a wacko) published a strong, extreme warning in March:
"[T]he most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”
He explains plainly why he's so worried:
"Progress in AI capabilities is running vastly, vastly ahead of . . progress in understanding what the hell is going on inside those systems."
He makes a really persuasive case for his conclusion, and I recommend that you read it. His conclusion, and his recommendation to all the nations of the earth, is to:
"Shut it all down."
(cont'd tomorrow)