Wednesday, May 3, 2023

Fear of AI 3

(cont'd from yesterday's post)

You may wonder whether mainstream, credible people take Yudkowsky's fear of runaway artificial intelligence seriously. Some really do. His forceful argument for shutting AI development down altogether has drawn the attention of thoughtful people.

Writing for Bloomberg, historian Niall Ferguson is one of them.

Nuclear weapons and biological warfare are two extreme dangers the world has managed in the recent past, so he suggests we might treat AI the same way: all governments and private companies would agree (as with the nuclear non-proliferation treaties) to stop pursuing AI research. Umm . . . highly unlikely.

Even AI itself predicts undesirable results. Asked what negatives might come from large language models, GPT-4 gave two answers: 1) inauthentic, non-creative written work, and 2) fake news, propaganda, and misinformation.

from Bloomberg

(cont'd tomorrow)
