Friday, March 31, 2023

Stop advancing 2

(cont'd from yesterday's post)

Bad outcomes that worry those tech leaders include propaganda and misinformation, the loss of valuable jobs, and human beings losing control of our civilization: 

"AI labs [are] locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."

How likely is it that the letter's request will actually pause every AI lab's push to advance artificial intelligence for six months? The chance that every AI lab will stop advancing is ~zero. The chance that corrupt results in some form will eventually follow is ~certain.

The letter suggests that governments may have to step in and enforce a pause on AI research. Would that eliminate risk and prevent the bad outcomes? Again, the likelihood is ~zero.

AI is a tool. Every tool can be used for good and evil ends. What will determine whether the outcome is good or evil? Only the virtue and values of the tool-maker and the tool-user. The task for society is (as it has always been) to try to create good, virtuous individual people. 

It's hard to do.
