(cont'd from yesterday's post)
Bad outcomes that worry those tech leaders include propaganda and misinformation, the loss of valuable jobs, and human beings losing control of our civilization.
How likely is it that the letter's request will actually produce a six-month pause at every AI lab pushing to advance artificial intelligence? The chance that every AI lab will stop advancing is ~zero. The chance that corrupt results of some form will eventually follow is ~certain.
The letter suggests that government may have to step in and enforce a pause on AI research. Would that eliminate the risk and prevent the bad outcomes? Again, the likelihood is ~zero.
AI is a tool. Every tool can be used for good and evil ends. What will determine whether the outcome is good or evil? Only the virtue and values of the tool-maker and the tool-user. The task for society is (as it has always been) to try to create good, virtuous individual people.
It's hard to do.