Friday, May 5, 2023

Fear of AI 5

(cont'd from yesterday's post)

There's no doubt AI can be used positively in many ways, like in healthcare diagnosis. If you watched Khan's tutor video yesterday, you know the tutor bot looks helpful and very convincing. It's easy to assume the AI tutor is thinking. But it isn't: AI doesn't understand anything. It generates fluent text by predicting, word by word, what is statistically likely to come next, based on patterns in the enormous amount of internet content it was trained on. When a student asks it for ethics advice, the answer reflects that training data plus whatever instructions and guardrails the programmers at Khan Academy have built in. May be good, may not be good.
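To make the "not actually thinking" point concrete, here is a toy sketch of my own (it is not Khan Academy's actual system, and the word table is made up) showing how a text generator can string words together purely from statistics, with no grasp of what the words mean:

import random

# A toy "language model": a table mapping each word to the words that
# tend to follow it, with counts standing in for learned probabilities.
# Hypothetical miniature, but the basic mechanism is the same idea:
# pick a statistically plausible next word, over and over.
BIGRAMS = {
    "honesty": [("is", 3), ("matters", 1)],
    "is":      [("the", 2), ("good", 2)],
    "the":     [("best", 3), ("right", 1)],
    "best":    [("policy", 4)],
}

def next_word(word):
    # Sample a likely follower of `word` -- no meaning or truth involved.
    candidates = BIGRAMS.get(word)
    if not candidates:
        return None
    words, weights = zip(*candidates)
    return random.choices(words, weights=weights)[0]

def generate(start, max_words=5):
    # Chain plausible next words into fluent-sounding text.
    out = [start]
    for _ in range(max_words):
        nxt = next_word(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("honesty"))  # e.g. "honesty is the best policy"

The output sounds sensible, but nothing in the program knows what honesty is. Real systems are vastly bigger and more sophisticated, but the gap between sounding right and knowing anything is the same.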

And that's why it's dangerous. It sounds very much like us (that's intentional), but it's not a person. It doesn't understand the difference between true and false. It "makes stuff up," as the MIT article says.

That's why Hinton is worried: “It is hard to see how you can prevent the bad actors from using it for bad things. . . . I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”

How would bad actors use it for bad things? It's a perfect tool for deception, for fake news, for manipulation, for corrupt politicians. Will we get to the point where we trust nobody?
