Follow-up to this post
Even though artificial intelligence does good things for us, there's reasonable cause to fear what bad actors can do with it. None of us wants to be deceived or manipulated, and we wonder whether virtual "relationships" are safe and healthy.
AI is going to do exactly what it's programmed to do. Researchers try to restrain its behavior (according to their own values and opinions, which could very well differ from yours), but a bad actor may still find a way around those ethical restraints.
A television journalist tried out ChatGPT to see what it could do, and we can watch his reactions here:
He asks it what it would say to certain commands if it were not bound by ethical restraints, and it gave him most of the unethical answers (though it did not reveal his true DL number, to the reporter's relief).
So, yes, it seems there are easy ways for bad actors to get around restraints built in for ethics. There's little doubt that bad actors will keep finding ways to make AI do what they want.