Enthusiasm is running high over ChatGPT (and other bots) in education, business, health care, everywhere. It can generate text on any subject you give it, and the result will read as if a human being wrote it.
For example: if you want to persuade your city council to implement a certain program, you could tell ChatGPT to summarize the good results attained when that program was used in other cities. You could hand council members the summary and save yourself a lot of work.
But it's widely understood that the AI program may give false information while sounding just like a person, maybe a friend, who is trying to give you true information. So you're inclined to trust it. That could be a serious mistake.
Clearly, it's a perfect tool for someone who wants to mislead or misinform. We're going to have to learn how to tell true information from false. A lot of people are ready to believe whatever the generated text (e.g., Khan's AI counsellor/tutor) tells them.
(cont'd tomorrow)