(cont'd from yesterday's post)
Gathering and processing data is the function of narrow, or weak, artificial intelligence. It can analyze immense amounts of data and answer questions based on what it finds. That helps human beings reach conclusions, and it saves us from doing all the data gathering ourselves.
But it's not equal to the kind of intelligence humans have. It doesn't question the truth of the data it gathers, and it doesn't really understand what the words mean. That's why ChatGPT can write an article based on common word sequences on the internet, but it can also make big and obvious errors like this:
Human: How many bears have Russians sent into space?
ChatGPT: According to estimates, about 49 bears have been sent into space by Russia since 1957.
Humans will continue to improve ChatGPT and other "large language models" like it. But one college professor is cautious:
"My fear is that people will be so bedazzled by articulate LLMs that they trust computers to make decisions that have important consequences."
from Mind Matters