(cont'd from yesterday's post)
Lennox (in yesterday's video) says that, though we now use it for many impressive and beneficial purposes, narrow AI is like a sharp knife: it can be used for life-saving surgery or for murder. The ethics of a narrow AI program (such as an autonomous-driving system) will reflect the ethics of the human beings who programmed it. Example: what will the car target for impact in an emergency?
Narrow artificial intelligence doesn't know what it's doing. It's not general AI.
But "Jarvis" or "Vision," the characters in The Avengers movies, are artificial general intelligence (AGI). They simulate human intelligence much more than narrow AI does.
In reality, many believe we're far from achieving AGI, because science still does not know exactly what human consciousness is or what a thought is. We can hardly build something we can't even define. When we don't understand it, there's nothing to compute.