This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
What is and is not AI is, to some extent, a matter of definition. There is no denying that AlphaGo, the Go-playing artificial intelligence designed by Google DeepMind that recently beat world champion Lee Sedol, and similar deep learning approaches have managed to solve quite hard computational problems in recent years. But will deep learning get us to full AI, in the sense of an artificial general intelligence, or AGI, machine? Not quite, and here is why.
One of the key issues in building an AGI is that it will have to make sense of the world for itself, developing its own internal meaning for everything it encounters, hears, says, and does. If it fails to do this, you end up with today's AI programs, where all the meaning is actually provided by the designer of the application: the AI basically doesn't understand what is going on and has only a narrow domain of expertise.