From medical imaging and language translation to facial recognition and self-driving cars, examples of artificial intelligence (AI) are everywhere. And let’s face it: although not perfect, AI’s capabilities are pretty impressive.
Even something as seemingly simple and routine as a Google search represents one of AI’s most successful examples, capable of sifting vastly more information at a vastly greater rate than humanly possible and consistently providing results that are (at least most of the time) exactly what you were looking for.
The problem with all of these AI examples, though, is that the artificial intelligence on display is not really all that intelligent. While today’s AI can do some extraordinary things, the functionality behind its accomplishments works by analyzing massive data sets and looking for patterns and correlations without understanding the data it is processing. As a result, a system relying on today’s AI algorithms, which require thousands of tagged samples, only gives the appearance of intelligence. It lacks any real, common-sense understanding. If you don’t believe me, just ask a customer service bot a question that is off-script.
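To make the point concrete, here is a minimal, purely illustrative sketch (not from the article, and far simpler than any production system) of what "looking for patterns in tagged samples" means: a toy nearest-centroid classifier. It labels a new input by surface similarity to the averaged training examples, with no comprehension of what "cat" or "dog" actually mean.

```python
# Toy "pattern matcher": classifies by statistical similarity to labeled
# samples, without any understanding of the labels themselves.
# Features here are hypothetical 2-D vectors, chosen only for illustration.

from collections import defaultdict

def train_centroids(samples):
    """Average the 2-D feature vectors for each label (a nearest-centroid model)."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for features, label in samples:
        for i, value in enumerate(features):
            sums[label][i] += value
        counts[label] += 1
    return {label: [v / counts[label] for v in vec] for label, vec in sums.items()}

def classify(centroids, features):
    """Pick the label whose centroid is closest: pure correlation, no comprehension."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(centroids[label], features))

# A real system needs thousands of tagged samples; four suffice for the sketch.
training = [([1.0, 0.0], "cat"), ([0.9, 0.1], "cat"),
            ([0.0, 1.0], "dog"), ([0.1, 0.9], "dog")]
model = train_centroids(training)
print(classify(model, [0.8, 0.2]))  # resembles the "cat" pattern
```

An input that sits between the learned patterns, or outside them entirely, still gets forced into one of the known labels; the model has no way to say "I don't understand this," which is exactly the off-script failure mode described above.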
AI’s fundamental shortcoming can be traced back to the assumption at the heart of most AI development over the past 50 years, namely that if difficult intelligence problems could be solved, the simple intelligence problems would fall into place. This turned out to be false.
In 1988, Carnegie Mellon roboticist Hans Moravec wrote, “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” In other words, the difficult problems turned out to be simpler to solve, and what appear to be simple problems can be prohibitively difficult.