A really great think piece on AI and what it means to be human.
http://radar.oreilly.com/2015/06/artificial-intelligence.html
What do we mean by “artificial intelligence”? We like to point to the Turing test; but the Turing test includes an all-important Easter Egg: when someone asks Turing’s hypothetical computer to do some arithmetic, the answer it returns is incorrect. An AI might be a cold calculating engine, but if it’s going to imitate human intelligence, it has to make mistakes. Not only can it make mistakes, it can (indeed, must) be deceptive, misleading, evasive, and arrogant if the situation calls for it.
That’s a problem in itself. Turing’s test doesn’t really get us anywhere. It holds up a mirror: if a machine looks like us (including mistakes and misdirections), we can call it artificially intelligent. That begs the question of what “intelligence” is. We still don’t really know. Is it the ability to perform well on Jeopardy? Is it the ability to win chess matches? These accomplishments help us to define what intelligence isn’t: it’s certainly not the ability to win at chess or Jeopardy, or even to recognize faces or make recommendations. But they don’t help us to determine what intelligence actually is. And if we don’t know what constitutes human intelligence, why are we even talking about artificial intelligence?
That’s a big chunk of text, but it pretty much sums it up.
Bottom line: AI is going to take some time to really develop. I suspect it won’t be a black-and-white case of “there it is”. It could well be that each time some software comes close, we simply raise the bar.
I can’t put my finger on it just now, but I recall someone saying the Turing test is roughly at the level of a four-year-old child. Sure, four-year-olds are smart, but really, if that’s the standard for saying AI has arrived, I’m not sure a bunch of geeks are going to rest there.
Added to the search for AI is the task of simply getting a computer to be a little more flexible in its rule set.
Rather than a simple “if the room is empty, turn off the light”, we need something like “if someone leaves the room, but another person is walking toward the room, leave the light on”. (A weak example, but hopefully you get my point.)
In other words, most computer code reacts rather than predicts.
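To make that concrete, here’s a minimal sketch in Python. The occupancy count and “people approaching” signal are invented purely for illustration; this isn’t any real home-automation API, just the reactive rule next to a slightly more anticipatory one.

```python
# Illustrative sketch only: the sensor inputs are made up for this example.

def reactive_rule(room_occupants: int) -> bool:
    """Classic reactive rule: the light is on only while the room is occupied."""
    return room_occupants > 0

def anticipatory_rule(room_occupants: int, people_approaching: int) -> bool:
    """Slightly smarter rule: keep the light on if someone is about to enter,
    even though the room is momentarily empty."""
    return room_occupants > 0 or people_approaching > 0

# Someone just left, but another person is walking toward the room.
print(reactive_rule(room_occupants=0))                            # False -> light switches off
print(anticipatory_rule(room_occupants=0, people_approaching=1))  # True  -> light stays on
```

Even the “smarter” version is still just a hand-written rule; it only looks predictive because someone anticipated that one situation in advance.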
Even this sort of AI is out of my grasp at the moment.