The idea of humanoid robots - of intelligent machines perhaps more formidable than the humans who made them - has been a staple of science fiction almost since the genre's inception. It's no surprise, really. I mean, come on - robots are inarguably one of the coolest things ever.
Nowadays, it feels like the sorts of robots we see in science fiction are well on their way to becoming a reality. 'Intelligent' machines - technology which was effectively inconceivable less than fifty years ago - have taken the market by storm. Everyone's all abuzz about how we might see robots coming very near to human intelligence within the next decade.
I'd love to believe it. I'd love to think that we've managed to create real life; that we've managed to create living, thinking machines. Thing is...I'm not so sure that's possible. Not yet, at any rate. Not for a long time. The unfortunate truth is that the intelligence we've developed so far is...well, let's just say there's a big emphasis on the 'artificial' side of things.
Sure, we've managed some pretty incredible stuff. We have learning machines. We have AI that can develop games. We have programs that can hold conversations. At the same time, though, these machines aren't actually intelligent in the conventional sense. They don't have anything remotely approaching even the most basic of human thought processes.
Professor Luciano Floridi of Oxford probably put it a little better than I can:
"If I were to summarize the history of AI over the past 60 years or so, you'd see a dichotomy," he explains in a recent YouTube video. "On the one hand, you have AI as part of the engineering department. That means that it doesn't matter whether or not there's really AI; what matters is that it does a good job. The joke is 'don't ask whether a submarine can actually swim as long as it does what it's supposed to do.' Is it really swimming? That's not the issue."
"On the other hand, you have AI as part of the cognitive science department. At this point, it's not about the job, it's about the known biological implementation of intelligence. Maybe it's just a little bit of intelligence - it's not as intelligent as your dog - but it's an enormous success if it's intelligent at all. This has been a complete disaster - we have no intelligence whatsoever to speak of in terms of AI today as you'd expect from a cognitive science perspective, but we have amazingly smart machines."
In short, we have machines that are very good at accomplishing particular tasks, but not terribly intelligent in the traditional sense. But have we really only managed to engineer a facade of intelligence? Could we really just be grasping at AI?
If that's really the case, how can we explain all the advances we've made in just the past few decades?
"I'd like to call this 'enveloping the world'," explains Floridi. "The best thing to do is introduce it with an analogy. You have several robots in your kitchen. They're called the fridge, dishwasher, and so on. The dishwasher does exactly what a robot does in an industrial building. You transform a whole environment into something that is friendly to the very stupid little robot inside. Nobody in his right mind - certainly not me - would wash the dishes the way a dishwasher does it."
"The mechanical elements of the dishwasher are your agent; they're quite stupid. On the other side, you have a complex environment. Instead of trying to make the agent adapt to the environment, you make the environment friendly to the stupid agent. We are enveloping the world to make sure that the agent - which is stupid - can work there."
"The idea," he continues, "is that we are transforming our environment into an AI-friendly environment. Interacting with the computer has again become something you do with your hands, as it was in the 70s; the computer is around you; you are connected 24-7."
In essence, according to Floridi, we've made very few advances where cognitive computing is concerned. Rather, we've managed to build machines - or agents - that are very good at pretending to be intelligent; we then wrap these machines in an environment (ours) that's friendly to how they function. It's all algorithms.
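To make that point concrete, here's a minimal sketch - my own toy example, not anything Floridi describes - of a program that appears to "hold a conversation" through nothing more than keyword lookup. There's no understanding anywhere in it; it just matches strings:

```python
# A toy "conversational" agent: it looks responsive, but it's only
# keyword matching against a fixed table of canned replies.

RESPONSES = {
    "hello": "Hi there! How are you today?",
    "weather": "I love sunny days, don't you?",
    "name": "I'm ChatBot. Nice to meet you!",
}

DEFAULT = "That's interesting. Tell me more."

def reply(message: str) -> str:
    """Return the canned response for the first matching keyword."""
    lowered = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in lowered:
            return response
    # No keyword matched: fall back to a vague, all-purpose reply.
    return DEFAULT
```

The fallback line is the tell: when the agent has no idea what you said, it deflects with something generic, and the human on the other end does the work of making the exchange feel intelligent. That's the "enveloping" at a small scale - the environment (us) adapts to the stupid agent.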
That's a bit of a letdown, isn't it?
Perhaps the problem here doesn't lie with the advances we've made. Rather, it may well be that we simply need to re-frame how we define intelligence. After all, intelligence as we're considering it here is a rather broad and human concept, isn't it? Our 'smart' machines may well be nothing more than algorithms and equations, but how different is that from the electrical and chemical impulses that drive human behavior?
I feel as though we've made more advances in the field of cognitive computing than Floridi may be willing to admit. Still, there's definite merit to what he says. We've not achieved true machine intelligence - not by human standards, anyway - and we may not do so for some time. As such, we're not going to be seeing anything too advanced for a while yet...though I suspect we may indeed be very close to machines that can truly think on their own. Whether or not those machines will appear as we'd expect them to is another matter altogether.