Seems like just about everyone is fascinated by robots lately. Never mind all the independent inventors and mad scientists tinkering with robotics in their basement, we've got organizations like NASA, Google, and Amazon all vying for a bigger piece of the pie - and that's just the tip of the iceberg. It's not really much of a surprise, then, that British inventor and industrialist James Dyson decided to get in on the new robotics trend.
Dyson - founder of the Dyson company and inventor of the Dual Cyclone bagless vacuum cleaner - this week founded the Dyson Robotics Laboratory, investing £5 million (around $8.34 million USD) into Imperial College London in the process. From the sounds of things, Dyson's looking to give AI development a considerable push forward; he's made it abundantly clear that his goal is to "create machines that see and think the way we do." The first step in doing so involves helping robots better 'see' their surroundings.
With that in mind, it's no surprise that he chose Imperial College as the site of his new robotics laboratory; the institution is known for having one of the best computer science departments in the country (perhaps even the world). Researchers at the lab will be working to help robots better sense their environment and identify objects in their immediate vicinity. In practical terms, this means a robot might be able to tell one type of food from another, or work out whether a particular item is trash.
The laboratory's focus thus isn't likely to be general-purpose intelligence - at least, not in the short term. Instead, according to Live Science, the institute will likely concentrate on 3D sensing - "a fairly well understood robotics technology that could realistically produce applications in the near term future." Hardware-wise, such technology is already well within our reach. The real challenge lies in developing software to harness it.
In particular, the Dyson Robotics Laboratory will be making use of a technology known as monocular visual SLAM - short for Simultaneous Localization And Mapping. The technique - pioneered by the lab's head researcher, Andrew Davison - uses a single camera to map out a robot's environment using light within the visible spectrum. It's worth noting that this is quite different from, say, the Kinect, which relies on infrared.
The reason is simple: visible light reveals information such as color and shadow, allowing for more effective identification and positioning of objects. As a robot moves around a room, it can note the shape, size, and location of objects, using this information to build a better 3D map. That's where the SLAM technique comes in: the robot finds distinctive elements of an image, then tracks how those elements shift as it moves.
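To make the idea concrete, here's a deliberately simplified toy sketch (not Davison's actual implementation) of the geometric core of monocular mapping: a single camera observes the bearing to the same feature from two different positions, and intersecting the two rays recovers the feature's location. All names and numbers below are illustrative assumptions; real visual SLAM works in 3D with image features and probabilistic filtering.

```python
import math

def triangulate(pose_a, bearing_a, pose_b, bearing_b):
    """Intersect two bearing rays (angles in radians, world frame)
    observed from two camera positions to recover a landmark's
    2D position - the essence of mapping from a moving camera."""
    ax, ay = pose_a
    bx, by = pose_b
    # Unit direction of each ray
    dax, day = math.cos(bearing_a), math.sin(bearing_a)
    dbx, dby = math.cos(bearing_b), math.sin(bearing_b)
    # Solve ax + t*dax = bx + s*dbx, ay + t*day = by + s*dby for t
    denom = dax * dby - day * dbx
    if abs(denom) < 1e-9:
        raise ValueError("rays are parallel; no depth information")
    t = ((bx - ax) * dby - (by - ay) * dbx) / denom
    return (ax + t * dax, ay + t * day)

# Toy check: a landmark at (4, 3), seen from two camera positions
landmark = (4.0, 3.0)
cam1, cam2 = (0.0, 0.0), (2.0, 0.0)
b1 = math.atan2(landmark[1] - cam1[1], landmark[0] - cam1[0])
b2 = math.atan2(landmark[1] - cam2[1], landmark[0] - cam2[0])
est = triangulate(cam1, b1, cam2, b2)
print(est)  # recovers approximately (4.0, 3.0)
```

Note that a single view gives only a direction, not a distance - the camera must move before depth can be recovered, which is why tracking features across frames is central to the technique.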
The single-camera setup, meanwhile, makes the whole system considerably more portable and affordable. Home robots could map their surroundings at very little extra cost, even adding facial recognition and object recognition to the mix. In short, they could soon "see" much the way we do.
The innovations introduced by the Dyson Robotics Laboratory aren't likely to lead to any incredible breakthroughs in general artificial intelligence over the next few years. That isn't to say what they're doing isn't still incredibly exciting, however. The human mind is a complex thing, after all; if we're going to get robots thinking like us, we're going to need to do it one step at a time. The first step very well could be to make them "see."