News Article

From Top to Bottom: Getting Robots to Perceive More Like Humans
Date: Nov 01, 2013
Source: Company Data

Featured firm in this article: Aptima Inc of Woburn, MA




Where are you? It's a simple question for a human to answer, but passing that knowledge on to a robot has long been a complex perception task.
Cognitive Patterns aims to take what researchers already know about how humans perceive their environment and exploit that knowledge when building autonomous systems. The prototype software, developed by Massachusetts-headquartered Aptima Inc., is built on the open-source Robot Operating System, or ROS, to enable this kind of processing on any type of platform.
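The article doesn't describe Aptima's actual ROS interfaces. As a rough sketch of how a perception module typically plugs into ROS, a node might subscribe to camera images and publish what it perceives; the topic names, message types, and classify() stub below are assumptions for illustration, not Aptima's design.

```python
# Minimal ROS (rospy) node sketch. Topic names, message types, and the
# classify() stub are illustrative assumptions, not Aptima's actual interfaces.
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import String

def classify(image_msg):
    # Placeholder for whatever perception runs on each incoming frame.
    return "unknown object"

def on_image(image_msg):
    # Publish a label for every frame received from the camera topic.
    label_pub.publish(String(data=classify(image_msg)))

if __name__ == "__main__":
    rospy.init_node("perception_node")
    label_pub = rospy.Publisher("/perceived_labels", String, queue_size=10)
    rospy.Subscriber("/camera/image_raw", Image, on_image)
    rospy.spin()  # process frames until shutdown
```

Because ROS abstracts the hardware behind topics and messages, the same node can run in simulation or on a physical robot, which is the portability the article describes.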
"One of the things that's not particularly commonly known outside of cognitive science … is that humans perceive and really think about very selective aspects of the world and then build a whole big picture based on things we know," says Webb Stacy, a cognitive scientist and psychologist working for Aptima.
In essence, much of a human's sense of where they are comes from the brain rather than from the surroundings themselves. Past experiences feed expectations about, for instance, what objects are likely to be found in a room called an office or a cafeteria.
"You don't have to really pick up information about exactly what a table, phone or notebook looks like, you already kind of know those things are going to be in an office, and as a result the process of perceiving what's in an office or making

Aptima's Cognitive Patterns architecture enables robots to make sense of their environment, much like how humans do. This robot has Cognitive Patterns integrated onto it for a test. Photo courtesy Aptima.
sense of the setting is as much coming from preexisting knowledge as it is directly from the senses," he says.
This way of perceiving is called top-down, bottom-up processing. Machine perception, however, is typically driven by what Stacy terms bottom-up processing alone, where data from sensors are streamed through mathematical filters to work out what an object might be and where the robot is in relation to it.
Cognitive Patterns is revolutionary, says Stacy, because it gives a robot a knowledge base that it can combine with the visual data provided by its sensors to extrapolate information about its environment, in a fashion more akin to top-down, bottom-up processing.
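Aptima hasn't published how Cognitive Patterns actually fuses prior knowledge with sensor data, but the general idea of letting top-down expectations bias bottom-up detections can be shown with a toy calculation; the object names, priors, and detector scores below are invented for illustration.

```python
# Toy sketch of top-down, bottom-up fusion. All numbers and labels are invented;
# this is not Aptima's algorithm, just the general pattern the article describes.

# Top-down: what a robot "expects" to see in a room it believes is an office.
context_prior = {"desk": 0.5, "phone": 0.3, "stove": 0.05, "table": 0.15}

# Bottom-up: raw detector confidence for the object currently in view.
detector_score = {"desk": 0.4, "phone": 0.2, "stove": 0.35, "table": 0.05}

# Fuse: weight each hypothesis by prior expectation, then renormalize.
fused = {obj: context_prior[obj] * detector_score[obj] for obj in context_prior}
total = sum(fused.values())
fused = {obj: score / total for obj, score in fused.items()}

# The ambiguous detection now resolves toward objects plausible in an office.
print(max(fused, key=fused.get))  # -> "desk"
```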
Aptima secured funding for Cognitive Patterns through a Small Business Innovation Research grant from DARPA, and Stacy stresses that what the company is doing is practical and can be done without a multimillion-dollar budget.
The first phase of the project was limited to simulated models, whereas the second phase put the software on a robot.

"When you move from simulation to physical hardware, you're dealing with an awful lot of uncertainty," he says. "So it's one thing in the simulation to say, here are some features that get presented to the system for recognition. It's another thing on a real robot wandering around perceiving things."
However, Stacy says using ROS makes it easier to go from simulation to real-world application.
In phase two, Aptima integrated the cognitive aspects of its software with visual information provided by sensors built by iRobot.
To test the prototype, Aptima secured the use of a local Massachusetts middle school, so the company could test the principles of its software in a real-world environment.
For this second phase, Aptima's DARPA agent worked at the Army Research Laboratory. This allowed the company to integrate Cognitive Patterns with a robotic cognitive model the ARL had, called the Sub-Symbolic Robot Intelligent Controlling System, or SS-RICS. That program combines language-based learning with lower-level perception; Stacy says Cognitive Patterns sits somewhere between the two.
The system has an operator interface that Stacy says allows the user to communicate with the robot on a high level.
"The operator might say, ‘I don't have a map. Go find the cafeteria and see if there are kids there,'" he says. "Now that's a very high level of command, because for a machine there's all kinds of stuff that needs to figure it out there. … And one of the really interesting things we did here is we looked to see if we couldn't generate new knowledge whenever we encountered something that we didn't understand."
The operator can place augmented reality, or AR, tags on certain objects, and the tags act as a proxy for using computer vision to recognize those objects. The robot can then learn those features and apply them to future scenarios. For instance, the team did this once when a robot using Cognitive Patterns was in a room with a weapons cache, so the next time it entered a similar scenario, it would have a cognitive model on which to base its perceptions. The operator can also tell the robot to ignore certain surroundings or label special things in its environment, such as a student desk, which is a blend of a traditional school chair and a separate desk.
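The article doesn't detail how tag detections feed the robot's knowledge base. As a toy sketch of the general idea, a detected tag ID might resolve to an operator-assigned label and be remembered as something to expect in similar rooms later; the tag IDs, labels, and data structures below are illustrative assumptions.

```python
# Toy sketch of AR tags standing in for object recognition, with newly labeled
# objects folded into a knowledge base for later scenes. All values are invented.

known_tags = {7: "weapons cache", 12: "student desk"}   # operator-assigned labels
knowledge_base = {"cafeteria": {"table", "chair"}}      # room type -> expected objects

def observe_tag(tag_id, current_room):
    """Resolve a detected tag to a label and remember where it was seen."""
    label = known_tags.get(tag_id)
    if label is None:
        return None  # unknown tag: the operator would be asked to label it
    knowledge_base.setdefault(current_room, set()).add(label)
    return label

# First encounter: the robot learns that this kind of room can hold a weapons cache.
observe_tag(7, "classroom")

# Later, entering a similar room, it already expects such an object.
print(knowledge_base["classroom"])  # -> {'weapons cache'}
```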
This higher level of communication benefits the operator, says Stacy, because, aside from software developers, most operators won't care about the detailed features being extracted from the robot's camera system.
Finding New Uses
Now Aptima is working on another prototype as a follow-on to its DARPA contract: implementing Cognitive Patterns on prosthetics.
The company is working on an intelligent prosthetic under a contract with the Office of Naval Research and the Office of the Secretary of Defense, in which the prosthetic would be controlled by the neurosignatures of the user's brain, interfacing the limb with the user's mind.

"The arm's going to have sensors on it, and so it's going to be doing the same kind of thing the robot is doing with Cognitive Patterns, which is perceiving its environment, knowing the things it can reach for," he says.
Through collaboration with a prosthetics expert, Aptima is using its software to let the arm communicate with its user in a more natural way, such as through muscle twitches.
Perfecting Perception
The end goal of this work is to bridge the current disconnect over how robots can best perceive their environments: the machine vision community is pushing optimal mathematical models to perfect bottom-up processing, while the cognitive science community is seeking the best cognitive model to get a robot to think, says Stacy.
"We are starting to see the need to hook up with each other, and Cognitive Patterns really is the intersection between those two. … If that happens, we'll have robots that can really see and understand the world the way that humans do, and that's been elusive for a long time in robotics."