ViGuAR (VIsually GUided Adaptive Robot)

An important goal of learning is to improve the efficiency of an organism's interactions with its environment, thereby increasing its chances of survival. In the SyNAPSE project we integrate recent advances in neuromorphic engineering, computational modeling, and robotics to design robotic agents capable of interacting and learning in a natural environment in real time.

The purpose of this project was to create an animat that could replicate a basic skill of biological intelligence: learning to approach positively rewarded objects while avoiding negatively rewarded ones. In this experiment, the attractiveness of an object is based on a supervised signal (external reward), but it is easy to envision situations where the reward is associated with environmental signals (e.g., availability of power to recharge the robot's battery as a positive reward, or contact with a bump sensor as a negative reward).

As a first step in the creation of an animat that could demonstrate visually guided adaptive behavior, we created an artificial nervous system, MoNETA, based on Cog Ex Machina (or Cog). Cog, built by HP principal investigator Greg Snider, is a neural modeling operating system that lets neural designers interact with the underlying hardware to perform neuromorphic computation. Cog abstracts the underlying storage hardware and allocates processing resources as required by computational algorithms based on CPU/GPU availability. It exposes a programming interface that enforces synchronous parallel processing of neural data encoded as multidimensional arrays (tensors). In this implementation, Cog allows the design of complex brain systems that control an iRobot Create.

The robot, which includes some of MoNETA's basic features, explores a world of colored objects. It navigates toward an object if it perceives its color as attractive, based on a reward value associated with that color in the animat's past interaction history. The animat also learns the locations of objects it has visited in order to avoid these locations during future exploration of its world.

This behavior was simulated using a Cog 2.0-based brain that communicated over a WiFi network with a netbook attached to the robot's serial port (see figure below).

Figure: iRobot simulation environment

The neuromorphic architecture of the VIsually GUided Adaptive Robot (ViGuAR) brain (figure below) is designed to support visually guided adaptive navigation.

Figure: The ViGuAR brain

In a simplified version of the world consisting of red and green objects of fixed size (Fig. 1), the Color Detection System converts the RGB input it receives from the web camera into chromatic features, redness and greenness, for each location.
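As a rough illustration of this stage (shown here in Python/NumPy rather than Cog), the sketch below computes opponent-style redness and greenness values from an RGB frame; the particular normalization is an assumption for illustration, not the actual Cog code.

```python
import numpy as np

def chromatic_features(rgb):
    """Convert an RGB image (H x W x 3, values in [0, 1]) into per-pixel
    redness and greenness features. The opponent-style normalization used
    here is an illustrative assumption, not the actual Cog implementation."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = r + g + b + 1e-6          # avoid division by zero
    redness = np.clip((r - (g + b) / 2) / intensity, 0.0, 1.0)
    greenness = np.clip((g - (r + b) / 2) / intensity, 0.0, 1.0)
    return redness, greenness
```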

The Color Attractiveness Analysis Module computes the attractiveness of each location based on plastic synaptic weights associated with the chromatic features redness and greenness, producing a measure of attractiveness for each spatial location. The synapses associated with the chromatic features are adjusted in response to a signal from the Reward System.
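A minimal sketch of such an attractiveness computation, assuming a simple linear combination of the chromatic features with the plastic weights (the linear form is an assumption, not necessarily the Cog model):

```python
def attractiveness(redness, greenness, weights):
    """Per-location attractiveness as a weighted sum of chromatic features.
    `weights` holds the plastic synaptic weights ('red', 'green') adjusted
    by the Reward System; the linear form is an illustrative assumption."""
    return weights['red'] * redness + weights['green'] * greenness
```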

The Reward System receives an input from the robot's bumper sensors when the robot contacts an object and produces a teaching signal for the Color Attractiveness Analysis Module to associate the chromatic features of the contacted object with the reward.
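A sketch of one way such a teaching signal could adjust the chromatic weights; the actual ViGuAR learning rule is not specified in this description, so the reward-modulated update below is only illustrative:

```python
def update_color_weights(weights, features, reward, learning_rate=0.1):
    """Adjust the chromatic-feature weights when the bumper fires.

    weights  -- dict with 'red' and 'green' synaptic weights
    features -- (redness, greenness) of the contacted object
    reward   -- +1 for a positive reward, -1 for a negative one

    A simple reward-modulated update used here for illustration only.
    """
    redness, greenness = features
    weights['red'] += learning_rate * reward * redness
    weights['green'] += learning_rate * reward * greenness
    return weights
```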

The Boundary Contour System converts the color signal associated with each spatial location into a boundary signal. In this simplified version of the ViGuAR brain, discontinuities in the color features are recognized as object boundaries, which are sent to the Goal Selection System.
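A minimal sketch of boundary detection by color discontinuity on a one-dimensional retinal signal (the threshold value is an illustrative assumption):

```python
import numpy as np

def boundaries_from_color(color_signal, threshold=0.2):
    """Mark object boundaries at discontinuities in a 1D color signal.

    color_signal -- 1D array of a chromatic feature across retinal positions
    Returns a boolean array that is True where the feature jumps by more
    than `threshold` between neighboring locations.
    """
    jumps = np.abs(np.diff(color_signal)) > threshold
    return np.concatenate([jumps, [False]])   # pad back to original length
```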

The Feature Contour System receives the attractiveness signal for each spatial location. These signals are gated by signals from the Visited Object Map, which block attractiveness at locations belonging to objects the animat has already visited.
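The gating can be pictured as a simple multiplicative mask, assuming a binary visited-location signal (an illustrative simplification):

```python
def gate_attractiveness(attract, visited_mask):
    """Suppress attractiveness at retinal locations that project onto
    already-visited objects. `visited_mask` is 1 where the Visited Object
    Map marks a location as visited, 0 elsewhere; multiplicative gating
    is assumed here for illustration."""
    return attract * (1.0 - visited_mask)
```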

The Goal Selection System analyzes attractiveness at each spatial location in order to find the most attractive goal in view. It uses the boundary signal to evaluate the distance to a target and generates an appropriate motion signal. Alternatively, this module may decide to continue collecting attractiveness data by spinning the robot to sample attractiveness from its surroundings. The Goal Selection System integrates attractiveness data from multiple views in order to select an optimal goal and generate the corresponding motor command.
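The sketch below illustrates one possible goal-selection step under these assumptions: objects have a known fixed size, so the angular width between the boundaries flanking the most attractive location yields a distance estimate. All constants (field of view, object size, threshold) are hypothetical.

```python
import numpy as np

def select_goal(attract, boundaries, fov_deg=60.0, object_size_m=0.2,
                attract_threshold=0.1):
    """Pick the most attractive retinal location, or request another view.

    attract    -- 1D gated attractiveness across retinal positions
    boundaries -- boolean 1D array marking object edges at those positions
    Returns ('spin', None) when nothing is attractive enough, otherwise
    ('approach', (heading_deg, distance_m)).
    """
    if attract.max() < attract_threshold:
        return 'spin', None                          # keep scanning the surroundings
    goal = int(np.argmax(attract))
    n = attract.size
    heading_deg = (goal / (n - 1) - 0.5) * fov_deg   # offset from gaze direction
    edges = np.flatnonzero(boundaries)
    left = edges[edges <= goal]
    right = edges[edges > goal]
    if left.size and right.size:
        width_deg = (right[0] - left[-1]) / (n - 1) * fov_deg
        # Known object size + angular width -> approximate range to the target.
        distance_m = object_size_m / (2 * np.tan(np.radians(width_deg) / 2))
    else:
        distance_m = None                            # edges not in view
    return 'approach', (heading_deg, distance_m)
```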

These commands are controlled by the Time Motor Control unit, which maintains and terminates the appropriate motor activity (Motor Output) on the robot's effectors.

The Movement Storage Unit updates the animat's movement vector upon completion of each movement. Thus, the location of the robot with respect to its initial position is continuously maintained in short-term memory.
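A simple dead-reckoning sketch of such a movement store, assuming each movement consists of a turn followed by a straight drive (an illustrative simplification):

```python
import numpy as np

class MovementStorage:
    """Accumulate completed movements into a running position estimate
    relative to the robot's starting pose (simple dead reckoning; the
    actual ViGuAR unit may differ in detail)."""

    def __init__(self):
        self.position = np.zeros(2)   # (x, y) in meters, start at origin
        self.heading = 0.0            # radians, relative to initial orientation

    def record(self, distance_m, turn_rad):
        """Call after each completed movement: turn, then drive forward."""
        self.heading += turn_rad
        self.position += distance_m * np.array([np.cos(self.heading),
                                                np.sin(self.heading)])
        return self.position
```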

Upon contact with an object, the animat's position is used to place the object's location on the map internally maintained by the Visited Object Map module. Signals from the Visited Object Map module are sent back to the Feature Contour System to produce the location attractiveness used by the Goal Selection System.
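A possible sketch of the visited-object map update on bumper contact; the grid resolution, map extent, and reach distance are illustrative assumptions:

```python
import numpy as np

class VisitedObjectMap:
    """2D occupancy-style map of objects the animat has already contacted.
    Cell size and map extent are illustrative assumptions."""

    def __init__(self, size_m=5.0, cell_m=0.1):
        n = int(size_m / cell_m)
        self.grid = np.zeros((n, n), dtype=bool)
        self.cell_m = cell_m
        self.origin = n // 2          # robot starts at the map center

    def mark_contact(self, robot_xy, heading_rad, reach_m=0.2):
        """On bumper contact, mark the cell just ahead of the robot as a
        visited object location."""
        obj = robot_xy + reach_m * np.array([np.cos(heading_rad),
                                             np.sin(heading_rad)])
        i = self.origin + int(round(obj[1] / self.cell_m))
        j = self.origin + int(round(obj[0] / self.cell_m))
        if 0 <= i < self.grid.shape[0] and 0 <= j < self.grid.shape[1]:
            self.grid[i, j] = True
```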

Cog-based Implementation of ViGuAR Brain

The high-level ViGuAR architecture (Fig. 2) was implemented as the Cog-based brain model shown in the figure below. Two independent pathways produce boundary and color attractiveness information, which is integrated by the Target Selection Module. Because the robot navigates in a horizontal plane, the model reduces the two-dimensional retinomorphic RGB input to a single dimension. The resulting one-dimensional neural activity is centered on the gaze direction in a robocentric coordinate frame.
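The dimensionality reduction can be pictured as vertical pooling of each retinomorphic feature map; averaging over rows is an illustrative choice, not necessarily the pooling used in the Cog model:

```python
def collapse_to_1d(feature_2d):
    """Reduce a 2D retinomorphic feature map (H x W) to a 1D signal over
    horizontal retinal positions by averaging over rows. Any vertical
    pooling would serve the same purpose, since the robot only needs to
    navigate in the horizontal plane."""
    return feature_2d.mean(axis=0)
```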

ViGuAR maintains a two-dimensional spatial memory map. An active entry in this map corresponds to a spatial location learned as belonging to an object. This map is projected onto the robot's 1D retina to gate the visual signal received in the direction of the robot's head orientation.
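A sketch of how such a projection from the 2D map onto the 1D retina could work, assuming a simple angular mapping within a fixed field of view (the geometry and parameters are illustrative, not the Cog implementation):

```python
import numpy as np

def project_map_to_retina(grid, cell_m, origin, robot_xy, heading_rad,
                          n_retina=64, fov_deg=60.0):
    """Project visited-object map cells into a 1D retinal gating mask.

    Each active map cell is converted to an angle relative to the robot's
    heading; cells falling inside the field of view set the corresponding
    retinal position to 1.
    """
    mask = np.zeros(n_retina)
    half_fov = np.radians(fov_deg) / 2
    for i, j in zip(*np.nonzero(grid)):
        cell_xy = np.array([(j - origin) * cell_m, (i - origin) * cell_m])
        dx, dy = cell_xy - robot_xy
        angle = np.arctan2(dy, dx) - heading_rad
        angle = (angle + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi]
        if abs(angle) <= half_fov:
            k = int(round((angle / half_fov + 1) / 2 * (n_retina - 1)))
            mask[k] = 1.0
    return mask
```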

Figure: The Cog Ex Machina implementation of the ViGuAR brain


ViGuAR searches for an attractive object by analyzing the attractiveness of particular spatial locations, independently of object boundary determination. Once an attractive location is found, the robot orients toward it and integrates attractiveness with the object boundaries determined by the independent Boundary Contour System (BCS). This allows the Goal Selection module to determine the distance the robot needs to travel to reach the target.

This division of labor is computationally efficient for selecting goals and evaluating distances. Its role can be compared with the role attention plays in biological visual systems. As in biological systems, certain locations are primed as not worth attending to. In ViGuAR, this is done by projecting the object map onto the robot's retina. This projection blocks the visual signal from locations the iRobot has already visited, thereby preventing the robot from returning to them.

Learning occurs upon contact with an object. On contact, the synaptic weights that map color features to color attractiveness are updated according to an external reward signal. The reward signal changes the magnitude of the synaptic weights associated with the color features of the object approached by the robot, in a positive or negative direction.

References

Livitz G., Versace M., Gorchetchnikov A., Vasilkoski Z., Ames H., Chandler B., Leveille J., and Mingolla E. (2011) Scalable adaptive brain-like systems. The Neuromorphic Engineer, DOI: 10.2417/1201101.003500, February 2011.

Livitz G., Ames H., Chandler B., Gorchetchnikov A., Leveille J., Versace M., and Mingolla E. (2011) Visually-Guided Adaptive Robotic Agent (ViGuAR). Submitted to the International Conference on Cognitive and Neural Systems (ICCNS) 2011, Boston, MA, USA.

Livitz G., Ames H., Chandler B., Gorchetchnikov A., Leveille J., Vasilkoski Z., Versace M., Mingolla E., Snider G., Amerson R., Carter D., Abdalla H., and Qureshi M.S. (2011) Visually-Guided Adaptive Robot (ViGuAR). Submitted to the International Joint Conference on Neural Networks (IJCNN) 2011, San Jose, CA, USA.

Galbraith B., Versace M., and Chandler B. (2011) Asimov: Middleware for Modeling the Brain on the iRobot Create. Submitted to PyCon, Atlanta, March 11-13, 2011.

Neuromorphics Lab team working on this project: Gennady Livitz, Anatoli Gorchetchnikov, Jasmin Leveille, Heather Ames, Ben Chandler, Ennio Mingolla, Max Versace
