The Neuromorphics Lab brings Artificial Brains to the Science Museum, Boston

By Max Versace | September 3, 2014

Ben Lawson (Neuromorphics Lab, UROP Program at Boston University) and Tim Seemann (formerly of the Neuromorphics Lab, now at Neurala) worked together during the summer of 2014 to devise a new algorithmic implementation for the permanent Mars Rover exhibit that Neurala and the Neuromorphics Lab manage at the Science Museum, Boston. The UROP project, titled "Are Virtual Tests with Object Detection Algorithms Practical for Real World Applications?", focuses on improving the robot's ability to autonomously detect objects of interest by vision.

Object detection is a developing topic within the field of computer vision. The human eye can detect objects easily from different angles, poses, and lighting conditions; while humans perform visual recognition tasks with ease, robots still lag behind. The project centers on a robot deployed in a permanent exhibit at the Science Museum, Boston. The visitor drives a Mars Rover to recover the black box of another, stranded Rover, and then samples the composition of some rocks. The robot helps users by "perceiving" two labels, one on the stranded rover and one on the rock, each of which prompts the user to execute an action.

To improve on the performance of a prior visual recognition algorithm, we implemented and tested three algorithms: Haar-like Cascade Classifiers, Speeded Up Robust Feature (SURF) detection and extraction with Fast Library for Approximate Nearest Neighbors (FLANN) matching, and a custom version of Dual-Color Detection and Localization. To assess the usefulness of each algorithm, we developed two performance testing protocols, one based on virtual datasets and another on a real-world dataset, measuring positive hit rates and false alarm rates. The results suggest that virtual testing returns more favorable results than real-world testing. To further characterize each classifier, we tested with datasets containing complex objects (e.g., magazines), medium-complexity objects (e.g., faces), and simple objects (e.g., exhibit logos).

Our Dual-Color Detection and Localization works best for our goal, identifying a simple, dichromatic logo: once the algorithm detects the target color, either blue or purple, it checks whether that detection lies within a green box. SURF detection and extraction with FLANN matching works best for complex, specific objects, while Haar-like Cascade Classifiers work best for objects of general complexity. Future goals include using this information to build more complex programs that can classify a wide variety of objects.

[Figure: classifier comparison]
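To make the dual-color idea concrete, here is a minimal sketch of how such a check could be written with OpenCV: threshold the frame for the target hue (blue or purple) and for green, then accept a detection only if the target-colored blob falls inside a green bounding box. The HSV ranges, area thresholds, and function names below are illustrative assumptions, not the exhibit's tuned values or actual code.

```python
# Sketch of a dual-color detection check (assumes OpenCV 4.x).
import cv2
import numpy as np

# Hypothetical HSV ranges; the real exhibit colors would need calibration.
TARGET_RANGES = {
    "blue":   (np.array([100, 120, 70]), np.array([130, 255, 255])),
    "purple": (np.array([130, 120, 70]), np.array([160, 255, 255])),
}
GREEN_RANGE = (np.array([40, 80, 70]), np.array([85, 255, 255]))


def detect_logo(frame_bgr, target="blue"):
    """Return (x, y, w, h) of a target-colored blob that sits inside a
    green box, or None if no such blob is found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    lo, hi = TARGET_RANGES[target]
    target_mask = cv2.inRange(hsv, lo, hi)
    green_mask = cv2.inRange(hsv, *GREEN_RANGE)

    target_cnts, _ = cv2.findContours(target_mask, cv2.RETR_EXTERNAL,
                                      cv2.CHAIN_APPROX_SIMPLE)
    green_cnts, _ = cv2.findContours(green_mask, cv2.RETR_EXTERNAL,
                                     cv2.CHAIN_APPROX_SIMPLE)
    # Keep only reasonably large green regions as candidate "boxes".
    green_boxes = [cv2.boundingRect(c) for c in green_cnts
                   if cv2.contourArea(c) > 500]

    for c in target_cnts:
        if cv2.contourArea(c) < 100:  # ignore speckle
            continue
        x, y, w, h = cv2.boundingRect(c)
        cx, cy = x + w // 2, y + h // 2
        # Accept only if the blob's center lies inside some green box.
        for gx, gy, gw, gh in green_boxes:
            if gx <= cx <= gx + gw and gy <= cy <= gy + gh:
                return (x, y, w, h)
    return None
```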
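For comparison, a SURF-plus-FLANN pipeline in OpenCV looks roughly like the sketch below: extract keypoints and descriptors from a reference image and a camera frame, match them with FLANN's KD-tree index, and keep only distinctive matches via the ratio test. SURF lives in the opencv-contrib (non-free) build; the file names, Hessian threshold, and match cutoff are placeholders, not the project's settings.

```python
# Sketch of SURF detection/extraction with FLANN matching (opencv-contrib).
import cv2

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

# Placeholder file names for a reference image and a camera frame.
template = cv2.imread("exhibit_logo.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

kp1, des1 = surf.detectAndCompute(template, None)
kp2, des2 = surf.detectAndCompute(scene, None)

# FLANN with KD-trees, the usual choice for SURF's float descriptors.
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
matches = flann.knnMatch(des1, des2, k=2)

# Lowe's ratio test keeps only matches clearly better than the runner-up.
good = [pair[0] for pair in matches
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance]
print(f"{len(good)} good matches (declare a detection above some threshold)")
```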
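Finally, a Haar-like cascade classifier is applied with a sliding multi-scale window once a cascade has been trained. The sketch below uses OpenCV's stock frontal-face cascade purely as a stand-in; a cascade trained on the exhibit's own objects, and tuned scale/neighbor parameters, would replace it.

```python
# Sketch of Haar-like cascade detection with a stand-in cascade file.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("camera_frame.png")  # placeholder frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

detections = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                      minNeighbors=5, minSize=(30, 30))
for (x, y, w, h) in detections:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```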

Collaborators

The Neuromorphics Lab is highly collaborative with connections across both academia and industry.