Neuromorphic Reaching and Grasping with an iRobot Create

By Max Versace | March 2, 2014

The primary objective of this project is to develop an adaptive robot that a human user, potentially paralyzed, can control via an EEG-based brain-machine interface (BMI). Using a Virtual Environment (VE) is standard practice in the Neuromorphics Lab: it allows us to experiment ad libitum in software and get the model right before dealing with the physical limitations of robots. A description of prior posts on the topic can be found here. The videos below show how the user can control a virtual replica of the iRobot Create to look for, reach, and grasp an object of interest.
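To make that software-first workflow concrete, here is a minimal sketch of how a single controller can drive either the virtual replica or the physical robot behind a common interface. All class and method names are illustrative, not the lab's actual code:

```python
# Hypothetical sketch of the VE-first workflow: the same controller code
# targets either a simulated Create in the virtual environment or the
# physical robot, so models can be debugged entirely in software first.

from abc import ABC, abstractmethod

class CreateBackend(ABC):
    """Common interface for the virtual and physical iRobot Create."""

    @abstractmethod
    def drive(self, velocity_mm_s: int, radius_mm: int) -> None: ...

    @abstractmethod
    def pan_camera(self, degrees: float) -> None: ...

class VirtualCreate(CreateBackend):
    """Stand-in for the robot inside the Virtual Environment."""

    def drive(self, velocity_mm_s, radius_mm):
        print(f"[VE] drive v={velocity_mm_s} mm/s, r={radius_mm} mm")

    def pan_camera(self, degrees):
        print(f"[VE] pan camera {degrees} deg")

class PhysicalCreate(CreateBackend):
    """Same interface, backed by a serial link to the real robot."""

    def __init__(self, port="/dev/ttyUSB0"):  # port name is an assumption
        self.port = port

    def drive(self, velocity_mm_s, radius_mm):
        ...  # send Open Interface bytes over serial

    def pan_camera(self, degrees):
        ...  # command the camera servo

# The controller is written once, against the interface:
def nudge_forward(robot: CreateBackend):
    robot.drive(velocity_mm_s=100, radius_mm=32767)  # 32767 = "straight"

nudge_forward(VirtualCreate())  # swap in PhysicalCreate() on hardware
```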

The robotic agent is an iRobot Create enhanced with a rotatable camera and a robotic arm. Using EEG signals, subjects will be tasked with navigating the robot to a desired location in a room, orienting the camera to fixate on a target object, and picking up the attended object with the robotic arm.

This complex task is broken into two major components: 1) EEG-based robotic navigation / object selection and 2) biologically inspired, autonomous and goal-directed robotic movement and arm control.
The video below shows the EEG interface. Videos courtesy of Byron Galbraith (nl.bu.edu).

[Video: EEG-based control interface]
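As a rough illustration of how such an interface could steer the robot, the sketch below maps a small vocabulary of decoded EEG commands onto iRobot Create Open Interface drive packets. The opcodes (128 Start, 132 Full, 137 Drive) are from the published Create Open Interface specification; the serial port name, speed, and command set are assumptions, not the project's actual pipeline:

```python
# Minimal sketch: translate a decoded EEG command into an iRobot Create
# Open Interface drive packet sent over serial (pyserial).

import struct
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # assumed serial adapter
SPEED = 150             # mm/s

def drive_packet(velocity, radius):
    # Opcode 137 (Drive) takes signed 16-bit big-endian velocity (mm/s)
    # and turning radius (mm).
    return bytes([137]) + struct.pack(">hh", velocity, radius)

COMMANDS = {
    "forward": drive_packet(SPEED, 32767),  # 32767 = special "straight" radius
    "left":    drive_packet(SPEED, 1),      # 1  = spin counter-clockwise
    "right":   drive_packet(SPEED, -1),     # -1 = spin clockwise
    "stop":    drive_packet(0, 32767),
}

ser = serial.Serial(PORT, baudrate=57600, timeout=1)
ser.write(bytes([128, 132]))  # Start, then Full mode

def execute(decoded_command):
    """Send the drive packet for one command decoded from the EEG interface."""
    ser.write(COMMANDS[decoded_command])

# e.g., each time the BMI classifier emits a selection:
execute("forward")
```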

The video below shows the behavior of the iRobot Create equipped with a robotic arm as it looks for, reaches, and grasps the requested object.

[Video: iRobot Create reaching and grasping the requested object]
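For intuition, the behavior in the video can be thought of as a simple search / approach / grasp loop. The sketch below is a hypothetical rendering of that loop; the perception and arm calls (find_object, grasp, and the detection attributes) are placeholders, not the lab's biologically inspired controllers:

```python
# Illustrative state machine for the look-for / reach / grasp behavior.
# `robot` follows the CreateBackend interface sketched earlier; `camera`
# and `arm` are hypothetical vision and arm controllers.

GRASP_RANGE_MM = 120  # assumed reach of the arm

def seek_and_grasp(robot, camera, arm, target):
    state = "SEARCH"
    while state != "DONE":
        if state == "SEARCH":
            # Spin in place until the target appears in the camera image.
            detection = camera.find_object(target)
            if detection is None:
                robot.drive(velocity_mm_s=80, radius_mm=1)  # slow CCW spin
            else:
                state = "APPROACH"
        elif state == "APPROACH":
            detection = camera.find_object(target)
            if detection is None:
                state = "SEARCH"  # lost the object; search again
            elif detection.distance_mm > GRASP_RANGE_MM:
                # Steer toward the object, correcting for its bearing.
                robot.drive(velocity_mm_s=120,
                            radius_mm=int(detection.turn_radius_mm))
            else:
                robot.drive(velocity_mm_s=0, radius_mm=32767)  # stop
                state = "GRASP"
        elif state == "GRASP":
            arm.grasp()  # close the gripper on the target
            state = "DONE"
```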

Collaborators

The Neuromorphics Lab is highly collaborative, with connections across both academia and industry.