In a previous post, Byron Galbraith demonstrated a neural model for autonomous reaching and grasping in a virtual environment (VE). The primary objective of this project is to develop an adaptive robot that interacts with a human user, potentially a paralyzed patient, through an EEG-based brain-machine interface (BMI), in collaboration with the CELEST Neural Prosthesis Lab. Using a VE is standard practice in the Neuromorphics Lab: it allows us to experiment ad libitum in software and get the model right before dealing with the physical limitations of robots. The following video shows a success story of this approach.
Specifically, the agent is an iRobot Create enhanced with a rotatable camera and robotic arm. Using EEG signals, subjects will be tasked with navigating the robot to a desired location in a room, orienting the camera to fixate upon a target object, and picking up the attended object with the robotic arm.
This complex task is broken into two major components: 1) EEG-based robotic navigation / object selection and 2) biologically inspired, autonomous and goal-directed robotic movement and arm control.
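To make the second component concrete, here is a minimal sketch of direction-based reaching control in the spirit of the DIRECT model: rather than solving inverse kinematics explicitly, the controller repeatedly maps the spatial error between hand and target into joint rotations. The 2-link planar arm, link lengths, gain, and the closed-form Jacobian-transpose update are all illustrative assumptions for this sketch, not the lab's actual implementation (DIRECT learns the direction-to-rotation mapping rather than using an analytical Jacobian).

```python
import numpy as np

# Assumed link lengths for a hypothetical 2-link planar arm.
L1, L2 = 1.0, 1.0

def forward(theta):
    """End-effector (hand) position of the 2-link planar arm."""
    x = L1 * np.cos(theta[0]) + L2 * np.cos(theta[0] + theta[1])
    y = L1 * np.sin(theta[0]) + L2 * np.sin(theta[0] + theta[1])
    return np.array([x, y])

def jacobian(theta):
    """Analytical Jacobian of the hand position w.r.t. joint angles.
    (DIRECT would learn this mapping; the closed form keeps the
    sketch short.)"""
    s1, c1 = np.sin(theta[0]), np.cos(theta[0])
    s12, c12 = np.sin(theta[0] + theta[1]), np.cos(theta[0] + theta[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def reach(theta, target, gain=0.1, steps=300, tol=1e-3):
    """Iteratively rotate the joints in the direction that reduces
    the spatial error (Jacobian-transpose update)."""
    for _ in range(steps):
        err = target - forward(theta)
        if np.linalg.norm(err) < tol:
            break
        theta = theta + gain * jacobian(theta).T @ err
    return theta

# Example: reach from an arbitrary posture toward a target point.
theta_final = reach(np.array([0.3, 0.5]), np.array([1.2, 0.8]))
```

Because the update only needs the current error direction, the same loop keeps working if the target moves or the arm is perturbed mid-movement, which is one reason direction-based schemes like DIRECT are attractive for autonomous robotic control.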
This collaborative effort combines research in the BU Neuromorphics Lab and the Neural Prosthesis Lab, the latter of which develops algorithms for decoding 2D movements from EEG recordings of motor imagery. The project has significant potential for clinical applications: patients suffering from severe motor impairment could regain some agency through control of these mobile devices, improving their quality of life. Novel commercial applications for healthy subjects are also possible.
The video below shows the initial implementation of the neuromorphic algorithm controlling reaching in a virtual environment, based on the DIRECT model.
The video below shows the same algorithm ported to an actual iRobot Create with a 6-DOF robotic arm.
Videos courtesy of Byron Galbraith.