Neural Plasticity

Biological neural networks must balance the opposing properties of plasticity and stability, which are thought to be the fundamental underpinnings of learning and memory. On the one hand, a system that learns too readily may forget too easily; on the other, one that preserves its memories perfectly is necessarily limited in what it can learn. This lab project investigates the mathematical principles behind these properties in large networks, particularly with respect to behavioral tasks. That is, we would like to discover how this balance is reached using purely local information to perform complex, global functions such as navigation or sound recognition and production.
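As an illustrative sketch of this plasticity–stability tension (not the lab's actual model), consider Oja's rule, a classic local learning rule: pure Hebbian growth alone drives weights to diverge, while Oja's added decay term, computed only from the neuron's own output, keeps the weight vector bounded without any global supervision. All parameter values below (learning rate, input dimension, step count) are arbitrary choices for the demonstration.

```python
import math
import random

def oja_update(w, x, eta=0.05):
    """One step of Oja's rule: Hebbian growth (eta * y * x) is balanced
    by a purely local decay term (eta * y * y * w) that prevents the
    weights from growing without bound."""
    y = sum(wi * xi for wi, xi in zip(w, x))  # neuron output
    return [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]

random.seed(0)
w = [random.uniform(-0.1, 0.1) for _ in range(4)]
for _ in range(2000):
    x = [random.gauss(0, 1) for _ in range(4)]
    x[0] += 2 * random.gauss(0, 1)  # one input carries extra variance
    w = oja_update(w, x)

# Despite the Hebbian term, the weight norm stabilizes near 1,
# so the neuron keeps learning the input statistics without runaway growth.
norm = math.sqrt(sum(wi * wi for wi in w))
print(round(norm, 1))
```

The key point is that stability here emerges from local information only: each synapse sees just its own input, the neuron's output, and its own weight.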

A related problem is that of network homeostasis, or the maintenance of the desirable behaviors of the network as a whole. A particularly useful property is the ability to maintain function in the face of damage; biological networks appear to handle this case with ease, and by understanding how this can be done, we hope to provide insights toward the production of damage-resilient hardware.
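A minimal sketch of one such homeostatic mechanism, assuming a simple synaptic-scaling rule (an illustration, not the lab's specific method): a neuron multiplicatively rescales all of its incoming weights to steer its firing rate back toward a set point, using only its own rate and target. The toy neuron, constant inputs, and the `alpha` gain below are all invented for the example; "damage" is modeled by silencing half the inputs.

```python
def scale_weights(w, rate, target_rate, alpha=0.1):
    """Multiplicative synaptic scaling: nudge all incoming weights by a
    common factor that moves the neuron's rate toward its set point.
    Purely local: uses only the neuron's own rate and its target."""
    factor = 1 + alpha * (target_rate - rate) / target_rate
    return [wi * factor for wi in w]

# Toy neuron: its rate is the weighted sum of constant unit inputs.
w = [0.25] * 8
target = 2.0
inputs = [1.0] * 8

for i in (0, 1, 2, 3):   # "damage": half the inputs go silent
    inputs[i] = 0.0

for _ in range(100):
    rate = sum(wi * xi for wi, xi in zip(w, inputs))
    w = scale_weights(w, rate, target)

rate = sum(wi * xi for wi, xi in zip(w, inputs))
print(round(rate, 2))  # → 2.0, the set point is restored despite the damage
```

The same local-control idea is what makes this style of rule attractive for hardware: no unit needs global knowledge of which other units have failed.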

Figure. An example of plasticity in the simulated MoNETA visual cortex: the development of oriented cortical cells (left, with each color corresponding to a group of cells that code for a particular orientation) and ocular dominance columns (right, with gray values corresponding to the preferential response of cells to either the left or right eye). These representations tend to drift over time with repeated exposure to environmental input. This project investigates neural mechanisms to overcome this issue.

Eventually, the goal is to embed networks that exhibit the "right cocktail" of plasticity and stability in large-scale systems such as MoNETA, where the discovered laws can be used to build plastic laminar cortical circuits able to dynamically self-organize their activity.

The applications of this basic research go beyond robotics and extend to the upcoming massively parallel multicore processors that are starting to reach the mass market. One example of these next-generation, highly parallel processors is being developed by Adapteva, Inc. We believe this and similar hardware will benefit greatly from biologically inspired algorithms, which can exploit local information to produce globally effective behaviors. One goal of our project is to discover local rules that would allow chips like this to maintain functionality should individual cores begin to fail.

NL team working on this project: Chris Johnson, Tim Gardner, Max Versace

Collaborators

The Neuromorphics Lab is highly collaborative with connections across both academia and industry.