New visual system

We are currently redesigning most of the whole-brain system components, together with a new virtual environment that includes a powerful physics engine. The MoNETA 2.0 visual system will support both classification and navigation in real and virtual environments. The vision component of MoNETA supports navigation by identifying perceived objects and localizing them within the environment. A distinctive feature of MoNETA vision is that its decisions draw on many principles and inspirations from real animal visual systems: for example, the central area of the image perceived by MoNETA is magnified to reflect the higher density of receptors in the fovea.
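One simple way to picture foveal magnification is log-polar resampling, in which rings of sample points are spaced logarithmically in radius so the image center is sampled far more densely than the periphery. The sketch below is purely illustrative (the function name and parameters are not part of MoNETA), a minimal stand-in for the magnified central area described above.

```python
import numpy as np

def log_polar_sample(image, n_rings=32, n_wedges=64):
    """Resample a 2-D grayscale image onto a log-polar grid.

    Rings are spaced logarithmically in radius, so pixels near the
    center (the "fovea") are sampled much more densely than the
    periphery -- a crude stand-in for foveal magnification.
    """
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    # Logarithmic radii from ~1 pixel out to the image border.
    radii = np.exp(np.linspace(np.log(1.0), np.log(r_max), n_rings))
    angles = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)
    out = np.zeros((n_rings, n_wedges))
    for i, r in enumerate(radii):
        ys = np.clip(np.round(cy + r * np.sin(angles)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + r * np.cos(angles)).astype(int), 0, w - 1)
        out[i] = image[ys, xs]
    return out
```

Because the radii grow exponentially, roughly half of the output rings cover only the innermost fraction of the image, mimicking the receptor density gradient of a retina.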

Visual images are processed with simulated neurons whose properties resemble those of the primate color visual system, including long-, middle-, and short-wave cones. The output of the retinal cones is arranged into both chromatically and spatially opponent combinations, producing separate chromatic and achromatic channels. These channels provide input to the next layer of cortical cells, whose receptive fields self-organize through exposure to the virtual environment using purely unsupervised learning methods.
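The chromatic-opponent arrangement can be sketched as a linear recombination of L, M, and S cone responses into one achromatic and two chromatic channels. The RGB-to-LMS matrix below uses illustrative coefficients adapted from common published approximations; neither the matrix nor the function is MoNETA's actual front end.

```python
import numpy as np

# Approximate linear map from linear RGB to LMS cone responses.
# Coefficients are illustrative; exact values vary across the literature.
RGB_TO_LMS = np.array([[0.3811, 0.5783, 0.0402],
                       [0.1967, 0.7244, 0.0782],
                       [0.0241, 0.1288, 0.8444]])

def opponent_channels(rgb):
    """Split an RGB image of shape (H, W, 3) into one achromatic and
    two chromatic opponent channels:
      luminance   = L + M
      red-green   = L - M
      blue-yellow = S - (L + M) / 2
    """
    lms = rgb @ RGB_TO_LMS.T
    L, M, S = lms[..., 0], lms[..., 1], lms[..., 2]
    return L + M, L - M, S - (L + M) / 2.0
```

On an achromatic (gray) input the two chromatic channels stay near zero while the luminance channel carries the signal, which is the separation into chromatic and achromatic streams described above.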

Figure: The new virtual environment and visual system input.

Figure: Learning visual representation in the new MoNETA.
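As a minimal sketch of the kind of unsupervised receptive-field learning mentioned above, Oja's rule lets a single linear neuron's weight vector converge toward the principal component of its inputs. This is a classic textbook stand-in, not MoNETA's actual learning rule; the data below are synthetic.

```python
import numpy as np

def oja_update(w, x, lr=0.01):
    """One step of Oja's rule, an unsupervised Hebbian learning rule.
    The subtraction of y**2 * w keeps the weight vector bounded near
    unit length while it rotates toward the input's principal axis."""
    y = float(w @ x)
    return w + lr * y * (x - y * w)

rng = np.random.default_rng(0)
# Synthetic "inputs": 2-D samples with most variance along axis 0.
data = rng.normal(size=(5000, 2)) * np.array([3.0, 0.5])
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for x in data:
    w = oja_update(w, x)
# w should end up roughly unit-length, aligned with the high-variance axis.
```

Exposing many such neurons (with competition between them) to patches of the virtual environment is one standard route to self-organized receptive fields.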

MoNETA’s visual system will also make use of eye movements (both random saccades and top-down attentional signals) and of a binocular visual system, allowing depth perception and the recognition of 3-D objects.
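The basic binocular depth cue is disparity: a nearby feature shifts horizontally between the left and right views, and the size of that shift encodes depth. The block-matching sketch below is illustrative only (the function and its parameters are assumptions, not MoNETA's binocular mechanism), showing disparity recovery along a single scanline.

```python
import numpy as np

def disparity_1d(left, right, max_shift=8, win=5):
    """Estimate horizontal disparity along one scanline by block
    matching: for each window in the left scanline, find the shift of
    the right scanline that minimizes the sum of squared differences.
    Larger disparity corresponds to nearer surfaces."""
    n = len(left)
    disp = np.zeros(n, dtype=int)
    half = win // 2
    for i in range(half, n - half):
        patch = left[i - half:i + half + 1]
        best, best_err = 0, np.inf
        for d in range(0, max_shift + 1):
            j = i - d  # candidate matching position in the right view
            if j - half < 0:
                break
            err = np.sum((patch - right[j - half:j + half + 1]) ** 2)
            if err < best_err:
                best, best_err = d, err
        disp[i] = best
    return disp
```

Running this on a right view that is the left view shifted by a few pixels recovers that shift at well-textured interior positions.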


NL team working on this project: Jasmin Leveille, Gennady Livitz, Anatoli Gorchetchnikov, Ennio Mingolla, Max Versace

Collaborators

The Neuromorphics Lab is highly collaborative, with connections across both academia and industry.