The event is part of the CELEST Catalyst Breakfast Series and is open to anyone, particularly those interested in working collaboratively with industry or in developing research with applications outside academia.
About the Event
- Tuesday, April 13, 2010
- 10:00 AM
- Room B02
- Department of Cognitive and Neural Systems
- Boston University
- 677 Beacon St
- Boston, MA 02215
How can we build machines capable of learning that are small, cheap, and power-efficient? The engineering challenges (power, computational primitives, communication) and algorithmic challenges (representation and coding, normalization, invariances, stability, homeostasis, ...) are daunting. The bias/variance dilemma strongly suggests that, as a practical matter, neural hardware systems require the incorporation of prior knowledge into their design; we cannot simply build a “hardware soup” of interacting neuron processors and hope that intelligence will emerge, at least not in a time frame that would make such systems useful.
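The bias/variance point can be seen in miniature with a toy experiment (an illustrative sketch, not material from the talk): a high-capacity model fitted to a few noisy samples generalizes worse than a constrained model whose structure encodes the right prior, here the assumption that the target function is linear.

```python
# Illustrative sketch of the bias/variance dilemma (hypothetical example, not
# from the talk): a flexible degree-9 polynomial ("hardware soup") vs. a
# linear model that builds in the correct prior about the target.
import numpy as np

rng = np.random.default_rng(0)

def generalization_error(degree, n_train=10, n_test=200, noise=0.3, trials=50):
    """Mean squared test error of a degree-`degree` polynomial fit
    to noisy samples of the linear target y = 2x + 1."""
    errs = []
    for _ in range(trials):
        x = rng.uniform(-1, 1, n_train)
        y = 2 * x + 1 + noise * rng.standard_normal(n_train)
        coeffs = np.polyfit(x, y, degree)          # fit on noisy training data
        xt = np.linspace(-1, 1, n_test)
        errs.append(np.mean((np.polyval(coeffs, xt) - (2 * xt + 1)) ** 2))
    return float(np.mean(errs))

linear_err = generalization_error(degree=1)    # prior matches the target
flexible_err = generalization_error(degree=9)  # high variance: fits the noise
assert linear_err < flexible_err
```

The constrained model wins not because it is smarter, but because its designer supplied structure the data alone could not; the same argument motivates building prior knowledge into neural hardware.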
Intelligent electronic systems that learn will require as much cleverness and insight in their design and structure as any other engineered system. The Cog Ex Machina platform is being developed to support a wide variety of biologically inspired computational models (such as directed and undirected graphical models, energy-based models, planning and navigation models, etc.) while reducing the power and area required for their implementation by several orders of magnitude.
Memristive nanodevices enable efficient implementation of dendritic and axonal arborization. I will describe the base abstractions and overall architecture of the platform, and show some examples of computation and learning.
Refreshments will follow the presentation.
About Greg Snider
Greg Snider describes himself as a dabbler in many domains and master of none. He has worked in analog and digital circuit design, medical instrumentation, communications, processor design, network protocols, digital signal processing, operating systems, compilers, logic synthesis, hardware and software systems architecture, and nanodevice research.
He was the architect of Teramac, a defect-tolerant, massively-parallel simulation engine built from several hundred custom FPGAs. Currently he is the principal investigator for the DARPA SyNAPSE program at Hewlett-Packard in conjunction with Boston University and UCLA.