Biologically inspired recurrent neural networks are computationally intensive models that make extensive use of memory and numerical integration methods to calculate neural dynamics and synaptic changes. The recent introduction of architectures integrating nanoscale memristor crossbars with conventional CMOS technology has made it possible to design networks that could leverage the future introduction of massively parallel, dense memristor-based memories to efficiently implement neural computation.
Despite the clear advances enabled by memristors, implementing neural dynamics in digital hardware still presents several challenges. In particular, large-scale multiplications and additions of neural activations and synaptic weights are inefficient in conventional hardware, leading, among other things, to high power consumption.
The Neuromorphics Lab, in collaboration with our colleagues at HP, is working on a methodology based on fuzzy inference to reduce the computational complexity of such networks by replacing multiplication and addition with fuzzy operators. Our method employs fuzzy inference systems to evaluate the learning equations of two widely used variants of Hebbian learning laws, pre- and post-synaptically gated decay. We have tested this approach in a recurrent network that learns a simple dataset, and have compared the fuzzy and canonical implementations.
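To give a flavor of the idea, the sketch below contrasts a canonical post-synaptically gated decay rule with a fuzzy variant in which each product is replaced by the min operator (a standard fuzzy AND). This is an illustrative simplification under assumed equations (dw = lr * post * (pre - w), activations and weights in [0, 1]), not the exact fuzzy inference system used in our implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Activations and weights bounded in [0, 1], as in fuzzy set theory.
pre = rng.random(8)       # presynaptic activations
post = rng.random(4)      # postsynaptic activations
w = rng.random((4, 8))    # synaptic weight matrix (post x pre)

LR = 0.1  # learning rate (assumed value, for illustration only)

def canonical_update(w, pre, post, lr=LR):
    """Post-synaptically gated decay: dw = lr * post * (pre - w).
    Expanded, each term is an ordinary product: post*pre - post*w."""
    return w + lr * (post[:, None] * pre[None, :] - post[:, None] * w)

def fuzzy_update(w, pre, post, lr=LR):
    """Same rule with each product replaced by min, the fuzzy AND.
    min/max comparisons are cheaper than multipliers in hardware."""
    return w + lr * (np.minimum(post[:, None], pre[None, :])
                     - np.minimum(post[:, None], w))

w_can = canonical_update(w, pre, post)
w_fuz = fuzzy_update(w, pre, post)

# The two updates move the weights in qualitatively similar directions;
# print the largest elementwise discrepancy after one step.
print(np.abs(w_can - w_fuz).max())
```

The hardware appeal is that `np.minimum` corresponds to a comparator rather than a multiplier, which is the source of the operation-count and power savings discussed below.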
Our results show that the behavior of the fuzzy network using min and max operations is similar to that of networks that employ regular multiplication and addition (see Figure 4 for an example on the Lena image), while yielding better computational efficiency in terms of the number of operations used and compute cycles performed. Using min and max operations we can implement learning more efficiently in memristive hardware, which translates into power savings (see Figure 6 for an analysis of computational complexity).