Brain models developed in the lab process sensory information coming from the virtual or real world and produce motor commands that control a robot in the real world or an animat in a virtual environment. To complement the brain modeling language (Neural Algebra) and the neural modeling operating environment (Cog Ex Machina), the Neuromorphic Lab is developing VirtU (Virtual Universe), a programmable virtual environment based on the open-source jMonkeyEngine. VirtU simulates the sensory data required by brain models and processes the brain model’s output, thus generating observable motor behavior. It does so not only in a fashion similar to real sensors and effectors, but also encapsulated within the same API; the latter makes the transition between the virtual and the real world seamless for any given brain model.
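The shared-API idea can be illustrated with a minimal sketch. All class and method names below are hypothetical (the source does not show VirtU's actual Java interfaces); the point is only that a brain model programmed against abstract sensor and effector interfaces runs unchanged whether the implementations are virtual or physical.

```java
// Hypothetical sketch of a unified sensor/effector API.
// None of these names come from VirtU itself.
public class SensorEffectorSketch {
    /** A source of sensory samples: virtual or physical. */
    interface Sensor { float[] read(); }
    /** A sink for motor commands: virtual animat or physical robot. */
    interface Effector { void apply(float[] motorCommand); }

    /** Virtual implementation backed by the simulated world. */
    static class VirtualEye implements Sensor {
        public float[] read() { return new float[] {0.25f, 0.75f}; } // simulated left/right luminance
    }
    /** Virtual effector that steers the animat in the simulation. */
    static class VirtualWheels implements Effector {
        float heading = 0f;
        public void apply(float[] cmd) { heading += cmd[0]; }
    }

    /** The brain model sees only the interfaces, so swapping in
        real-hardware implementations requires no model changes. */
    static float[] brainStep(float[] sensory) {
        // Toy policy: turn toward the brighter side.
        return new float[] { sensory[1] - sensory[0] };
    }

    public static void main(String[] args) {
        Sensor eye = new VirtualEye();
        VirtualWheels wheels = new VirtualWheels();
        wheels.apply(brainStep(eye.read()));
        System.out.println("heading=" + wheels.heading); // prints "heading=0.5"
    }
}
```

A real robot would supply its own `Sensor`/`Effector` implementations; `brainStep` is untouched, which is what makes the virtual-to-real transition seamless.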
Figure: Virtual Universe (VirtU) and its interactions with Cog Ex Machina architecture components.
Figure: Examples of virtual environments. Below, a snapshot of the Mars VE for the NASA STTR project and the Morris Water Maze virtual environment.
An animat living in a virtual world has virtual sensory organs (eyes, ears, etc.) that collect virtual sensory information from virtual sources, represented by VirtU objects such as terrains, plants, buildings, or other animats. The animat can collide with a VirtU object, which thereby becomes a source of touch data received by the animat's touch sensory organs. Virtual sensory data are sent to the corresponding fields in brain models, where they are processed to produce the relevant cognitive behavior.
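The collision-to-touch pathway can be sketched as follows. This is an assumption-laden illustration, not VirtU's actual API: the field name `"touch"`, the fixed number of body regions, and the callback signature are all invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a collision with a VirtU object is routed
// into a named input field of the brain model.
public class TouchRouting {
    /** Brain-model input fields, keyed by field name (names assumed). */
    static final Map<String, float[]> inputFields = new HashMap<>();

    /** Called by the physics engine when the animat touches an object. */
    static void onCollision(int bodyRegion, float contactForce) {
        // Eight body regions is an arbitrary choice for this sketch.
        float[] touch = inputFields.computeIfAbsent("touch", k -> new float[8]);
        touch[bodyRegion] = contactForce; // activation at the contacted region
    }

    public static void main(String[] args) {
        onCollision(3, 0.6f); // e.g., the animat brushes a wall
        System.out.println(inputFields.get("touch")[3]); // prints 0.6
    }
}
```

The brain model then reads the `"touch"` field on its next update cycle, exactly as it would read data from a physical touch sensor.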
VirtU also supports general-purpose custom events representing external intervention. These events can be designed to trigger a run-time change in the virtual environment or in an animat's behavior. In addition, VirtU allows placement of multiple video cameras at arbitrary locations in the world in order to observe the virtual world's dynamics.
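One common way to implement such run-time intervention is a queue of events drained at a safe point in the simulation loop; the sketch below assumes that pattern (the event interface, queue, and `lightLevel` property are all hypothetical, not taken from VirtU).

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of run-time intervention via custom events.
public class CustomEvents {
    /** A custom event mutates the world when applied. */
    interface WorldEvent { void apply(World w); }

    /** Minimal stand-in for the virtual world's mutable state. */
    static class World { float lightLevel = 1.0f; }

    /** Events posted by an experimenter, pending application. */
    static final Queue<WorldEvent> pending = new ArrayDeque<>();

    /** Drain queued events between simulation steps, so changes
        never occur mid-update. */
    static void processEvents(World w) {
        while (!pending.isEmpty()) pending.poll().apply(w);
    }

    public static void main(String[] args) {
        World w = new World();
        pending.add(world -> world.lightLevel = 0.2f); // dim the lights at run time
        processEvents(w);
        System.out.println(w.lightLevel); // prints 0.2
    }
}
```

Applying events only between steps keeps the environment consistent from the brain model's point of view: each sensory frame reflects either the pre-event or post-event world, never a half-applied change.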