The general objectives of eMorph are to implement embodied intelligence by designing space-variant morphologies and computational structures for neuromorphic sensors, together with developing asynchronous, data-driven algorithms that best exploit the sensor's properties.
To reach these general objectives we will proceed by addressing three specific subgoals:
- design and build asynchronous, data-driven, neuromorphic VLSI visual sensors, evaluating the morphology of the photosensitive elements and the focal-plane processing implementation across different versions of the sensor.
- develop a new methodology, algorithms, and a dedicated hardware infrastructure for processing asynchronous data (i.e., events) in real time, departing from the standard frame-based approach. In particular, by:
  - developing low-level, event-based visual feature-extraction methods on embedded digital hardware, which provide the basic asynchronous computational primitives for higher-level, equally asynchronous, machine-vision algorithms.
  - developing an event-driven adaptive control architecture for processing Address-Event Representation (AER) sensory data and consequently controlling the humanoid robot's actuators. This activity will combine learning and reconfigurable morphology in the implementation of the control structure.
- validate the potential of this new approach on a humanoid robot fitted with the neuromorphic sensors and controlled by the asynchronous machine-vision algorithms in real-time behavioral tasks.
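To make the event-driven processing style concrete, the following Python sketch shows one minimal way AER-like events could be consumed asynchronously: each event carries a pixel address, a timestamp, and a polarity, and updates only its own pixel's state, with no global frame clock. The `Event` fields, the `window` threshold, and the toy "sustained activity" feature are illustrative assumptions, not project specifications.

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int        # pixel column (AER address, x part) -- assumed layout
    y: int        # pixel row (AER address, y part)
    t: float      # timestamp in seconds
    polarity: int # +1 brightness increase, -1 decrease

def process_events(events, width, height, window=0.01):
    """Event-driven accumulation: each incoming event touches only the
    state of its own pixel, so computation is data-driven and local.
    Two events at the same pixel within `window` seconds are flagged as
    locally sustained change (a crude temporal feature)."""
    last_t = [[None] * width for _ in range(height)]  # per-pixel last spike time
    activity = []
    for ev in events:
        prev = last_t[ev.y][ev.x]
        if prev is not None and ev.t - prev < window:
            activity.append((ev.x, ev.y, ev.t))
        last_t[ev.y][ev.x] = ev.t
    return activity
```

Note that, unlike a frame-based pipeline, no work is done for silent pixels: cost scales with the number of events, which is the property the proposal aims to exploit.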
In summary, with the eMorph approach, embodied intelligence will arise from the "morphological computation" performed by the sensor and by the algorithms, which incorporate design principles of neural systems, such as adaptation and relative, local, cooperative computation, that best capture the information content of visual stimuli and intrinsically adapt to the time constants of the real world.
The concept of "morphological computation" is applied at many levels in the project: starting from exploring the role of sensor morphology (e.g., implementing space-variant vision), going through the study of the interaction between sensor and robot, and up to shaping the computational substrate and algorithms, which for the first time will be adapted to the intrinsically asynchronous, data-driven characteristics of the real world.
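As an illustration of space-variant vision, the classic log-polar mapping concentrates resolution at a fixation point and lets it fall off with eccentricity, much like a foveated retina. The sketch below is a generic textbook mapping, not the project's specific sensor morphology; the fovea clamp `rho0` is an assumed parameter.

```python
import math

def to_logpolar(x, y, cx, cy, rho0=1.0):
    """Map a Cartesian pixel (x, y) to log-polar coordinates
    (log-radius, angle) about the fixation point (cx, cy).
    Sampling density is highest near the centre and decreases with
    eccentricity, i.e., the mapping itself performs a space-variant
    'morphological computation' on the image geometry."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    rho = math.log(max(r, rho0) / rho0)  # log-radius; clamp inside the fovea
    theta = math.atan2(dy, dx)           # angle in radians
    return rho, theta
```

Because equal steps in `rho` correspond to exponentially growing rings in the image plane, a fixed log-polar grid devotes most of its samples to the centre of gaze, trading peripheral detail for a wide field of view at constant cost.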