Neuromorphic Attention Models for Event Data

Human perception of a complex visual scene relies on a cognitive process of visual attention that sequentially directs the gaze towards regions of interest, so that relevant information is acquired selectively through foveal vision, which provides maximal acuity and contrast sensitivity in a small region around the gaze position, while peripheral vision covers a large field of view at lower resolution and contrast sensitivity. This process combines bottom-up attention, driven by saliency, and top-down attention, driven by the demands of the task (recognition, counting, tracking, etc.). While numerous works have investigated visual attention in standard RGB images, it has barely been exploited for the recently developed event sensors (Dynamic Vision Sensors, DVS).

Inspired by human perception, the interdisciplinary NAMED project aims to design neuromorphic event-based vision systems for embedded platforms such as autonomous vehicles and robots.

A first stage will investigate and develop new bottom-up and top-down visual attention models for event sensors, in order to focus processing on relevant parts of the scene; this stage will require understanding what drives attention in event data. A second stage will design and implement a hybrid digital-neuromorphic attentive system for ultra-fast, low-latency, and energy-efficient embedded vision; this stage will require setting up a dual vision system (a foveal RGB sensor and a parafoveal DVS), designing spiking and deep neural networks, and exploiting a novel system-on-chip developed at ETH Zürich. A last stage will validate and demonstrate the results by applying the robotic operational platform to real-life dynamic scenarios such as autonomous vehicle navigation, ultra-fast object avoidance, and target tracking.
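To give a concrete flavour of what a bottom-up attention model on event data might look like, the sketch below (purely illustrative, not the project's actual models) accumulates recent DVS events into a per-pixel count map, smooths it, and takes the activity maximum as a fixation point for a region of interest. The event format (x, y, timestamp arrays), the DAVIS346-like sensor resolution, and all parameter values are assumptions made for the example.

    import numpy as np

    def event_saliency_roi(xs, ys, ts, sensor_size=(260, 346),
                           window_us=10_000, sigma=8, roi_size=64):
        """Pick the most active region of interest from a DVS event stream.

        A minimal bottom-up saliency sketch: recent events are accumulated
        into a 2D count map, smoothed with a Gaussian, and the location of
        the maximum is returned as the centre of a square ROI.
        xs, ys: integer pixel coordinates; ts: sorted timestamps (microseconds).
        """
        # Keep only events from the most recent temporal window.
        recent = ts >= ts[-1] - window_us
        xs, ys = xs[recent], ys[recent]

        # Accumulate events into a per-pixel count map: dense recent
        # activity serves here as a crude proxy for saliency.
        h, w = sensor_size
        counts = np.zeros((h, w), dtype=np.float32)
        np.add.at(counts, (ys, xs), 1.0)

        # Separable Gaussian blur to pool activity over a neighbourhood.
        radius = 3 * sigma
        k = np.exp(-0.5 * (np.arange(-radius, radius + 1) / sigma) ** 2)
        k /= k.sum()
        blurred = np.apply_along_axis(
            lambda r: np.convolve(r, k, mode="same"), 1, counts)
        blurred = np.apply_along_axis(
            lambda c: np.convolve(c, k, mode="same"), 0, blurred)

        # The fixation point is the saliency maximum, clamped so the ROI
        # stays inside the sensor frame.
        cy, cx = np.unravel_index(np.argmax(blurred), blurred.shape)
        half = roi_size // 2
        cy = int(np.clip(cy, half, h - half))
        cx = int(np.clip(cx, half, w - half))
        return (cx, cy), blurred

    # Toy usage: a burst of events around pixel (120, 80) on a quiet sensor.
    rng = np.random.default_rng(0)
    xs = rng.integers(115, 126, 500)
    ys = rng.integers(75, 86, 500)
    ts = np.sort(rng.integers(0, 10_000, 500))
    (cx, cy), saliency = event_saliency_roi(xs, ys, ts)
    print(cx, cy)  # close to (120, 80)

Such an event-count heuristic only captures activity-driven saliency; the models developed in the project would go well beyond it, notably by incorporating top-down, task-driven signals.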


Project type: ANR PRCI with the Swiss NSF (project number ANR-23-CE45-0025-01)

Project start: Feb 1st 2024

Project end: Jan 31st 2028

Contact: Jean Martinet (jean.martinet@univ-cotedazur.fr)