
SNN / DVS Projects

Fast Event Classification

High-speed classification of event-camera streams using sparse, asynchronous representations optimised for low-latency neuromorphic pipelines.

Tags: DVS · Classification · SNN · Low-latency

Event Stream / Classification Animation placeholder

Replace with a DVS event-stream visualisation or classification demo video.

Overview

Dynamic Vision Sensors (DVS) output asynchronous streams of events rather than frames, enabling ultra-low-latency perception. This project investigates classification methods that exploit the sparse, asynchronous nature of event data to achieve fast, energy-efficient recognition without waiting for a full frame accumulation window.
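To make the frame-free data model concrete, the sketch below builds a synthetic event stream as (timestamp, x, y, polarity) records and shows why an event-driven pipeline can act before a frame-based one. The stream contents, sensor resolution, and time window are all hypothetical values chosen for illustration.

```python
import numpy as np

# A DVS event is a 4-tuple: timestamp (microseconds), x, y, polarity (+1/-1).
# Hypothetical synthetic stream of 1000 events on a 128x128 sensor,
# sorted by time as a real sensor would deliver them.
rng = np.random.default_rng(0)
n = 1000
events = np.zeros(n, dtype=[("t", np.int64), ("x", np.int16),
                            ("y", np.int16), ("p", np.int8)])
events["t"] = np.sort(rng.integers(0, 10_000, n))  # spread over 10 ms
events["x"] = rng.integers(0, 128, n)
events["y"] = rng.integers(0, 128, n)
events["p"] = rng.choice(np.array([-1, 1], dtype=np.int8), n)

# A frame-based pipeline would wait the full 10 ms accumulation window;
# an event-driven classifier can already consume the earliest events.
first_burst = events[events["t"] < 1_000]  # events in the first 1 ms
```

The structured array keeps the stream sparse (one record per event) instead of rasterising it, which is the property the classification methods in this project exploit.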

Event Representation

Placeholder — describe the event representation used (e.g., time surfaces, voxel grids, spike trains, raw event lists) and the rationale for the chosen approach in the context of low-latency classification.
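As a concrete instance of one representation named above, here is a minimal voxel-grid sketch: events are binned along the time axis into a fixed-size tensor, trading a little timing precision for a shape that standard network layers can consume. The bin count, resolution, and accumulation rule (signed polarity sum) are assumptions for illustration, not the project's chosen representation.

```python
import numpy as np

def events_to_voxel_grid(t, x, y, p, n_bins=5, h=128, w=128):
    """Accumulate event polarities into an (n_bins, h, w) voxel grid.

    Timestamps are normalised over the stream's span so each event falls
    into one of n_bins temporal slices; polarity is summed per pixel.
    """
    grid = np.zeros((n_bins, h, w), dtype=np.float32)
    t = t.astype(np.float64)
    span = max(t[-1] - t[0], 1.0)
    # Map each timestamp to a bin index, clipping the last event
    # into the final bin.
    b = np.minimum(((t - t[0]) / span * n_bins).astype(int), n_bins - 1)
    np.add.at(grid, (b, y, x), p)  # unbuffered scatter-add per event
    return grid

# Tiny worked example with four hypothetical events:
t = np.array([0, 10, 20, 30]); x = np.array([1, 2, 3, 4])
y = np.array([0, 0, 0, 0]);    p = np.array([1, -1, 1, 1])
grid = events_to_voxel_grid(t, x, y, p, n_bins=2, h=8, w=8)
```

Time surfaces or raw spike trains would preserve finer timing; a voxel grid is the common middle ground when a conventional tensor pipeline sits downstream.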

Classification Architecture

Placeholder — describe the network architecture, whether SNN, graph-based, or hybrid, and how it processes events asynchronously or in micro-batches to minimise classification latency.
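To illustrate the micro-batch processing idea mentioned above, here is a minimal leaky integrate-and-fire (LIF) layer: it integrates each small batch of input spikes as it arrives, so a running class estimate is available after every batch period rather than after a full frame. The layer sizes, decay constant, threshold, and random weights are all placeholder assumptions, not the project's architecture.

```python
import numpy as np

class LIFLayer:
    """Minimal leaky integrate-and-fire layer (a sketch, not the
    project's model). Membrane potential decays each step, spikes are
    emitted on threshold crossing, and spiking neurons reset to zero."""

    def __init__(self, n_in, n_out, decay=0.9, threshold=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.5, (n_in, n_out))
        self.decay, self.threshold = decay, threshold
        self.v = np.zeros(n_out)  # membrane potentials

    def step(self, spikes_in):
        """Advance one micro-batch: integrate, fire, hard-reset."""
        self.v = self.decay * self.v + spikes_in @ self.w
        out = (self.v >= self.threshold).astype(float)
        self.v = np.where(out > 0, 0.0, self.v)
        return out

# Feed ten micro-batches of random binary input spikes and read out
# the class with the most output spikes so far.
rng = np.random.default_rng(42)
layer = LIFLayer(n_in=16, n_out=4)
counts = np.zeros(4)
for _ in range(10):
    spikes = (rng.random(16) < 0.3).astype(float)
    counts += layer.step(spikes)
pred = int(np.argmax(counts))  # running prediction, available each batch
```

The key latency property is that `pred` is refreshed after every micro-batch, so a confident answer can be reported long before the input stream ends.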

Results

Placeholder for accuracy, latency (time-to-first-classification), and energy metrics. Add benchmark comparisons against frame-based baselines here.
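Time-to-first-classification needs a precise operational definition before results can be filled in; one reasonable choice, sketched below, is the earliest event timestamp after which the running prediction settles on the final label and never changes again. This definition and the example values are assumptions for illustration; the benchmark protocol here may differ.

```python
import numpy as np

def time_to_first_classification(event_times, running_preds, final_label):
    """Earliest timestamp from which the running prediction equals
    final_label and stays equal for the rest of the stream.
    Returns None if the prediction never settles on final_label."""
    running_preds = np.asarray(running_preds)
    if running_preds.size == 0 or running_preds[-1] != final_label:
        return None
    wrong_idx = np.nonzero(running_preds != final_label)[0]
    start = 0 if wrong_idx.size == 0 else wrong_idx[-1] + 1
    return event_times[start]

# Hypothetical trace: prediction flips to class 2 at t=200 us and holds.
ttfc = time_to_first_classification(
    event_times=[0, 100, 200, 300],
    running_preds=[1, 0, 2, 2],
    final_label=2,
)
```

A frame-based baseline's equivalent number is simply its frame period plus inference time, which makes the comparison in this section straightforward to tabulate.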