Sensors (Basel). 2024 Nov 21;24(23):7430.
doi: 10.3390/s24237430.

Adaptive Optimization and Dynamic Representation Method for Asynchronous Data Based on Regional Correlation Degree


Sichao Tang et al. Sensors (Basel).

Abstract

Event cameras, as bio-inspired visual sensors, offer significant advantages in their high dynamic range and high temporal resolution for visual tasks. These capabilities enable efficient and reliable motion estimation even in the most complex scenes. However, these advantages come with certain trade-offs. For instance, current event-based vision sensors have low spatial resolution, and the process of event representation can result in varying degrees of data redundancy and incompleteness. Additionally, due to the inherent characteristics of event stream data, they cannot be utilized directly; pre-processing steps such as slicing and frame compression are required. Currently, various pre-processing algorithms exist for slicing and compressing event streams. However, these methods fall short when dealing with multiple subjects moving at different and varying speeds within the event stream, potentially exacerbating the inherent deficiencies of the event information flow. To address this longstanding issue, we propose a novel and efficient Asynchronous Spike Dynamic Metric and Slicing algorithm (ASDMS). ASDMS adaptively segments the event stream into fragments of varying lengths based on the spatiotemporal structure and polarity attributes of the events. Moreover, we introduce a new Adaptive Spatiotemporal Subject Surface Compensation algorithm (ASSSC). ASSSC compensates for missing motion information in the event stream and removes redundant information, thereby achieving better performance and effectiveness in event stream segmentation compared to existing event representation algorithms. Additionally, after compressing the processed results into frame images, the imaging quality is significantly improved. Finally, we propose a new evaluation metric, the Actual Performance Efficiency Discrepancy (APED), which combines actual distortion rate and event information entropy to quantify and compare the effectiveness of our method against other existing event representation methods. The final experimental results demonstrate that our event representation method outperforms existing approaches and addresses the shortcomings of current methods in handling event streams with multiple entities moving at varying speeds simultaneously.
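The abstract describes APED as a combination of the actual distortion rate and event information entropy. As a rough illustration of how such a score could be assembled (the function names, the linear weighting `alpha`, and the normalization are assumptions for illustration; the paper defines the actual metric), a sketch in Python:

```python
import numpy as np

def event_information_entropy(counts):
    """Shannon entropy (in bits) of a per-cell event-count histogram."""
    p = counts[counts > 0].astype(float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def aped_score(distortion_rate, counts, alpha=0.5):
    """Toy stand-in for an APED-style score: a linear mix of distortion
    rate and entropy. The real combination rule is defined in the paper."""
    return alpha * distortion_rate + (1.0 - alpha) * event_information_entropy(counts)
```

A uniform count histogram over four cells, for instance, yields the maximal entropy of 2 bits, so a representation that spreads events evenly scores differently from one that collapses them into a single cell.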

Keywords: event cameras; event representations; slicing methods.


Conflict of interest statement

The authors declare no conflicts of interest.

Figures

Figure 1
Schematic diagram of the human retina model and corresponding event camera pixel circuit.
Figure 2
(a) We consider the light intensity change signals received by the corresponding pixels as computational elements in the time domain. (b) From the statistical results, it can be seen that the ON polarity ratio varies randomly over the time index.
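The ON polarity ratio plotted in panel (b) can be computed over consecutive fixed-size event windows; a minimal sketch (the window size and the encoding of ON/OFF polarities as +1/−1 are assumptions):

```python
import numpy as np

def on_polarity_ratio(polarities, window=1000):
    """Fraction of ON (+1) events in consecutive windows of `window` events."""
    p = np.asarray(polarities)
    n_windows = len(p) // window
    return np.array([(p[i * window:(i + 1) * window] == 1).mean()
                     for i in range(n_windows)])
```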
Figure 3
Time spans of the event cuboids produced by our algorithm.
Figure 4
This figure illustrates the time surface of events in the original event stream. For clarity, only the x–t components are shown. Red crosses represent non-main events, and blue dots represent main events. (a) In the time surface described in [50] (corresponding to Formula (24)), only the occurrence frequency of the nearest events around the main event is considered. Consequently, non-main events with disruptive effects may have significant weight. (b) The local memory time surface corresponding to Formula (26) considers the influence weight of historical events within the current spatiotemporal window. This approach reduces the ratio of non-main events involved in the time surface calculation, better capturing the true dynamics of the event stream. (c) By spatially averaging the time surfaces of all events in adjacent cells, the time surface corresponding to Formula (29) can be further regularized. Due to the spatiotemporal regularization, the influence of non-main events is almost completely suppressed.
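The plain time surface of panel (a) keeps, per pixel, the timestamp of the most recent event and decays it exponentially toward the reference time. A minimal sketch of that baseline (the decay constant `tau` and microsecond timestamps are assumptions; the paper's Formulas (26) and (29) add local memory and spatial averaging on top of this):

```python
import numpy as np

def time_surface(events, t_ref, width, height, tau=50e3):
    """Exponential-decay time surface evaluated at reference time t_ref.

    events: iterable of (x, y, t) with timestamps t <= t_ref (microseconds).
    Pixels that never fired stay at -inf, so exp() maps them to 0."""
    last_t = np.full((height, width), -np.inf)
    for x, y, t in events:
        if t <= t_ref:
            last_t[int(y), int(x)] = max(last_t[int(y), int(x)], t)
    return np.exp((last_t - t_ref) / tau)
```

A pixel that fired exactly at `t_ref` maps to 1, older events decay toward 0, and silent pixels are exactly 0.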
Figure 5
Schematic of the Gromov–Wasserstein Event Discrepancy between the original event stream and the event representation results.
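The Gromov–Wasserstein discrepancy compares the internal pairwise-distance structure of two point sets under a coupling T: for a fixed coupling the objective is the sum of (C1[i,k] − C2[j,l])² T[i,j] T[k,l]. A didactic evaluation of that generic objective follows; the event-specific cost matrices used by the paper's Gromov–Wasserstein Event Discrepancy are not reproduced here.

```python
import numpy as np

def gw_cost(C1, C2, T):
    """Gromov-Wasserstein objective for a fixed coupling T.

    C1, C2: pairwise-distance matrices of the two spaces.
    Dense quadruple loop; fine for didactic sizes only."""
    n, m = T.shape
    total = 0.0
    for i in range(n):
        for j in range(m):
            for k in range(n):
                for l in range(m):
                    total += (C1[i, k] - C2[j, l]) ** 2 * T[i, j] * T[k, l]
    return total
```

Coupling two identical spaces with the identity matching gives a cost of zero, which is the sanity check one expects of any discrepancy between an event stream and a faithful representation of it.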
Figure 6
Illustration of the grid positions corresponding to non-zero entropy values.
Figure 7
Grayscale images and 3D event stream diagrams for three captured scenarios: (a) Grayscale illustration of the corresponding scenarios; (b) 3D event stream illustration of the corresponding scenarios.
Figure 8
Variation of the GWEDN value for each algorithm as the number of event samples changes.
Figure 9
Illustration of the event stream processing results for Scene A by different algorithms: (a) TORE; (b) ATSLTD; (c) Voxel Grid; (d) MDES; (e) Ours.
Figure 10
APED data obtained from the event stream processing results for Scene A by different algorithms.
Figure 11
Illustration of the event stream processing results for Scene B by different algorithms: (a) TORE; (b) ATSLTD; (c) Voxel Grid; (d) MDES; (e) Ours.
Figure 12
APED data obtained from the event stream processing results for Scene B by different algorithms.
Figure 13
Illustration of the event stream processing results for Scene C by different algorithms: (a) TORE; (b) ATSLTD; (c) Voxel Grid; (d) MDES; (e) Ours.
Figure 14
APED data obtained from the event stream processing results for Scene C by different algorithms.

References

    1. Lichtsteiner P., Posch C., Delbruck T. A 128 × 128 120 dB 15 μs latency asynchronous temporal contrast vision sensor. IEEE J. Solid-State Circuits. 2008;43:566–576. doi: 10.1109/JSSC.2007.914337.
    2. Brandli C., Berner R., Yang M., Liu S.-C., Delbruck T. A 240 × 180 130 dB 3 µs latency global shutter spatiotemporal vision sensor. IEEE J. Solid-State Circuits. 2014;49:2333–2341. doi: 10.1109/JSSC.2014.2342715.
    3. Posch C., Matolin D., Wohlgenannt R. A QVGA 143 dB dynamic range frame-free PWM image sensor with lossless pixel-level video compression and time-domain CDS. IEEE J. Solid-State Circuits. 2010;46:259–275. doi: 10.1109/JSSC.2010.2085952.
    4. Oyster C. The analysis of image motion by the rabbit retina. J. Physiol. 1968;199:613–635. doi: 10.1113/jphysiol.1968.sp008671.
    5. Murphy-Baum B.L., Awatramani G.B. An old neuron learns new tricks: Redefining motion processing in the primate retina. Neuron. 2018;97:1205–1207. doi: 10.1016/j.neuron.2018.03.007.
