Sleep Apnea Detection with Polysomnography and Depth Sensors
- PMID: 32121672
- PMCID: PMC7085736
- DOI: 10.3390/s20051360
Abstract
This paper is devoted to proving two goals: to show that various depth sensors can record breathing rate with the same accuracy as the contact sensors used in polysomnography (PSG), and to prove that breathing signals from depth sensors have the same sensitivity to breathing changes as PSG records. The breathing signal from depth sensors can therefore be used for classification of sleep apnea events with the same success rate as PSG data. The recent development of computational technologies has led to a big leap in the usability of range imaging sensors. New depth sensors are smaller, have a higher sampling rate, better resolution, and greater precision. They are widely used for computer vision in robotics, but they can also serve as non-contact and non-invasive systems for monitoring breathing and its features. The breathing rate can be represented as the dominant frequency of the recorded signal. All tested depth sensors (MS Kinect v2, RealSense SR300, R200, D415 and D435) are capable of recording depth data with sufficient precision in depth sensing and sampling frequency in time (20-35 frames per second (FPS)) to capture the breathing rate. The spectral analysis shows a breathing rate between 0.2 Hz and 0.33 Hz, which corresponds to the breathing rate of an adult person during sleep. To test the quality of the breathing signal processed by the proposed workflow, a neural network classifier (simple competitive NN) was trained on a set of 57 whole-night polysomnographic records with sleep apnea events classified by a sleep specialist. The resulting classifier can mark all apnea events with 100% accuracy when compared to the classification of a sleep specialist, which is useful for estimating the number of events per hour. When compared to the sleep specialist's classification of polysomnographic breathing signal segments, which is used for calculating the length of each event, the classifier has an F1 score of 92.2% (sensitivity 89.1% and specificity 98.8%). The classifier also proves successful when tested on breathing signals from the MS Kinect v2 and RealSense R200 with simulated sleep apnea events. The whole process can be fully automatic after implementation of automatic chest area segmentation of the depth data.
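
The abstract reports that breathing rate appears as the dominant spectral component (0.2-0.33 Hz) of the depth signal recorded at 20-35 FPS. The following Python sketch illustrates this spectral step only; it is not the authors' code, and the chest region of interest, frame rate, and synthetic test data are assumptions for illustration.

    # Minimal sketch: estimate breathing rate from depth frames over an assumed chest ROI.
    import numpy as np
    from scipy.signal import detrend, welch

    def breathing_rate_hz(depth_frames, fps, roi):
        """Estimate the dominant breathing frequency (Hz).

        depth_frames : array (n_frames, height, width), depth values per frame
        fps          : effective sensor frame rate (assumed 20-35 FPS range)
        roi          : (row_slice, col_slice) covering the chest area (hypothetical segmentation)
        """
        rows, cols = roi
        # 1D breathing signal: mean chest depth per frame
        signal = depth_frames[:, rows, cols].reshape(len(depth_frames), -1).mean(axis=1)
        signal = detrend(signal)  # remove slow drift (e.g., posture changes)
        freqs, power = welch(signal, fs=fps, nperseg=min(len(signal), 512))
        # restrict to a plausible adult sleep breathing band (~0.1-0.5 Hz)
        band = (freqs >= 0.1) & (freqs <= 0.5)
        return freqs[band][np.argmax(power[band])]

    # Synthetic example: 60 s at 30 FPS, chest area oscillating at 0.25 Hz
    fps = 30
    t = np.arange(60 * fps) / fps
    frames = np.full((len(t), 64, 64), 1500.0)  # static background at 1.5 m
    frames[:, 20:40, 20:40] += 5.0 * np.sin(2 * np.pi * 0.25 * t)[:, None, None]
    print(breathing_rate_hz(frames, fps, (slice(20, 40), slice(20, 40))))  # ~0.25 Hz

In this sketch the per-frame mean depth of the chest region serves as the breathing signal, and Welch's periodogram picks out the breathing frequency; the apnea classification itself is described in the paper as a separate competitive neural network trained on PSG-annotated segments.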
Keywords: breathing analysis; computational intelligence; depth sensors; human-machine interaction; image processing; signal processing.
Conflict of interest statement
The authors declare no conflict of interest.