Artificial Intelligence for the Evaluation of Postures Using Radar Technology: A Case Study

Davide De Vittorio et al. Sensors (Basel). 2024 Sep 25;24(19):6208. doi: 10.3390/s24196208.

Abstract

In the last few decades, major progress has been made in the medical field; in particular, new treatments and advanced health technologies allow for considerable improvements in life expectancy and, more broadly, in quality of life. As a consequence, the number of elderly people is expected to increase in the coming years. This trend, along with the need to improve the independence of frail people, has led to the development of unobtrusive solutions that monitor daily activities and provide feedback in case of risky situations and falls. Monitoring devices based on radar sensors represent a possible approach to postural analysis that preserves the person's privacy and is especially useful in domestic environments. This work presents an innovative solution that combines millimeter-wave radar technology with artificial intelligence (AI) to detect different types of postures: a series of algorithms and neural network methodologies are evaluated on experimental acquisitions with healthy subjects. All methods produce very good results on the main performance metrics; the long short-term memory (LSTM) and gated recurrent unit (GRU) networks show the most consistent results while maintaining reduced computational complexity, making them very good candidates for implementation in a dedicated embedded system designed to monitor postures.

Keywords: LSTM; artificial intelligence; embedded systems; fall detection; posture analysis; radar technology.
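
The abstract page does not include the authors' implementation; as a rough, hedged sketch of the kind of sequence classifier it describes, the following PyTorch fragment classifies a sequence of per-frame radar features into postures. The layer sizes, the seven-feature frame summary, and the class set are illustrative assumptions, not values from the paper.

    # Minimal sketch (not the authors' code): an LSTM classifier over
    # per-frame features extracted from the radar point cloud. Each frame
    # is assumed to be summarized by a small feature vector (e.g., centroid
    # position, spread, and point count).
    import torch
    import torch.nn as nn

    class PostureLSTM(nn.Module):
        def __init__(self, n_features=7, hidden=64, n_classes=3):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)  # standing / sitting / fallen

        def forward(self, x):             # x: (batch, frames, n_features)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])  # classify from the last time step

    model = PostureLSTM()
    logits = model(torch.randn(8, 50, 7))  # 8 sequences of 50 frames each

Swapping nn.LSTM for nn.GRU gives the GRU variant; in PyTorch, the bidirectional=True and proj_size arguments correspond to the Bi-LSTM and projected LSTM variants compared in the paper (the bidirectional case doubles the feature size fed to the linear head).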

Conflict of interest statement

The authors declare no conflicts of interest.

Figures

Figure 1
Visual output: the red circle represents the sensing device and shows its position in the monitored volume; the yellow and orange circles are the spots indicating that two people are in the room, while the small blue circles form the point clouds. In this image, only the person on the right (orange spot) has a large, well-defined point cloud, while the person identified by the yellow spot has only a few points in the cloud.
Figure 2
Operating principle of the RNN and architecture of the LSTM cell.
Figure 3
Pseudo-code for LSTM, Bi-LSTM, projected LSTM, and GRU, for (a) the subdivision into training and test sets and (b) the leave-one-out approach. Line 6 differs between the two cases: in (a), the data are divided into training and test sets at all the ratios mentioned previously, while in (b) a single subject is left out of the training set, following the leave-one-out approach. Line 8 is written in a generalized way, since it depends on the DL approach considered (written in italics, e.g., LSTM).
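
As a hedged illustration of the two evaluation protocols this caption describes, the sketch below uses scikit-learn splitting utilities on synthetic data, with a logistic regression standing in for the four networks; the array shapes, subject count, and stand-in classifier are all assumptions.

    # Sketch of the two protocols in Figure 3 (synthetic data; a logistic
    # regression stands in for LSTM, Bi-LSTM, projected LSTM, and GRU).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneGroupOut, train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 20))           # 120 recordings x 20 features (assumed)
    y = rng.integers(0, 3, size=120)         # three postures (assumed labels)
    subjects = np.repeat(np.arange(12), 10)  # 12 subjects x 10 recordings (assumed)

    # (a) subdivision into training and test sets, at the ratios used in the paper
    for test_size in (0.5, 0.4, 0.3, 0.2, 0.1):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, stratify=y, random_state=0)
        acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
        print(f"split {1 - test_size:.0%}-{test_size:.0%}: accuracy {acc:.2f}")

    # (b) leave-one-out: each subject in turn is excluded from training
    loo = [LogisticRegression(max_iter=1000).fit(X[tr], y[tr]).score(X[te], y[te])
           for tr, te in LeaveOneGroupOut().split(X, y, subjects)]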
Figure 4
Room where the first experimental tests were performed, shown from different angles. The device can be seen in the top-right corner of the third image, on the right of the page (highlighted by the red circle).
Figure 5
Second room, where all other experimental tests were performed, shown from different angles. The device can be seen in the top-right corner of the second image, on the right of the page, attached to the wall above the door (highlighted by the red circle). Since this room offered more space for movement, the walking, sitting, and falling tests were conducted here.
Figure 6
A person walking randomly in the room. The graph shows the three spatial coordinates (x in red, y in green, z in blue), with their maximum (red circle), minimum (blue circle), and mean (red cross) values. The z coordinate decreases as the subject moves closer to the sensor, which is confirmed by the x and y coordinates taking smaller values as well.
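
The statistics in this plot are straightforward to reproduce; a minimal sketch, assuming the tracker outputs one (x, y, z) centroid per frame (the synthetic positions are placeholders):

    # Per-axis maximum, minimum, and mean of a tracked trajectory,
    # as plotted in Figure 6 (synthetic data used for illustration).
    import numpy as np

    track = np.random.default_rng(1).normal(size=(300, 3))  # frames x (x, y, z)
    for axis, name in enumerate("xyz"):
        coord = track[:, axis]
        print(f"{name}: max={coord.max():.2f}  min={coord.min():.2f}  "
              f"mean={coord.mean():.2f}")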
Figure 7
The image on the left displays a distortion in the point cloud and a double spot, which could be mistaken for two people in the room. The schematic body reconstruction clearly highlights that, without prior knowledge of the measurement context, the situation could easily be misinterpreted. The three-dimensional representation on the right shows that, even with no one in the room, metallic furniture produces reflections, resulting in an actual (albeit small) point cloud.
Figure 8
Speed along the two axes, x and y, for a person walking randomly in the room (a,b) and falling (c,d). The crosses always represent the mean value of the corresponding curve. In (c), the position along the three axes is reported, and, in (d), the speed of the fall is shown. Here, the legend of colors and markers is the same as in Figure 6. Compared to walking, a fall presents a very rapid increase in speed, followed by a prolonged stop.
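
A hedged sketch of this analysis: speed obtained by finite differences of the tracked position at the device's 10 frames/s (stated in the Figure 9 caption), plus a naive fall flag matching the "rapid speed increase followed by a prolonged stop" signature. All thresholds are illustrative assumptions, not values from the paper.

    # Speed from finite differences of a (frames, 3) position track, and a
    # naive fall heuristic: a speed peak followed by near-zero motion.
    # Thresholds (in m/s, assuming positions in meters) are assumptions.
    import numpy as np

    FPS = 10.0  # acquisition rate stated in the Figure 9 caption

    def axis_speed(track):
        """Per-frame speed along each axis, shape (frames - 1, 3)."""
        return np.abs(np.diff(track, axis=0)) * FPS

    def looks_like_fall(track, peak=1.5, still=0.05, settle=5, still_frames=20):
        v = np.linalg.norm(np.diff(track, axis=0), axis=1) * FPS
        for p in np.flatnonzero(v > peak):      # rapid increase in speed...
            window = v[p + settle : p + settle + still_frames]
            if len(window) == still_frames and (window < still).all():
                return True                     # ...followed by a prolonged stop
        return False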
Figure 9
(a,b) show the same test as presented in Figure 8c,d, while (c) presents another experiment of a person falling. The crosses always represent the mean value of the corresponding curve. The movement graphs are associated with the corresponding spot and with the number of points in the cloud at each frame; the device collects 10 frames/s. In both cases, the evolution is very similar, as can be seen in (b) and (c), respectively. The person is identified by the system after a short transient period, as shown by the blue line. The number of points in the cloud at each frame is given by the orange curve, which clearly shows that, after the person is detected, the number decreases and is considerably reduced when the fall occurs, potentially causing problems in reconstructing the point cloud.
Figure 10
(a) is the graphical representation of two output classes, standing and falling, where the fully colored circles belong to the training set and the others are the detected ones. (b) is the same representation with the addition of the sitting posture.
Figure 11
Same representations as in Figure 10. Here, the classes appear more separated than with the previous method, but the results are very similar.
Figure 12
Two-class detection between falling and standing upright. As in the previous approaches, the algorithm behaves well and allows one to discriminate between postures.
Figure 13
The images show two people in the same room, both in an upright position and periodically walking. On the left, the software correctly detects both of them, each with a single spot and corresponding point cloud. On the right, the image shows one subject with two associated spots, of which only the red one is correct, while the purple circle is an artifact.
Figure 14
Two people in a room, where the one on the left is affected by an artifact: the system loses track of the person for a few frames and, when it recovers (image on the right), the reconstruction is shifted towards the floor, with the spot created at a very low level that is incompatible with a standing person.
Figure 15
A single person standing (a,c) and sitting (b,d): in the second case, the point cloud is compacted onto the most reflective part of the body, the upper torso, which is why the concentration of points is localized above the person's center of gravity. (c,d) also show the corresponding confidence ellipses, with very different shapes and eccentricities: for the seated position, the ellipse resembles a circle.
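
A brief sketch of how such confidence ellipses can be derived from the covariance of the point cloud projected onto two axes; the 95% chi-square scaling and the projection choice are assumptions, as the paper may use different conventions.

    # Confidence ellipse of a 2-D point-cloud projection: semi-axes from
    # the covariance eigenvalues, orientation from the eigenvectors, and
    # eccentricity (near 0 for the circle-like seated case in Figure 15).
    import numpy as np

    def confidence_ellipse(points_2d, chi2_95=5.991):  # 95%, 2 degrees of freedom
        cov = np.cov(points_2d, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues ascending
        b, a = np.sqrt(eigvals * chi2_95)              # semi-minor, semi-major
        angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])  # major-axis direction
        ecc = np.sqrt(1.0 - (b / a) ** 2)
        return a, b, angle, ecc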
Figure 16
Output of the LSTM method considering the three postures. Sitting is shown in green, falling in blue, and the upright position in red. As above, the training sets are denoted by the fully colored circles, whereas the others denote the test sets.
Figure 17
Confusion matrices for all AI methods considering all postures for the 50-50 ratio between the training and test sets: (a) LSTM; (b) Bi-LSTM; (c) projected LSTM; (d) GRU.
Figure 18
Confusion matrices for all AI methods considering all postures for the 60-40 ratio between the training and test sets: (a) LSTM; (b) Bi-LSTM; (c) projected LSTM; (d) GRU.
Figure 19
Confusion matrices for all AI methods considering all postures for the 70-30 ratio between the training and test sets: (a) LSTM; (b) Bi-LSTM; (c) projected LSTM; (d) GRU.
Figure 20
Confusion matrices for all AI methods considering all postures for the 80-20 ratio between the training and test sets: (a) LSTM; (b) Bi-LSTM; (c) projected LSTM; (d) GRU.
Figure 21
Confusion matrices for all AI methods considering all postures for the 90-10 ratio between the training and test sets: (a) LSTM; (b) Bi-LSTM; (c) projected LSTM; (d) GRU. As is clearly shown, the results are very promising, with slightly better performance in the cases of LSTM and Bi-LSTM.
Figure 22
Confusion matrices for all AI methods considering only the seated and upright postures: (a) LSTM; (b) Bi-LSTM; (c) projected LSTM; (d) GRU. In this case, the results are even closer across methods than in Figure 17, possibly because a person lying on the floor after a fall does not assume a precise posture, while sitting and standing are more stable positions.
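
Matrices like these can be reproduced from per-sequence predictions with standard tooling; a minimal sketch with placeholder labels (the class names and toy predictions are assumptions, not the paper's data):

    # Building a confusion matrix like those in Figures 17-22 from
    # ground-truth and predicted posture labels (toy values shown).
    from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

    labels = ["standing", "sitting", "fallen"]           # assumed class names
    y_true = ["standing", "fallen", "sitting", "standing", "fallen"]
    y_pred = ["standing", "fallen", "sitting", "sitting", "fallen"]

    cm = confusion_matrix(y_true, y_pred, labels=labels)
    ConfusionMatrixDisplay(cm, display_labels=labels).plot()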
