Sensors (Basel). 2011;11(12):11581–604. doi: 10.3390/s111211581. Epub 2011 Dec 12.

Towards smart homes using low level sensory data

Asad Masood Khattak et al. Sensors (Basel). 2011.

Abstract

Ubiquitous Life Care (u-Life care) is receiving attention because it provides high-quality, low-cost care services. To provide spontaneous and robust healthcare services, knowledge of a patient's real-time daily life activities is required. Context information combined with real-time daily life activities can help to provide better services and to improve healthcare delivery. The performance and accuracy of existing life care systems are not reliable, even with a limited number of services. This paper presents a Human Activity Recognition Engine (HARE) that monitors human health as well as activities using heterogeneous sensor technology and processes these activities intelligently on a Cloud platform to provide improved care at low cost. We focus on activity recognition using video-based, wearable sensor-based, and location-based activity recognition engines, and then use intelligent processing to analyze the context of the activities performed. The experimental results of all the components showed good accuracy compared with existing techniques. The system is deployed on the Cloud for Alzheimer's disease patients (as a case study) with four activity recognition engines that identify low-level activities from the raw data captured by sensors. These are then manipulated using an ontology to infer higher-level activities and to make decisions about a patient's activity using patient profile information and customized rules.
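The abstract's pipeline — low-level activities recognized from raw sensor data, then lifted to higher-level activities and decisions via customized rules and a patient profile — can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' actual HARE or ontology implementation: the rule format, activity labels, and `PatientProfile` fields are all illustrative assumptions, and a plain rule table stands in for the ontology-based reasoning described in the paper.

```python
# Hypothetical sketch of rule-based inference from low-level to
# high-level activities, loosely following the pipeline described in
# the abstract. Not the authors' HARE implementation.
from dataclasses import dataclass, field


@dataclass
class PatientProfile:
    """Illustrative patient profile used by the decision rules."""
    name: str
    conditions: list = field(default_factory=list)


# Customized rules: a set of low-level activities implies a
# higher-level activity when all of them are observed together.
RULES = [
    ({"sitting", "facing_tv"}, "watching_tv"),
    ({"lying", "eyes_closed"}, "sleeping"),
]


def infer_high_level(low_level_activities):
    """Map low-level activities (e.g., from sensor engines) to a
    higher-level activity using the first rule whose premises hold."""
    observed = set(low_level_activities)
    for required, high_level in RULES:
        if required <= observed:  # all premises observed
            return high_level
    return "unknown"


def decide(high_level, profile):
    """Make a decision from the inferred activity and the profile,
    e.g., for an Alzheimer's patient as in the paper's case study."""
    if high_level == "watching_tv" and "alzheimers" in profile.conditions:
        return "turn_on_tv"
    return "no_action"


if __name__ == "__main__":
    profile = PatientProfile("subject-1", ["alzheimers"])
    activity = infer_high_level(["sitting", "facing_tv"])
    print(activity, decide(activity, profile))
```

In the paper this mapping is performed with an ontology rather than a flat rule table, which lets new activities and rules be added declaratively; the sketch only shows the shape of the inference step.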

Keywords: accelerometer; activity recognition; location sensor; u-healthcare; video sensor.


Figures

Figure 1.
Architecture of Human Activity Recognition Engine (HARE).
Figure 2.
(A) Sample segmentation of an inhomogeneous body-shape object using active contours: (a) initial contour; (b) result of the CV AC [12]; and (c) result of our approach. (B) Architecture of our approach for motion feature extraction and recognition.
Figure 3.
(A) Quantization module; (B) Training and testing module.
Figure 4.
(A) Average time needed for computing 100 gradients compared with [18]; (B) comparison of recognition results; (C) recognized routines for a single day.
Figure 5.
Test bed for u-Life Care and deployment of the Zio Access Point (WLB5254AP) to capture the location of the subject (red dot).
Figure 6.
Precision and recall of CAME for matchmaking vs. the number of experiments performed.
Figure 7.
Comparison between CAME and extended CAME precision for 12 different experiments with increasing numbers of activities.
Figure 8.
(A) The overall scenario designed at a patient's home; the other images are captured from the video demonstration of SC3 for Alzheimer's patients. (B) The patient is sitting and looking towards the TV; SC3 detects his posture and turns on the TV for him. In (C), (D), (E), and (F), the body sensor detects that the patient is performing eating, tooth-brushing, reading, and medicine-taking activities. During the reading activity, the system generates a reminder for the patient to exercise, and in (G) the subject is exercising, which is detected by camera-based sensors. In (H), (I), and (J), the doctor and nurse discuss the patient's condition and revise the patient's medications.

References

    1. Buyya R., Yeo C.S., Venugopal S., Broberg J., Brandic I. Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility. Future Gener. Comput. Syst. 2009;25:599–616.
    2. Le X.H., Lee S., Truc P., Vinh L.T., Khattak A.M., Han M., Hung D.V., Hassan M.M., Kim M., Koo K.-H., Lee Y.-K., Huh E.-N. Secured WSN-integrated cloud computing for u-life care. Proceedings of the 7th IEEE Consumer Communications and Networking Conference (CCNC); Las Vegas, NV, USA. 9–12 January 2010.
    3. Khattak A.M., Vinh L.T., Hung D.V., Truc P.T.H., Hung L.X., Guan D., Pervez Z., Han M., Lee S.Y., Lee Y.K. Context-aware human activity recognition and decision making. Proceedings of the 12th International Conference on e-Health Networking, Application Services (IEEE HealthCom 2010); Lyon, France. July 2010.
    4. Wang F., Turner K.J. An ontology-based actuator discovery and invocation framework in home care systems. Proceedings of the 7th International Conference on Smart Homes and Health Telematics; Berlin, Germany. June 2009; pp. 66–73.
    5. Henricksen K., Indulska J. Modelling and using imperfect context information. Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications Workshops; Washington, DC, USA. March 2004.
