IEEE Internet Things J. 2020 Aug 3;8(12):9603-9610. doi: 10.1109/JIOT.2020.3013710. eCollection 2021 Jun 15.

Adversarial Examples-Security Threats to COVID-19 Deep Learning Systems in Medical IoT Devices

Abdur Rahman et al. IEEE Internet Things J.

Abstract

Medical IoT devices are rapidly becoming part of management ecosystems for pandemics such as COVID-19. Existing research shows that deep learning (DL) algorithms have been used successfully to identify COVID-19 phenomena in raw data obtained from medical IoT devices. Examples include radiological media, such as CT scan and X-ray images, body temperature measurement using thermal cameras, safe social distancing identification using live face detection, and face mask detection from camera images. However, researchers have identified several vulnerabilities of DL algorithms to adversarial perturbations. In this article, we test a number of COVID-19 diagnostic methods that rely on DL algorithms against relevant adversarial examples (AEs). Our results show that DL models lacking defenses against adversarial perturbations remain vulnerable to adversarial attacks. Finally, we present in detail the AE generation process, the implementation of the attack model, and the perturbations of existing DL-based COVID-19 diagnostic applications. We hope this work will raise awareness of adversarial attacks and encourage others to safeguard DL models in healthcare systems from such attacks.
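
To make the attack concrete, the following is a minimal sketch of untargeted adversarial-example generation using the fast gradient sign method (FGSM) in PyTorch. It illustrates the general technique only, not the paper's exact attack; model, image, label, and epsilon are hypothetical placeholders for a COVID-19 classifier, its input batch, the true labels, and the perturbation budget.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.03):
        # Track gradients with respect to the input image itself.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the direction that increases the loss, bounded by epsilon.
        adv_image = image + epsilon * image.grad.sign()
        # Keep pixel values in a valid range.
        return adv_image.clamp(0.0, 1.0).detach()

For images scaled to [0, 1], even a small epsilon such as 0.03 is often enough to flip a model's prediction while the perturbation remains imperceptible to a human observer, which is the failure mode the figures below illustrate.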

Keywords: Adversarial examples (AEs); COVID-19; deep learning (DL); medical IoT.


Figures

Fig. 1.
Illustration of an AE as malware on a medical IoT application.
Fig. 2.
Design of an AE to generate false acceptances (a truly negative sample is misclassified as COVID-19 positive) and false rejections (a truly positive COVID-19 sample is labeled negative).
Fig. 3.
Illustration of an AE fooling a DL algorithm into either false rejection or false acceptance.
Fig. 4.
(Top two rows, red circles) Six DL-based COVID-19 applications tested within the scope of this research, with the normal recognition rate shown inside the red circles. (Bottom two rows, blue circles) After a DL-based adversarial perturbation attack, the recognition ability of the DL algorithms is compromised, though human experts can still recognize the actual class.
Fig. 5.
Illustration of a nontargeted AE fooling different COVID-19 diagnostic measures: (a) a DL model recognizing COVID-19 from X-ray images; (b) a DL model recognizing COVID-19 from CT scan images; and (c) a face mask recognition process.
Fig. 6.
Illustration of a targeted AE attacking a DL-based QR code generation system to alter COVID-19 test results to a target color, i.e., green, red, or yellow (a sketch of the targeted gradient step follows the figure list).
Fig. 7.
Test results of AE generation for attacks on radiological media, such as X-ray and CT scan images: batch iterations during (a) epoch 0, (b) epoch 1, (c) epoch 2, and (d) epoch 3, and (e) final adversarial loss and distortion values.
Fig. 8.
Test results of AE generation: effect of [formula] values.
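
The targeted attack of Fig. 6 differs from the untargeted sketch above only in the sign of the gradient step: instead of increasing the loss for the true label, the attacker decreases it for a chosen target class (e.g., a desired QR-code color). A minimal sketch under the same hypothetical assumptions as before:

    import torch
    import torch.nn.functional as F

    def targeted_fgsm(model, image, target_label, epsilon=0.03):
        # Gradient of the loss with respect to the input, computed for
        # the attacker-chosen target class rather than the true class.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), target_label)
        loss.backward()
        # Step down the loss so the prediction moves toward the target class.
        adv_image = image - epsilon * image.grad.sign()
        return adv_image.clamp(0.0, 1.0).detach()

In practice the step is often iterated with a small per-step epsilon and the total perturbation clipped, but even a single step conveys why the targeted misclassification in Fig. 6 is feasible.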

