Natural Language Processing in Large-Scale Neural Models for Medical Screenings

Catharina Marie Stille et al. Front Robot AI. 2019 Aug 2;6:62. doi: 10.3389/frobt.2019.00062. eCollection 2019.
Abstract

Many medical screenings used for the diagnosis of neurological, psychological, or language and speech disorders access the language and speech processing system. Specifically, patients are asked to complete a task (perception) and then to give answers verbally or in writing (production). To analyze cognitive or higher-level linguistic impairments or disorders, it is thus expected that specific parts of the patient's language and speech processing system are working correctly, or verbal instructions are replaced by pictures (avoiding auditory perception) and oral answers by pointing (avoiding speech articulation). The first goal of this paper is to propose a large-scale neural model which comprises cognitive and lexical levels of the human neural system, and which is able to simulate the human behavior occurring in medical screenings. The second goal of this paper is to relate (microscopic) neural deficits introduced into the model to corresponding (macroscopic) behavioral deficits resulting from the model simulations. The Neural Engineering Framework and the Semantic Pointer Architecture are used to develop the large-scale neural model. Parts of two medical screenings are simulated: (1) a screening of word naming for the detection of developmental problems in lexical storage and lexical retrieval; and (2) a screening of cognitive abilities for the detection of mild cognitive impairment and early dementia. Both screenings include cognitive, language, and speech processing, and for both screenings the same model is simulated with and without neural deficits (physiological case vs. pathological case). While the simulation of both screenings results in the expected normal behavior in the physiological case, the simulations clearly show deviations in behavior, e.g., an increase in errors, in the pathological case. Moreover, specific types of neural dysfunctions resulting from different types of neural defects lead to differences in the type and strength of the observed behavioral deficits.

Keywords: behavioral testing; brain-behavior connection; detailed computer simulations of natural language processes; medical screenings; neurocomputational model; spiking neural networks.
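As a rough illustration of the modeling approach named in the abstract (the Neural Engineering Framework and the Semantic Pointer Architecture, as implemented in Nengo and nengo_spa), the following minimal sketch builds a toy production pathway of SPA buffers with a clean-up associative memory. The buffer names, vocabulary items, dimensionality, and routing are illustrative assumptions for this sketch only and are far simpler than the authors' actual model.

    import nengo
    import nengo_spa as spa

    D = 64  # S-pointer dimensionality (illustrative; the paper's model is much larger)

    vocab = spa.Vocabulary(dimensions=D)
    vocab.populate("WHEELBARROW; HAMMER; SAW")  # toy word corpus (assumed items)

    with spa.Network() as model:
        # Conceptual input: present the concept to be named (stands in for picture perception)
        stimulus = spa.Transcode("WHEELBARROW", output_vocab=vocab)

        # Conceptual buffer with recurrent feedback acting as a short-term memory
        concept = spa.State(vocab, feedback=1.0)

        # Clean-up (associative) memory from the concept level to the word level
        cleanup = spa.ThresholdingAssocMem(threshold=0.3, input_vocab=vocab,
                                           mapping=vocab.keys())

        # Word-level production buffer
        word = spa.State(vocab)

        stimulus >> concept
        concept >> cleanup
        cleanup >> word

        p_word = nengo.Probe(word.output, synapse=0.03)

    with nengo.Simulator(model) as sim:
        sim.run(0.5)

    # Similarity of the word buffer's decoded state to every vocabulary item over time,
    # analogous to the trajectories shown in the similarity plots below
    word_similarity = spa.similarity(sim.data[p_word], vocab)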


Figures

Figure 1
The functional architecture of our large-scale neural model used for two medical screenings (DemTect and WWT). Arrows indicate neural associations between buffers. Buffers within the perception and production pathways allow neural realizations (i.e., neural activation patterns) of S-pointers defined in the mental lexicon and mental syllabary (dashed arrows). S-pointer activity is passed from one buffer to the next within pathways and modules as well as between modules (solid arrows). Short-term memories (recursive buffers) are labeled in italics, while all other non-italic black words label non-recursive buffers. Neural associations including clean-up are marked by an extra word attached to the arrow. Different gateways (see the word "or" marked in green) are controlled by the task control module. The underlined words within the task control module represent specific neural submodules such as the basal ganglia and thalamus.
Figure 2
Example reproduction of a WWT 6–10 image (Glück, 2011), which serves as the basis for the word naming task.
Figure 3
(A) Similarity values of S-pointer activations occurring in different neural buffers over time during simulation of a picture naming task for "Schubkarre" ("wheelbarrow") in the physiological case. Rows indicate neural similarity values of different neural state buffers of our neural model over time (t). Each S-pointer similarity value over time is represented by a trajectory with a specific color. The similarity value of an S-pointer at a point in time is the dot product of that S-pointer with the unit vector in the direction of the most active S-pointer at that point in time. The number of colors is limited, so the same color may occur for different S-pointers. The height of the graph shows the strength of activation. All other buffers defined in the model are present but not shown in this figure for clarity. In the Phonological Production Buffer, the target word is displayed in phonetic form with the stressed syllable marked (phonetic transcription with SAMPA, 2005). In row five, words overlap because their activation levels are very similar; these are co-activations within the word corpus of our model. These items are linked by semantic or associative links to the target item. Furthermore, similarity plots are shown for semantic pointers activated in different output buffers of the neural model during the naming of "Schubkarre" ("wheelbarrow") for sample runs with (B) 60% ablation of the neurons within the concept production buffer; and (C) 80% ablation of the neurons within the associative memory realizing the neural association from the concept to the word production buffer. For the input buffers, refer to (A), since the task and input are the same.
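A minimal sketch of how similarity trajectories of this kind can be computed from probe data, assuming the plotted value is the dot product of each vocabulary S-pointer with the normalized decoded buffer state (which matches the caption's description whenever a single S-pointer dominates the buffer); the function and variable names are illustrative, not the authors' plotting code.

    import numpy as np

    def similarity_traces(decoded, vectors):
        """Similarity of every vocabulary S-pointer to the buffer state at each time step.

        decoded: (n_timesteps, D) decoded buffer activity, e.g. sim.data[probe]
        vectors: (n_pointers, D) vocabulary S-pointer vectors, e.g. vocab.vectors
        Returns an (n_timesteps, n_pointers) array of dot products, one trajectory per S-pointer.
        """
        norms = np.linalg.norm(decoded, axis=1, keepdims=True)
        unit_state = decoded / np.maximum(norms, 1e-12)  # avoid division by zero early in the run
        return unit_state @ vectors.T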
Figure 4
Number of correctly named words (number of correct items) as a function of the percentage of ablated neurons (A) within the concept production buffer and (B) within the concept-to-word clean-up associative memory.
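The ablation experiments summarized here silence a given percentage of neurons in a single buffer or associative memory. The paper does not spell out the ablation mechanism, so the following sketch shows one common way to ablate a proportion of an ensemble's neurons in a built Nengo simulator (zeroing their encoder rows and driving their bias strongly negative); the function name and the encoder/bias manipulation are assumptions of this sketch.

    import numpy as np
    import nengo

    def ablate_ensemble(ens, proportion, sim, rng=np.random, kill_bias=True):
        """Silence `proportion` of the neurons in ensemble `ens` inside a built simulator.

        Zeroing a neuron's encoder row removes its contribution to the decoded output;
        setting its bias strongly negative prevents it from firing at all.
        """
        n_ablate = int(ens.n_neurons * proportion)
        idx = rng.choice(ens.n_neurons, size=n_ablate, replace=False)

        encoders = sim.signals[sim.model.sig[ens]["encoders"]]
        encoders.setflags(write=True)
        encoders[idx] = 0.0
        encoders.setflags(write=False)

        if kill_bias:
            biases = sim.signals[sim.model.sig[ens.neurons]["bias"]]
            biases.setflags(write=True)
            biases[idx] = -1000.0
            biases.setflags(write=False)

For an SPA buffer such as spa.State, which is built from several ensembles, the same proportion would be ablated from each member ensemble (e.g., by looping over buffer.all_ensembles); sweeping the proportion and counting correct responses then yields curves like those in Figures 4 and 6.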
Figure 5
(A) Similarity plot for semantic pointers activated in different buffers of the neural model for word repetition (subtask 1 of DemTect) in the physiological case. Rows indicate neural similarity values of different neural state buffers over time (t). Each S-pointer similarity value over time is represented by a trajectory with a specific color. The similarity value of an S-pointer at a point in time is the dot product of that S-pointer with the unit vector in the direction of the most active S-pointer at that point in time. The number of colors is limited, so the same color may occur for different S-pointers. The height of the graph shows the strength of activation. All other buffers defined in the model are present but not shown in this figure for clarity. Furthermore, similarity plots are shown for semantic pointers activated in the Conceptual Production Buffer of the neural model for word repetition (subtask 1 of DemTect) for sample runs with (B) 30% ablation of the neurons within the conceptual input buffer; (C) 30% ablation of the neurons within the associative memory associating the concept-through and concept-output buffers; (D) 50% ablation of the neurons within the associative memory associating the concept-through and concept-output buffers; and (E) 3% ablation of the neurons within the memory buffer (mem). For the input buffers, refer again to (A), since the task and auditory input are the same.
Figure 6
Number of correctly named words (number of correct items) as a function of the percentage of ablated neurons (A) within the concept input buffer, (B) within the concept through-to-out clean-up associative memory, and (C) within the memory buffer.

