Infusing Expert Knowledge Into a Deep Neural Network Using Attention Mechanism for Personalized Learning Environments
- PMID: 35719689
- PMCID: PMC9203682
- DOI: 10.3389/frai.2022.921476
Abstract
Machine learning models are biased toward the data seen during training: they tend to perform well on classes with many examples and poorly on classes with few. This problem typically arises when the classes to predict are imbalanced, which is frequent in educational data where, for example, some skills are very difficult or very easy to master. There are few examples of students who correctly answer questions on difficult skills, or who incorrectly answer questions on easy ones. In this paper, we tackle this problem with a hybrid architecture for user modeling that combines Deep Neural Network architectures, especially Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN), with expert knowledge. The proposed solution uses an attention mechanism to infuse expert knowledge into the Deep Neural Network. It has been tested in two contexts: knowledge tracing in an intelligent tutoring system (ITS) called Logic-Muse, and prediction of socio-moral reasoning in a serious game called MorALERT. The proposed solution is compared to state-of-the-art machine learning solutions, and experiments show that the resulting model can accurately predict a student's current knowledge state (in Logic-Muse), enabling accurate personalization of the learning process. Further experiments show that the model can also predict the level of socio-moral reasoning skills (in MorALERT). Our findings suggest the need for hybrid neural networks that integrate prior expert knowledge, especially to compensate for the strong dependency of deep learning methods on dataset size and for imbalanced datasets. Many domains can benefit from such an approach to building models that generalize even with little training data.
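The abstract does not specify how the attention-based fusion is implemented; as a minimal illustrative sketch (the function name, dimensions, and random weights below are hypothetical assumptions, not the authors' architecture), a network's hidden state could act as the query attending over expert-coded skill features, with the attended context concatenated back onto the hidden state:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_with_expert_knowledge(h, expert, W_q, W_k, W_v):
    """Attend from a hidden state (query) over expert-knowledge
    feature rows (keys/values); return the hidden state augmented
    with the attended expert context, plus the attention weights."""
    q = h @ W_q                               # query, shape (d_q,)
    K = expert @ W_k                          # keys, shape (n_rules, d_q)
    V = expert @ W_v                          # values, shape (n_rules, d_v)
    scores = K @ q / np.sqrt(q.shape[-1])     # scaled dot-product scores
    alpha = softmax(scores)                   # weights over expert features
    context = alpha @ V                       # weighted expert context, (d_v,)
    return np.concatenate([h, context]), alpha

rng = np.random.default_rng(0)
d_h, d_q, d_v, d_e, n_rules = 8, 4, 4, 6, 5
h = rng.normal(size=d_h)                  # e.g. an LSTM hidden state for a student
expert = rng.normal(size=(n_rules, d_e))  # one row per expert-coded skill feature
W_q = rng.normal(size=(d_h, d_q))
W_k = rng.normal(size=(d_e, d_q))
W_v = rng.normal(size=(d_e, d_v))

fused, alpha = fuse_with_expert_knowledge(h, expert, W_q, W_k, W_v)
```

The concatenated `fused` vector would then feed the downstream prediction layer, so the gradient can learn which expert features matter for each student state.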
Keywords: attention; deep learning; expert knowledge; hybrid neural networks; logical reasoning skill; socio-moral reasoning skill; user modeling.
Copyright © 2022 Tato and Nkambou.
Conflict of interest statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.