FLMatchQA: a recursive neural network-based question answering with customized federated learning model
- PMID: 38983225
- PMCID: PMC11232604
- DOI: 10.7717/peerj-cs.2092
Abstract
More sophisticated data access is possible with artificial intelligence (AI) techniques such as question answering (QA), but regulations and privacy concerns have limited their use. Federated learning (FL) addresses these problems and makes QA a viable AI application. This research examines hierarchical FL systems, along with an effective method for developing client-specific adapters. The User Modified Hierarchical Federated Learning Model (UMHFLM) selects local models for users' tasks. The article proposes a recurrent neural network (RNN) as a neural network (NN) technique for automatically learning to classify natural-language questions into the appropriate templates. Local and global models are developed together, with the global model influencing the local models, which are in turn combined for personalization. The method is applied in natural language processing pipelines for phrase matching, employing template exact match, segmentation, and answer type detection. The model was trained and evaluated on SQuAD 2.0, a deep learning-based QA benchmark, to learn complex SPARQL test questions and their corresponding SPARQL queries over the DBpedia dataset. Evaluated on SQuAD 2.0, the model identifies 38 distinct templates. Considering the top two most likely templates, the RNN model achieves template classification accuracies of 92.8% and 61.8% on the SQuAD 2.0 and QALD-7 datasets, respectively. A study of data scarcity among participants found that FLMatch significantly outperforms BERT: the MAP margin between BERT and FLMatch is 2.60% at a 100% data ratio, and the MRR margin is 7.23% at a 20% data ratio.
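The interplay the abstract describes, local models influenced by a global model and then combined for personalization, follows the general federated-averaging pattern. Below is a minimal sketch of that aggregation and personalization step; the function names, the size-weighted average, and the blending factor `alpha` are illustrative assumptions, not the paper's UMHFLM implementation.

```python
# Minimal federated-averaging sketch (illustrative; not the paper's code).
# Each client holds a local weight vector; the server combines them into a
# global model, which each client then blends back into its local model.

def federated_average(client_weights, client_sizes):
    """Average client weight vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

def personalize(local, global_model, alpha=0.5):
    """Blend the global model into a client's local model (alpha is assumed)."""
    return [alpha * l + (1 - alpha) * g for l, g in zip(local, global_model)]

# Two clients with 2-parameter models and unequal data sizes:
clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [10, 30]
global_model = federated_average(clients, sizes)  # -> [2.5, 3.5]
```

Weighting by dataset size means the client with more data pulls the global model toward its parameters, while the `alpha` blend preserves part of each client's local specialization.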
Keywords: Accuracy; Artificial intelligence; Data science; Exact match; F1 score; Federated learning; Machine learning; Natural language processing; Neural network; Question answering.
©2024 Saranya and Amutha.
Conflict of interest statement
The authors declare there are no competing interests.