Demeke Endalie et al. PLoS One. 2022 Aug 18;17(8):e0273156.
doi: 10.1371/journal.pone.0273156. eCollection 2022.

Bi-directional long short term memory-gated recurrent unit model for Amharic next word prediction

Abstract

Next word prediction helps users write more accurately and quickly. It is especially important for Amharic, in which different characters are produced by pressing the same consonant together with different vowels, combinations of vowels, and special keys. We therefore present a Bi-directional Long Short Term Memory-Gated Recurrent Unit (BLSTM-GRU) network model for Amharic next word prediction. We evaluate the proposed model on 63,300 Amharic sentences, on which it achieves 78.6% accuracy. We also compare the proposed model with state-of-the-art models such as LSTM, GRU, and BLSTM; the experimental results show that the proposed model produces promising results.
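The abstract does not spell out the model's internals, but the GRU component it names has a standard formulation. As an illustration only (not the authors' implementation), the sketch below runs a single NumPy GRU cell over a toy token sequence and projects the final hidden state to vocabulary logits to pick a next-word candidate; all dimensions, weights, and names are hypothetical placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU time step. W, U, b stack the update (z), reset (r),
    and candidate (h~) parameters along the first axis."""
    Wz, Wr, Wh = W
    Uz, Ur, Uh = U
    bz, br, bh = b
    z = sigmoid(Wz @ x + Uz @ h + bz)              # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)  # candidate state
    return (1 - z) * h + z * h_tilde               # interpolated new state

# Toy dimensions, purely illustrative.
rng = np.random.default_rng(0)
d_in, d_h, vocab = 8, 16, 100
W = rng.normal(size=(3, d_h, d_in))
U = rng.normal(size=(3, d_h, d_h))
b = np.zeros((3, d_h))

h = np.zeros(d_h)
for _ in range(5):                     # run over a 5-token sequence
    x = rng.normal(size=d_in)          # stand-in for a word embedding
    h = gru_step(x, h, W, U, b)

# Project the final state to vocabulary logits; argmax gives the
# predicted next-word index (greedy decoding).
W_out = rng.normal(size=(vocab, d_h))
logits = W_out @ h
next_word_id = int(np.argmax(logits))
print(h.shape, next_word_id)
```

In the paper's setting, the input at each step would be an embedding of an Amharic token and the recurrent stack would also include the bidirectional LSTM layer named in the title; the gate arithmetic above is the textbook GRU recurrence.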


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Proposed next word prediction model.
Fig 2
Fig 2. Proposed BLSTM-GRU network model for Amharic next word prediction.
Fig 3
Fig 3. Training loss and training accuracy for 30 epochs.
Fig 4
Fig 4. Training and validation loss of the model for 30 epochs.
Fig 5
Fig 5. Sample predicted Amharic words from the provided phrase or sentence.
Fig 6
Fig 6. Comparison of the proposed model with LSTM, GRU, and BLSTM with training time.

