Review

A critical review of RNN and LSTM variants in hydrological time series predictions

Muhammad Waqas et al. MethodsX. 2024 Sep 12;13:102946. doi: 10.1016/j.mex.2024.102946. eCollection 2024 Dec.

Abstract

The rapid advancement of Artificial Intelligence (AI) and big data has made these methods increasingly significant in the water sector, particularly for hydrological time-series prediction. Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks have become research focal points because of their effectiveness in modeling non-linear, time-variant hydrological systems. This review examines the different architectures of RNNs, LSTMs, and Gated Recurrent Units (GRUs) and their efficacy in predicting hydrological time-series data.

• RNNs are foundational but suffer from limitations such as vanishing gradients, which impede their ability to model long-term dependencies. LSTMs and GRUs were developed to overcome these limitations: LSTMs use memory cells and gating mechanisms, while GRUs provide a more streamlined architecture with similar benefits.
• The integration of attention mechanisms, and of hybrid models that combine RNNs, LSTMs, and GRUs with other machine learning (ML) and deep learning (DL) techniques, has improved prediction accuracy by capturing both temporal and spatial dependencies.
• Despite their effectiveness, practical implementations of these models for hydrological time-series prediction require extensive datasets and substantial computational resources. Future research should develop interpretable architectures, enhance data quality, incorporate domain knowledge, and use transfer learning to improve model generalization and scalability across diverse hydrological contexts.
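
As a minimal, illustrative sketch (not from the review), the three architectures compared above can be instantiated side by side in PyTorch. The input is assumed to be a univariate hydrological series such as daily streamflow, shaped (batch, time, features); the hidden size and prediction head are arbitrary choices.

    # Illustrative sketch only: the three recurrent layers discussed in this
    # review, applied to a univariate series. Sizes are arbitrary.
    import torch
    import torch.nn as nn

    x = torch.randn(8, 365, 1)  # e.g., one year of daily streamflow per sample

    rnn = nn.RNN(input_size=1, hidden_size=32, batch_first=True)    # vanilla RNN
    lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)  # memory cell + gates
    gru = nn.GRU(input_size=1, hidden_size=32, batch_first=True)    # streamlined gating

    out_rnn, h_rnn = rnn(x)     # h_rnn: final hidden state
    out_lstm, (h, c) = lstm(x)  # the LSTM also carries a cell state c
    out_gru, h_gru = gru(x)     # the GRU has no separate cell state

    head = nn.Linear(32, 1)           # one-step-ahead prediction head
    y_hat = head(out_lstm[:, -1, :])  # predict the next value from the last step
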

Keywords: Artificial intelligence; Deep learning; Hydrological prediction; Long short-term memory; RNN and LSTM variants for time series prediction; Recurrent neural networks.


Conflict of interest statement

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Figures

Graphical abstract

Fig. 1. (a) Different DL models and (b) the iterative DL prediction process.
Fig. 2. Flow diagram of a recurrent neural network: the sequential processing of inputs, hidden states, and outputs across time steps.
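
The recurrence that Fig. 2 depicts can be written in a few lines; the sketch below (illustrative dimensions, untrained random weights) shows how the hidden state carries information across time steps.

    # Sketch of the recurrence in Fig. 2: h_t = tanh(W_x x_t + W_h h_{t-1} + b).
    # Dimensions are illustrative; in practice the weights would be learned.
    import torch

    T, n_in, n_hid = 10, 1, 4
    W_x = torch.randn(n_hid, n_in) * 0.1
    W_h = torch.randn(n_hid, n_hid) * 0.1
    b = torch.zeros(n_hid)

    h = torch.zeros(n_hid)                       # initial hidden state
    for t in range(T):
        x_t = torch.randn(n_in)                  # input at time step t
        h = torch.tanh(W_x @ x_t + W_h @ h + b)  # hidden state carries history forward
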
Fig. 3. Architectures of general FFNNs and RNNs.
Fig. 4. The vanishing gradient problem in RNNs.
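
The effect in Fig. 4 is easy to reproduce numerically; the sketch below (assuming PyTorch, with arbitrary sizes) shows the gradient reaching the first input shrinking as the sequence length grows.

    # Sketch of the vanishing gradient problem: with recurrent weights of
    # norm below one, the gradient reaching the first input decays as the
    # sequence length T grows. Values are illustrative, not from the paper.
    import torch

    torch.manual_seed(0)
    W = torch.randn(4, 4) * 0.2              # small recurrent weight matrix

    for T in (5, 20, 80):
        x0 = torch.randn(4, requires_grad=True)
        h = x0
        for _ in range(T):
            h = torch.tanh(W @ h)            # repeated squashing through tanh
        h.sum().backward()
        print(T, x0.grad.norm().item())      # gradient norm shrinks toward zero
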
Fig. 5. The difference between LSTM and RNN: (a) the RNN repeating module contains a single layer; (b) the LSTM repeating module contains four interacting layers.
Fig. 6. (a) Key components of an LSTM memory cell: input gates, output gates, forget gates, and input nodes, each playing a crucial role in regulating information flow. (b) The internal state of the memory cell maintains a record of past information, updated through its interactions with these gates. (c) The hidden state of the LSTM serves as its output, capturing the network's learned representations of the input sequence.
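
The gate interactions in Fig. 6 correspond to a handful of equations; the following sketch writes out one LSTM step by hand (biases omitted, dimensions illustrative; PyTorch's nn.LSTMCell packages the same computation).

    # Sketch of one LSTM step, matching the components in Fig. 6: forget gate f,
    # input gate i, input node g, output gate o, internal cell state c, and
    # hidden state h. Sizes are illustrative and biases are omitted.
    import torch

    n_in, n_hid = 1, 4
    def lin():                        # helper: a random affine map over [x_t, h]
        return torch.randn(n_hid, n_in + n_hid) * 0.1

    W_f, W_i, W_g, W_o = lin(), lin(), lin(), lin()

    x_t = torch.randn(n_in)
    h, c = torch.zeros(n_hid), torch.zeros(n_hid)
    z = torch.cat([x_t, h])           # concatenated input and hidden state

    f = torch.sigmoid(W_f @ z)        # forget gate: what to drop from c
    i = torch.sigmoid(W_i @ z)        # input gate: what to admit
    g = torch.tanh(W_g @ z)           # input node: candidate values
    o = torch.sigmoid(W_o @ z)        # output gate: what to expose

    c = f * c + i * g                 # (b) internal state keeps past information
    h = o * torch.tanh(c)             # (c) hidden state is the cell's output
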
Fig. 7. The difference between LSTM and GRU.
Fig. 8. Architecture of a Gated Recurrent Unit.
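
For comparison with the LSTM step above, here is one GRU step written out by hand (a sketch with illustrative sizes and biases omitted): the reset and update gates replace the LSTM's three gates, and there is no separate cell state.

    # Sketch of one GRU step, matching Fig. 8: reset gate r and update gate z
    # regulate how the single hidden state h is rewritten.
    import torch

    n_in, n_hid = 1, 4
    W_r = torch.randn(n_hid, n_in + n_hid) * 0.1
    W_z = torch.randn(n_hid, n_in + n_hid) * 0.1
    W_n = torch.randn(n_hid, n_in + n_hid) * 0.1

    x_t = torch.randn(n_in)
    h = torch.zeros(n_hid)

    r = torch.sigmoid(W_r @ torch.cat([x_t, h]))   # reset gate
    z = torch.sigmoid(W_z @ torch.cat([x_t, h]))   # update gate
    n = torch.tanh(W_n @ torch.cat([x_t, r * h]))  # candidate state
    h = (1 - z) * n + z * h                        # blend old and new state
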
Fig. 9. The architecture of a Bi-LSTM.
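
In a framework such as PyTorch the Bi-LSTM of Fig. 9 is a one-flag change; the sketch below (arbitrary sizes) reads the series in both directions and concatenates the two output streams.

    # Sketch of the Bi-LSTM in Fig. 9: one pass reads the series forward,
    # another backward, and the outputs are concatenated per time step.
    import torch
    import torch.nn as nn

    bilstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True,
                     bidirectional=True)
    x = torch.randn(8, 365, 1)
    out, _ = bilstm(x)        # out: (8, 365, 64), forward + backward halves
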
Fig. 10. The architecture of a stacked LSTM.
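
Stacking, as in Fig. 10, is likewise a single argument in common frameworks; the sketch below (arbitrary sizes) feeds each layer's hidden-state sequence into the next layer.

    # Sketch of the stacked LSTM in Fig. 10: each layer's hidden-state sequence
    # feeds the next layer; num_layers controls the depth of the stack.
    import torch
    import torch.nn as nn

    stacked = nn.LSTM(input_size=1, hidden_size=32, num_layers=3,
                      batch_first=True, dropout=0.2)
    x = torch.randn(8, 365, 1)
    out, (h, c) = stacked(x)  # h, c: (3, 8, 32), one state pair per layer
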
Fig. 11. The architecture of a convolutional LSTM (ConvLSTM) memory cell.
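
PyTorch has no built-in ConvLSTM, so the sketch below implements a minimal cell in the spirit of Fig. 11 (illustrative, not the paper's formulation): the affine maps of a standard LSTM are replaced by a convolution, so gates and states become feature maps that preserve spatial structure.

    # Minimal ConvLSTM cell sketch for Fig. 11: one convolution produces all
    # four gates at once, and the cell and hidden states are spatial maps.
    import torch
    import torch.nn as nn

    class ConvLSTMCell(nn.Module):
        def __init__(self, in_ch, hid_ch, k=3):
            super().__init__()
            self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

        def forward(self, x, h, c):
            gates = self.conv(torch.cat([x, h], dim=1))
            i, f, g, o = torch.chunk(gates, 4, dim=1)
            i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
            g = torch.tanh(g)
            c = f * c + i * g              # cell state is a spatial map
            h = o * torch.tanh(c)
            return h, c

    # usage: e.g., gridded rainfall fields on an illustrative 16x16 grid
    cell = ConvLSTMCell(in_ch=1, hid_ch=8)
    x = torch.randn(8, 1, 16, 16)
    h = torch.zeros(8, 8, 16, 16)
    c = torch.zeros(8, 8, 16, 16)
    h, c = cell(x, h, c)
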
