Recurrent Neural Networks (RNN)


Recurrent Neural Networks (RNNs) are a type of neural network architecture designed for processing sequential data, such as text, speech, or time series. Unlike traditional feedforward networks, RNNs have recurrent connections that allow information to persist: at each step, the network's hidden state from the previous step is fed back in alongside the current input. This ability makes RNNs especially suitable for tasks where context from prior steps influences the current prediction, such as natural language processing (NLP) and speech recognition.
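The step-by-step recurrence described above can be sketched in a few lines of NumPy. This is a minimal illustration of a vanilla (Elman-style) RNN cell, not a production implementation; the weight names (`W_xh`, `W_hh`, `b_h`), the tanh activation, and the dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

input_size, hidden_size = 4, 3  # illustrative dimensions
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden-to-hidden (recurrent) weights
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One time step: h_prev carries context from all earlier steps."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Process a sequence of arbitrary length; the same weights are reused at every step.
sequence = rng.normal(size=(5, input_size))  # a toy sequence of 5 time steps
h = np.zeros(hidden_size)
for x_t in sequence:
    h = rnn_step(x_t, h)

# h is now a fixed-size summary of the entire variable-length sequence.
```

Because the loop reuses one set of weights, the same cell handles sequences of any length, which is the property the next paragraph relies on.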

Because the same weights are applied at every step, RNNs can handle sequences of varying lengths, learning patterns and dependencies over time. However, they struggle with long-range dependencies due to the vanishing-gradient problem, a limitation that motivated more advanced architectures such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRUs).