Attention and Transformers
Sequence processing and recurrent neural networks¶
Many tasks require processing sequences rather than fixed-size vectors:
Language modeling: predict the next word or character.
Machine translation: map an input sentence to an output sentence.
Speech recognition, time series prediction, control problems, etc.
A recurrent neural network (RNN) processes a sequence step by step, maintaining a hidden state that summarizes the past.
A simple (vanilla) RNN cell:
Hidden state update:
$$h_t = f(W_{hh} h_{t-1} + W_{xh} x_t + b_h),$$
where $f$ is a nonlinearity (e.g. $\tanh$ or ReLU).
Output at time $t$:
$$y_t = g(W_{hy} h_t + b_y),$$
where $g$ is typically a softmax for classification or the identity for regression.
The same parameters are used at every time step.
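To make the recurrence concrete, here is a minimal NumPy sketch of a vanilla RNN cell applied over a toy sequence. The weight names, shapes, and random initialization are illustrative assumptions, not a particular library's API.

```python
# Minimal vanilla RNN cell in NumPy (illustrative sketch; shapes and names are assumptions).
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One recurrent update: h_t = tanh(W_hh @ h_prev + W_xh @ x_t + b_h)."""
    return np.tanh(W_hh @ h_prev + W_xh @ x_t + b_h)

d_in, d_h, T = 8, 16, 5
rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.1, size=(d_h, d_in))
W_hh = rng.normal(scale=0.1, size=(d_h, d_h))
b_h = np.zeros(d_h)

h = np.zeros(d_h)                      # initial hidden state
xs = rng.normal(size=(T, d_in))        # a toy input sequence
for x_t in xs:                         # the same parameters are reused at every step
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
print(h.shape)                         # (16,)
```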
RNNs support different input–output patterns:
One-to-many: image captioning (one image → sequence of words),
Many-to-one: sentiment classification (sequence → one label),
Many-to-many: sequence labeling, translation (sequence → sequence).
Training RNNs and backpropagation through time¶
To train an RNN, we define a loss over the entire sequence:
$$L = \sum_{t=1}^{T} L_t(\hat{y}_t, y_t).$$
For example, in language modeling, sum the cross-entropy over all time steps:
$$L = -\sum_{t=1}^{T} \log p_\theta(x_{t+1} \mid x_1, \dots, x_t).$$
Training uses backpropagation through time (BPTT):
Unroll the RNN over all time steps $t = 1, \dots, T$.
Perform a forward pass to compute all hidden states and outputs.
Backpropagate gradients from the final time step back to the beginning.
Because the gradient has to pass through many repeated multiplications by $W_{hh}$ (and the Jacobians of the nonlinearities), we get:
Vanishing gradients when the eigenvalues of $W_{hh}$ are mostly less than 1 in magnitude.
Exploding gradients when they are greater than 1 in magnitude.
As a consequence:
Simple RNNs struggle to learn long-term dependencies (information far back in time).
They can, in principle, represent such dependencies, but learning them with gradient descent is difficult.
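The effect of these repeated multiplications is easy to see numerically. The toy sketch below pushes a stand-in "gradient" vector backward through 50 steps of a linear recurrence; the weight matrices are artificial (scaled identities), chosen only to show the two regimes.

```python
# Toy illustration of vanishing vs. exploding gradients under repeated multiplication
# by W_hh (a sketch; real BPTT also involves the nonlinearity's Jacobian at each step).
import numpy as np

rng = np.random.default_rng(0)
d = 16
g = rng.normal(size=d)                     # stand-in for a gradient vector

for scale in (0.9, 1.1):                   # spectral radius below / above 1
    W_hh = scale * np.eye(d)
    v = g.copy()
    for _ in range(50):                    # 50 time steps of backpropagation
        v = W_hh.T @ v
    print(scale, np.linalg.norm(v))        # ~0.9**50 * ||g|| vs ~1.1**50 * ||g||
```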
Long-term dependencies and gated RNNs: LSTM and GRU¶
To address vanishing and exploding gradients, gated RNN architectures were introduced.
Long Short-Term Memory (LSTM)¶
An LSTM maintains:
A cell state $c_t$ for long-term memory,
A hidden state $h_t$ for short-term / working memory.
At each time step, it uses gates to control information flow:
Forget gate: $f_t = \sigma(W_f [h_{t-1}, x_t] + b_f)$,
Input gate: $i_t = \sigma(W_i [h_{t-1}, x_t] + b_i)$,
Candidate cell state: $\tilde{c}_t = \tanh(W_c [h_{t-1}, x_t] + b_c)$,
Output gate: $o_t = \sigma(W_o [h_{t-1}, x_t] + b_o)$.
Update equations (a code sketch follows the key properties below):
$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \qquad h_t = o_t \odot \tanh(c_t),$$
where $\sigma$ is the logistic sigmoid and $\odot$ denotes element-wise multiplication.
Key properties:
The cell state has an additive update, which helps gradients flow over long time spans.
Gates learn to keep, forget, or overwrite information.
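Here is a minimal NumPy sketch of one LSTM step following the gate equations above. Packing all four gate pre-activations into a single weight matrix acting on $[h_{t-1}, x_t]$ is an implementation assumption for compactness, not a requirement of the model.

```python
# Minimal LSTM cell step in NumPy (a sketch with assumed weight layout; not a framework API).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """W maps the concatenation [h_prev, x_t] to the four gate pre-activations."""
    z = W @ np.concatenate([h_prev, x_t]) + b
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)   # forget, input, output gates
    g = np.tanh(g)                                 # candidate cell state
    c = f * c_prev + i * g                         # additive cell-state update
    h = o * np.tanh(c)                             # short-term / working memory
    return h, c

d_in, d_h = 8, 16
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4 * d_h, d_h + d_in))
b = np.zeros(4 * d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
h, c = lstm_step(rng.normal(size=d_in), h, c, W, b)
```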
Gated Recurrent Unit (GRU)¶
The GRU is a simpler gated RNN without a separate cell state:
Reset (or relevance) gate: $r_t = \sigma(W_r [h_{t-1}, x_t] + b_r)$,
Candidate hidden state: $\tilde{h}_t = \tanh(W_h [r_t \odot h_{t-1}, x_t] + b_h)$,
Update gate: $z_t = \sigma(W_z [h_{t-1}, x_t] + b_z)$,
Update (a code sketch follows this subsection):
$$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t.$$
Compared to LSTM:
Fewer gates and parameters,
No explicit cell state; hidden state carries both long- and short-term information.
Both LSTMs and GRUs significantly improve the ability to learn long-term dependencies compared to vanilla RNNs.
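A matching NumPy sketch of one GRU step is given below. The weight shapes, concatenation-based layout, and the particular interpolation convention $h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$ are assumptions consistent with the equations above; some references swap the roles of $z_t$ and $1 - z_t$.

```python
# Minimal GRU cell step in NumPy, matching the update equations above (a sketch, not a library implementation).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, W_r, W_z, W_h, b_r, b_z, b_h):
    hx = np.concatenate([h_prev, x_t])
    r = sigmoid(W_r @ hx + b_r)                           # reset gate
    z = sigmoid(W_z @ hx + b_z)                           # update gate
    h_cand = np.tanh(W_h @ np.concatenate([r * h_prev, x_t]) + b_h)
    return (1.0 - z) * h_prev + z * h_cand                # interpolated update

d_in, d_h = 8, 16
rng = np.random.default_rng(0)
shape = (d_h, d_h + d_in)
W_r, W_z, W_h = (rng.normal(scale=0.1, size=shape) for _ in range(3))
b_r = b_z = b_h = np.zeros(d_h)
h = gru_step(rng.normal(size=d_in), np.zeros(d_h), W_r, W_z, W_h, b_r, b_z, b_h)
```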
Multi-layer RNNs¶
RNNs can be stacked to form deep recurrent networks:
The hidden state of layer $\ell$ at time $t$, $h_t^{(\ell)}$, becomes the input to layer $\ell + 1$ at the same time step.
For example, with $L$ layers:
$$h_t^{(1)} = f\big(W_{hh}^{(1)} h_{t-1}^{(1)} + W_{xh}^{(1)} x_t\big), \qquad h_t^{(2)} = f\big(W_{hh}^{(2)} h_{t-1}^{(2)} + W_{xh}^{(2)} h_t^{(1)}\big),$$
and so on up to $h_t^{(L)}$.
Benefits:
Higher layers can capture more abstract features of the sequence.
Deep RNNs (with LSTM or GRU units) often perform better than single-layer ones.
In practice:
High-performing RNN-based models often use a small number of recurrent layers (e.g. 2–4),
Not nearly as deep as modern convolutional or transformer-based architectures.
Word embeddings and distributional semantics¶
Discrete words are often represented as one-hot vectors:
A vocabulary of size $V$,
Word $i$ is represented by a vector $e_i \in \{0, 1\}^V$ with a single 1 and the rest 0.
Problems with one-hot encoding:
No notion of similarity between words,
Vectors are high-dimensional and sparse.
Word embeddings map words to dense vectors:
Learn an embedding matrix $E \in \mathbb{R}^{V \times d}$,
Word $i$ is represented as the dense vector $E_i$ (a row of $E$),
The dimension $d$ is typically in the hundreds or thousands (e.g. 300, 768, 1536, 3072).
Distributional semantics:
“You shall know a word by the company it keeps.”
Words are embedded so that those appearing in similar contexts have similar vectors (high dot product or cosine similarity).
Embeddings are learned by:
Training language models or skip-gram / CBOW models,
Or as part of larger architectures (e.g. seq2seq, transformers).
These embeddings serve as the input representation for RNNs and transformers.
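As a toy illustration of embedding lookup and similarity, the sketch below uses a randomly initialized embedding matrix; the vocabulary size, dimension, and word ids are hypothetical, and in practice $E$ would be learned so that related words end up with high cosine similarity.

```python
# Embedding lookup and cosine similarity between word vectors (toy sketch with made-up values).
import numpy as np

V, d = 10_000, 300                       # vocabulary size and embedding dimension
rng = np.random.default_rng(0)
E = rng.normal(size=(V, d))              # embedding matrix, one row per word

def embed(word_id):
    return E[word_id]                    # dense vector replacing a one-hot vector

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

cat, dog = embed(17), embed(42)          # hypothetical word ids
print(cosine(cat, dog))                  # related words should score near 1 after training
```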
Sequence-to-sequence models and neural machine translation¶
In neural machine translation (NMT), we model the conditional probability $p(y \mid x)$ of a target sentence $y = (y_1, \dots, y_{T'})$ given a source sentence $x = (x_1, \dots, x_T)$.
An RNN encoder–decoder model works as follows.
Encoder¶
The encoder RNN reads the source sequence and produces hidden states:
$$h_t = f(h_{t-1}, x_t), \qquad t = 1, \dots, T.$$
The encoder summarizes the source sequence into a context vector $c$:
$$c = q(h_1, \dots, h_T),$$
for example by taking the final hidden state ($c = h_T$) or using a more complex aggregation.
Decoder (basic model without attention)¶
The decoder is another RNN that generates the target sequence word by word:
$$p(y) = \prod_{t=1}^{T'} p(y_t \mid y_1, \dots, y_{t-1}, c),$$
with each conditional modeled as
$$p(y_t \mid y_{<t}, c) = g(y_{t-1}, s_t, c),$$
where $s_t$ is the decoder hidden state, updated by
$$s_t = f(s_{t-1}, y_{t-1}, c).$$
Limitations:
The context vector $c$ is a fixed-size bottleneck summarizing the entire source sentence.
For long sentences, compressing all information into a single vector can limit performance.
Attention mechanisms were introduced to solve this bottleneck.
Encoder–decoder with attention (align and translate)¶
Instead of using a single context vector for all target words, attention-based models compute a separate context vector $c_i$ for each target position $i$.
For each target word $y_i$:
Decoder hidden state: $s_i = f(s_{i-1}, y_{i-1}, c_i)$,
Conditional probability: $p(y_i \mid y_{<i}, x) = g(y_{i-1}, s_i, c_i)$.
Context vector as a weighted sum of encoder states¶
Let $h_1, \dots, h_T$ be encoder annotations (e.g. from a bidirectional RNN). The context vector is
$$c_i = \sum_{j=1}^{T} \alpha_{ij} h_j,$$
where the attention weights $\alpha_{ij}$ describe how much the decoder at position $i$ focuses on encoder position $j$.
Weights are computed as
$$\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{T} \exp(e_{ik})},$$
where $e_{ij} = a(s_{i-1}, h_j)$ is an alignment score between decoder state $s_{i-1}$ and encoder state $h_j$.
Here, $a$ is a small neural network (e.g. a feed-forward network).
Interpretation:
Attention learns soft alignments between source and target tokens.
The decoder directly looks back at all encoder states, solving the fixed bottleneck problem.
The attention weights provide interpretable alignment maps.
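To make the weighted sum concrete, here is a NumPy sketch of a single decoder step: alignment scores from a small feed-forward scorer, softmax weights, and the resulting context vector. The particular additive form of the scorer and all weight values are assumptions for illustration.

```python
# One decoder step of attention over encoder states (a sketch; scorer form and shapes are assumed).
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

T, d = 6, 16
rng = np.random.default_rng(0)
H = rng.normal(size=(T, d))              # encoder annotations h_1 .. h_T
s_prev = rng.normal(size=d)              # previous decoder state s_{i-1}

# a(s, h): a tiny one-hidden-layer scorer (additive-attention style)
W_s, W_h, v = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)
scores = np.array([v @ np.tanh(W_s @ s_prev + W_h @ h_j) for h_j in H])  # e_{ij}

alpha = softmax(scores)                  # attention weights over source positions
c_i = alpha @ H                          # context vector for target position i
```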
General attention mechanism: queries, keys, values¶
A general way to view attention:
We are given a set of values $v_1, \dots, v_n$ indexed by keys $k_1, \dots, k_n$.
We have a query $q$.
Attention returns a weighted sum of the values, where weights depend on how well the keys match the query.
Analogy: a hashtable or key–value store:
Keys $k_i$ index values $v_i$,
The query $q$ asks “which values are relevant now?”.
Mathematically:
Compute a score between the query and each key, e.g. using a similarity function $s(q, k_i)$:
Cosine similarity: $s(q, k_i) = \dfrac{q^\top k_i}{\|q\|\,\|k_i\|}$.
Convert scores into a probability distribution via softmax:
$$\alpha_i = \frac{\exp\big(\beta\, s(q, k_i)\big)}{\sum_{j} \exp\big(\beta\, s(q, k_j)\big)},$$
where $\beta$ is a scaling parameter.
Compute the attention output as a weighted sum:
$$\text{attn}\big(q, \{(k_i, v_i)\}\big) = \sum_i \alpha_i v_i.$$
Properties:
Produces a fixed-size representation regardless of the number of values.
The output is a selective summary of the values, determined by the query.
In neural models, queries, keys, and values are learned vectors.
Self-attention¶
In self-attention, queries, keys, and values all come from the same sequence.
Example:
Input sequence of token embeddings: $x_1, \dots, x_n$.
For each position $i$, we compute:
$$q_i = W^Q x_i, \qquad k_i = W^K x_i, \qquad v_i = W^V x_i,$$
where $W^Q$, $W^K$, $W^V$ are learned matrices.
Intuition:
Each position in the sequence attends to other positions to gather relevant information.
Self-attention can capture dependencies between tokens regardless of distance (short or long).
Self-attention was first used inside RNN architectures (e.g. adding a memory tape), but in transformers it becomes the core building block without recurrence.
Vectorized self-attention and scaled dot-product attention¶
Given a sequence of $n$ input vectors stacked as rows in a matrix $X \in \mathbb{R}^{n \times d}$:
Compute queries, keys, and values:
$$Q = X W^Q, \qquad K = X W^K, \qquad V = X W^V,$$
where $W^Q, W^K, W^V \in \mathbb{R}^{d \times d_k}$ (or similar).
Compute unnormalized attention scores via dot products:
$$S = Q K^\top,$$
where $S_{ij}$ is the score of token $i$ attending to token $j$.
Apply softmax row-wise to get attention weights:
$$A = \text{softmax}(S),$$
so each row of $A$ sums to 1.
Compute the attention output:
$$Z = A V.$$
This is often written compactly as:
$$\text{Attn}(Q, K, V) = \text{softmax}(Q K^\top)\, V.$$
Scaled dot-product attention¶
For large $d_k$, dot products can have large variance, making the softmax too peaked or unstable.
To stabilize, divide by $\sqrt{d_k}$:
$$\text{Attn}(Q, K, V) = \text{softmax}\!\left(\frac{Q K^\top}{\sqrt{d_k}}\right) V.$$
This is the scaled dot-product attention used in transformers.
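The vectorized computation above fits in a few lines of NumPy. In this sketch the projection matrices `W_Q`, `W_K`, `W_V` are random stand-ins for learned parameters, and the dimensions are arbitrary.

```python
# Vectorized scaled dot-product self-attention in NumPy (a sketch with random stand-in weights).
import numpy as np

def softmax_rows(S):
    S = S - S.max(axis=-1, keepdims=True)
    e = np.exp(S)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, W_Q, W_K, W_V):
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    d_k = Q.shape[-1]
    S = Q @ K.T / np.sqrt(d_k)          # scaled dot-product scores, shape (n, n)
    A = softmax_rows(S)                 # each row sums to 1
    return A @ V                        # weighted sums of value vectors

n, d, d_k = 5, 32, 16
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))
W_Q, W_K, W_V = (rng.normal(scale=d**-0.5, size=(d, d_k)) for _ in range(3))
Z = self_attention(X, W_Q, W_K, W_V)    # shape (n, d_k)
```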
Attention plus feed-forward layers¶
Self-attention alone is a linear operation with respect to the values (no element-wise non-linearities inside).
To enhance expressiveness, transformers add a position-wise feed-forward network after attention:
For each position $i$:
Take the attention output vector $z_i$.
Apply a small MLP (often two linear layers with a nonlinearity in between):
$$\text{FFN}(z_i) = W_2\, \phi(W_1 z_i + b_1) + b_2,$$
where $\phi$ is typically ReLU or GELU.
This feed-forward network operates independently at each position, but with shared parameters across positions.
Thus a transformer layer combines:
Multi-head self-attention for contextual mixing across positions,
Position-wise feed-forward networks for nonlinear transformations at each position.
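The position-wise FFN is just a two-layer MLP applied to every row of the attention output with shared weights. In the sketch below, the hidden width of $4d$ follows common practice but is an assumption here, as are the random weights.

```python
# Position-wise feed-forward network applied independently at each position (a sketch).
import numpy as np

def ffn(Z, W1, b1, W2, b2):
    H = np.maximum(0.0, Z @ W1 + b1)     # ReLU nonlinearity
    return H @ W2 + b2                   # project back to the model dimension

n, d = 5, 32
rng = np.random.default_rng(0)
Z = rng.normal(size=(n, d))              # attention outputs, one row per position
W1, b1 = rng.normal(scale=0.1, size=(d, 4 * d)), np.zeros(4 * d)
W2, b2 = rng.normal(scale=0.1, size=(4 * d, d)), np.zeros(d)
out = ffn(Z, W1, b1, W2, b2)             # same parameters shared across positions
```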
Residual connections and layer normalization¶
To train deep transformer stacks effectively, two key techniques are used:
Residual connections¶
Instead of learning a mapping $x \mapsto F(x)$ directly, layers learn a residual function and add the input back:
$$y = x + F(x).$$
In transformer layers, residual connections wrap both the attention and the feed-forward sublayers:
add $X + \text{MultiHeadAttn}(X)$,
then add $Z + \text{FFN}(Z)$.
Residual connections help:
Maintain information as it flows through many layers,
Improve gradient flow during backpropagation.
Layer normalization¶
Layer normalization normalizes the activations across the features of a layer for each example:
For a vector $z \in \mathbb{R}^d$ (e.g. the features at a given position), compute mean and variance:
$$\mu = \frac{1}{d} \sum_{i=1}^{d} z_i, \qquad \sigma^2 = \frac{1}{d} \sum_{i=1}^{d} (z_i - \mu)^2.$$
Normalize and rescale:
$$\hat{z}_i = \gamma_i\, \frac{z_i - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta_i,$$
with learnable parameters $\gamma$ and $\beta$ (and a small constant $\epsilon$ for numerical stability).
Layer normalization:
Stabilizes training by reducing internal covariate shift,
Replaces batch normalization in transformer-style models (works well with variable-length sequences and small batches).
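The two techniques combine into the pattern $x \mapsto \text{LayerNorm}(x + \text{Sublayer}(x))$. The sketch below shows this wiring in NumPy; the stand-in sublayer is a random linear map used only to make the snippet runnable.

```python
# Layer normalization plus a residual connection around a sublayer (a NumPy sketch).
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

n, d = 5, 32
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))
gamma, beta = np.ones(d), np.zeros(d)

sublayer = lambda x: x @ rng.normal(scale=0.1, size=(d, d))   # stand-in for attention or FFN
Y = layer_norm(X + sublayer(X), gamma, beta)                  # residual, then normalize
```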
Positional encodings¶
Self-attention treats the input as a set: the computation
$$\text{softmax}\!\left(\frac{Q K^\top}{\sqrt{d_k}}\right) V$$
does not use the order of positions; it is permutation-equivariant, so reordering the inputs simply reorders the outputs.
However, for language and many other sequences, order matters. To introduce order information, we add a positional encoding to each token embedding.
Let:
$x_i$ be the embedding of the token at position $i$,
$p_i$ be its positional encoding.
We define:
$$\tilde{x}_i = x_i + p_i.$$
Then $\tilde{x}_i$ is used as input to the transformer (for queries, keys, and values).
Sinusoidal positional encodings¶
One popular choice uses fixed sinusoidal functions:
For model dimension $d$ and position $pos$:
For even indices $2k$: $PE_{(pos,\,2k)} = \sin\!\big(pos / 10000^{2k/d}\big)$,
For odd indices $2k+1$: $PE_{(pos,\,2k+1)} = \cos\!\big(pos / 10000^{2k/d}\big)$.
Properties:
Different frequencies encode different granularities of position.
The representation is periodic in a controlled way, which can help extrapolate to longer sequences.
These encodings are fixed (not learned), though learned positional embeddings are also common.
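The sinusoidal formulas above can be computed in a few vectorized lines. The dimensions in this sketch are arbitrary, and it assumes an even model dimension for simplicity.

```python
# Sinusoidal positional encodings: sin on even indices, cos on odd indices (a sketch).
import numpy as np

def positional_encoding(max_len, d):
    pos = np.arange(max_len)[:, None]                 # positions 0 .. max_len-1
    k = np.arange(0, d, 2)[None, :]                   # even feature indices 2k
    angles = pos / np.power(10000.0, k / d)
    PE = np.zeros((max_len, d))
    PE[:, 0::2] = np.sin(angles)                      # even dimensions
    PE[:, 1::2] = np.cos(angles)                      # odd dimensions
    return PE

PE = positional_encoding(max_len=50, d=32)
# X_tilde = X + PE[:n]   # added to the token embeddings before the first layer
```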
Multi-head attention¶
Single-head attention allows each position to attend to others using a single similarity pattern.
However, we may want to focus on different aspects of the input simultaneously (e.g. syntax vs semantics, local vs global context).
Multi-head attention uses multiple attention heads in parallel:
For $H$ heads:
For head $h$:
$$\text{head}_h = \text{Attn}\big(X W_h^Q,\, X W_h^K,\, X W_h^V\big),$$
with learned projections $W_h^Q, W_h^K, W_h^V$.
Concatenate all heads:
$$\text{Concat}(\text{head}_1, \dots, \text{head}_H).$$
Apply a final linear projection:
$$\text{MultiHead}(X) = \text{Concat}(\text{head}_1, \dots, \text{head}_H)\, W^O.$$
Benefits:
Each head can capture different types of relationships:
Short-range vs long-range,
Different dependency types,
Different subspaces of representation.
Overall, it increases model capacity without dramatically increasing depth.
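A compact sketch of multi-head attention is given below: $H$ parallel scaled dot-product heads, concatenated and passed through an output projection. All weights are random stand-ins for learned parameters, and the per-head dimension $d_k = d / H$ is a common but assumed choice.

```python
# Multi-head attention: H parallel scaled dot-product heads, concatenated and projected (a sketch).
import numpy as np

def softmax_rows(S):
    S = S - S.max(axis=-1, keepdims=True)
    e = np.exp(S)
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    return softmax_rows(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

def multi_head(X, heads, W_O):
    outs = [attention(X @ W_Q, X @ W_K, X @ W_V) for (W_Q, W_K, W_V) in heads]
    return np.concatenate(outs, axis=-1) @ W_O        # concat heads, then project

n, d, H = 5, 32, 4
d_k = d // H
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))
heads = [tuple(rng.normal(scale=d**-0.5, size=(d, d_k)) for _ in range(3))
         for _ in range(H)]
W_O = rng.normal(scale=d**-0.5, size=(H * d_k, d))
Z = multi_head(X, heads, W_O)                         # shape (n, d)
```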
Transformer encoder architecture¶
The transformer encoder is a stack of identical layers, each consisting of:
Multi-head self-attention.
Residual connection and layer normalization.
Position-wise feed-forward network.
Another residual connection and layer normalization.
For an encoder layer with input $X$ (a sequence of $n$ vectors):
Self-attention sublayer:
$$Z = \text{LayerNorm}\big(X + \text{MultiHeadAttn}(X, X, X)\big).$$
Feed-forward sublayer:
$$H = \text{LayerNorm}\big(Z + \text{FFN}(Z)\big).$$
The encoder input is:
Token embeddings plus positional encodings.
Stacking several such layers yields deep contextual representations for the input sequence, to be used by decoders or other heads.
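The pieces introduced so far assemble into one encoder layer as sketched below (post-norm variant, with the helper functions re-declared so the snippet runs on its own). The weights are random stand-ins; learnable LayerNorm parameters are omitted for brevity.

```python
# One transformer encoder layer: multi-head self-attention + FFN, each wrapped in
# a residual connection and layer normalization (a NumPy sketch, post-norm style).
import numpy as np

def softmax_rows(S):
    S = S - S.max(axis=-1, keepdims=True)
    e = np.exp(S)
    return e / e.sum(axis=-1, keepdims=True)

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)               # gamma=1, beta=0 for brevity

def multi_head(X, heads, W_O):
    outs = []
    for W_Q, W_K, W_V in heads:
        Q, K, V = X @ W_Q, X @ W_K, X @ W_V
        outs.append(softmax_rows(Q @ K.T / np.sqrt(Q.shape[-1])) @ V)
    return np.concatenate(outs, axis=-1) @ W_O

def encoder_layer(X, heads, W_O, W1, b1, W2, b2):
    Z = layer_norm(X + multi_head(X, heads, W_O))      # self-attention sublayer
    F = np.maximum(0.0, Z @ W1 + b1) @ W2 + b2         # position-wise FFN
    return layer_norm(Z + F)                           # feed-forward sublayer

n, d, H = 5, 32, 4
d_k = d // H
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))                            # embeddings + positional encodings
heads = [tuple(rng.normal(scale=d**-0.5, size=(d, d_k)) for _ in range(3))
         for _ in range(H)]
W_O = rng.normal(scale=d**-0.5, size=(d, d))
W1, b1 = rng.normal(scale=0.1, size=(d, 4 * d)), np.zeros(4 * d)
W2, b2 = rng.normal(scale=0.1, size=(4 * d, d)), np.zeros(d)
out = encoder_layer(X, heads, W_O, W1, b1, W2, b2)     # shape (n, d)
```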
Transformer decoder architecture¶
The decoder also consists of stacked layers, each with three main sublayers:
Masked multi-head self-attention (over the decoder inputs).
Multi-head cross-attention (encoder–decoder attention).
Position-wise feed-forward network.
Each sublayer is wrapped in residual connections and layer normalization.
Let $\tilde{X}$ be the decoder input representations (shifted target embeddings plus positional encodings), and $H$ the encoder outputs.
For a decoder layer:
Masked self-attention (causal masking):
The decoder at position $i$ should not attend to positions $j > i$ (future tokens).
Implemented by masking out (setting to $-\infty$) the corresponding scores in $Q K^\top$ before the softmax.
Sub-layer:
$$\tilde{Z}_1 = \text{LayerNorm}\big(\tilde{X} + \text{MaskedMultiHeadAttn}(\tilde{X}, \tilde{X}, \tilde{X})\big).$$
Encoder–decoder (cross) attention:
Queries come from the decoder ($\tilde{Z}_1$),
Keys and values come from the encoder outputs $H$:
$$\tilde{Z}_2 = \text{LayerNorm}\big(\tilde{Z}_1 + \text{MultiHeadAttn}(\tilde{Z}_1, H, H)\big).$$
Feed-forward sublayer:
$$\tilde{Z}_3 = \text{LayerNorm}\big(\tilde{Z}_2 + \text{FFN}(\tilde{Z}_2)\big).$$
Finally, a linear layer maps the decoder outputs to vocabulary logits, and a softmax turns them into next-token probabilities.
Key idea:
The decoder uses self-attention to model dependencies within the target sequence,
and cross-attention to condition on the entire encoded source, avoiding the fixed-size bottleneck and the vanishing-gradient issues of pure RNN-based seq2seq models.
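The causal mask is the only new ingredient relative to encoder self-attention. The sketch below builds the mask and applies it to the score matrix before the softmax; cross-attention would use the same attention function but with $K$ and $V$ taken from the encoder outputs. All inputs here are random stand-ins.

```python
# Causal (look-ahead) masking for decoder self-attention: positions j > i get a
# large negative score before the softmax, so they receive ~zero attention weight.
import numpy as np

def softmax_rows(S):
    S = S - S.max(axis=-1, keepdims=True)
    e = np.exp(S)
    return e / e.sum(axis=-1, keepdims=True)

def masked_self_attention(Q, K, V):
    n = Q.shape[0]
    S = Q @ K.T / np.sqrt(Q.shape[-1])
    mask = np.triu(np.ones((n, n), dtype=bool), k=1)   # True strictly above the diagonal
    S = np.where(mask, -1e9, S)                        # block attention to future tokens
    A = softmax_rows(S)                                # lower-triangular attention weights
    return A @ V, A

n, d_k = 5, 16
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(n, d_k))
Z, A = masked_self_attention(Q, K, V)
print(np.round(A, 2))    # row i has nonzero weights only for positions j <= i
```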
Transformer design goals and complexity¶
The transformer architecture was designed with three main goals:
Low per-layer computational complexity (compared to RNNs).
Short path length between any pair of positions (facilitating long-range dependencies).
High parallelizability (important for GPU/TPU acceleration).
Rough comparisons (for sequence length $n$, representation dimension $d$, convolution kernel size $k$):
Self-attention:
Complexity per layer: $O(n^2 d)$ (due to the $n \times n$ score matrix $Q K^\top$),
Sequential operations: $O(1)$,
Maximum path length: $O(1)$ (any position can attend to any other in one step).
Recurrent layers:
Complexity per layer: $O(n d^2)$,
Sequential operations: $O(n)$ (cannot parallelize across time),
Maximum path length: $O(n)$.
Convolutional layers:
Complexity per layer: $O(k n d^2)$,
Sequential operations: $O(1)$,
Maximum path length: $O(\log_k n)$ (stacked, e.g. dilated, convolutions expand the receptive field).
Conclusion:
Self-attention trades $O(n^2)$ per-layer cost in the sequence length for constant path length and high parallelism.
For many tasks with moderate sequence lengths and sufficient compute, this trade-off is extremely favorable, enabling large-scale pretraining and very deep models.
Summary¶
RNNs process sequences with hidden state but struggle with long-term dependencies due to vanishing/exploding gradients.
LSTMs and GRUs introduce gates and additive memory paths to mitigate these issues.
Seq2seq models with encoder–decoder architectures can perform neural machine translation, but early models suffered from a fixed-size bottleneck.
Attention mechanisms let models compute context-dependent weighted sums over representations, solving the bottleneck and improving performance and interpretability.
Self-attention extends attention to interactions within a single sequence and is the core component of transformers.
Transformers rely on:
Multi-head self-attention to model rich dependencies,
Residual connections and layer normalization for deep, stable training,
Scaled dot-product attention for numerical stability,
Positional encodings to inject order information.
The transformer encoder and decoder architectures replace recurrence with stacks of attention and feed-forward layers, enabling:
Highly parallel computation,
Short paths between tokens,
Efficient modeling of long-range dependencies.
These ideas underpin modern large language models and many attention-based architectures in vision, speech, and beyond.