Vision Transformers
Vectorized self-attention in the encoder¶
Self-attention can be written in a compact, vectorized form.
Given a sequence of $n$ token embeddings stacked in a matrix $X \in \mathbb{R}^{n \times d}$ (each row is a token embedding):
Compute queries, keys, and values:
$$Q = X W_Q, \quad K = X W_K, \quad V = X W_V,$$
where $W_Q, W_K, W_V \in \mathbb{R}^{d \times d_k}$.
Compute attention scores:
$$S = Q K^\top \in \mathbb{R}^{n \times n}.$$
Apply softmax row-wise:
$$A = \operatorname{softmax}(S).$$
Compute the output:
$$Z = A V.$$
This is often written as:
$$\operatorname{Attention}(Q, K, V) = \operatorname{softmax}(Q K^\top)\, V.$$
In scaled dot-product attention, we divide the scores by $\sqrt{d_k}$:
$$\operatorname{Attention}(Q, K, V) = \operatorname{softmax}\!\left(\frac{Q K^\top}{\sqrt{d_k}}\right) V,$$
which improves numerical stability when $d_k$ is large.
Self-attention can be viewed as a learned, differentiable key–value lookup where each query selects a weighted combination of values based on similarity to keys.
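As a concrete illustration of these formulas, here is a minimal PyTorch sketch (the function and weight names `scaled_dot_product_attention`, `W_Q`, `W_K`, `W_V` are ours; the tensor shapes follow the matrix definitions above):

```python
import torch

def scaled_dot_product_attention(Q, K, V):
    """Vectorized scaled dot-product attention.

    Q, K: (n, d_k) query/key matrices; V: (n, d_v) value matrix.
    Returns Z = softmax(Q K^T / sqrt(d_k)) V of shape (n, d_v).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5   # (n, n) score matrix S
    A = torch.softmax(scores, dim=-1)               # row-wise softmax -> attention weights
    return A @ V                                    # weighted combination of values

# Self-attention on a toy sequence: queries, keys, and values all come from X.
n, d, d_k = 5, 16, 8
X = torch.randn(n, d)                               # token embeddings (one per row)
W_Q, W_K, W_V = (torch.randn(d, d_k) for _ in range(3))
Z = scaled_dot_product_attention(X @ W_Q, X @ W_K, X @ W_V)
print(Z.shape)                                      # torch.Size([5, 8])
```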
ConvNets vs Transformers (conceptual comparison)¶
The slides highlight high-level differences between convolutional networks and transformers.
Convolutional networks (CNNs)
Operate on grid-structured inputs (e.g. images).
Use local filters and weight sharing across spatial positions.
Implicitly enforce translation invariance:
Convolution kernels depend only on relative position within a local neighborhood.
Build large receptive fields by:
Stacking many layers,
Using pooling or strided convolutions to downsample.
Transformers
Use self-attention to connect all positions:
Any token can attend to any other in one step (global receptive field).
Do not have built-in translation invariance:
Use positional encodings instead of relative positions in the kernel.
Are highly parallelizable across positions.
For images, the question is:
Can we treat an image as a sequence and apply transformers directly, without convolutions?
Vision Transformer (ViT): main idea¶
The core idea of the Vision Transformer is to treat an image as a sequence of patches and apply a standard transformer encoder for classification.
Conceptually:
Split image into patches:
Input image size: $H \times W \times C$ (e.g. $224 \times 224 \times 3$).
Choose patch size: $P \times P$ (e.g. $16 \times 16$).
The image is reshaped into $N = HW / P^2$ patches, each of size $P \times P \times C$.
Flatten patches:
Each patch is flattened into a vector $x_p \in \mathbb{R}^{P^2 \cdot C}$.
Linear projection:
Each patch vector is mapped to a $D$-dimensional embedding via a learnable projection $E \in \mathbb{R}^{(P^2 \cdot C) \times D}$: $x_p \mapsto x_p E$.
Class token:
Prepend a learnable embedding $x_\text{class}$ to the sequence.
Its final representation after the transformer encoder is used as the image representation for classification.
Positional embeddings:
Add a learnable 1D positional embedding $E_\text{pos} \in \mathbb{R}^{(N+1) \times D}$ to each patch (and class) embedding: $z_0 = [x_\text{class};\, x_1 E;\, \dots;\, x_N E] + E_\text{pos}$.
Transformer encoder:
Apply a standard transformer encoder (stack of multi-head self-attention + MLP blocks) to the sequence $z_0$.
Classification head:
Take the final class token from the top encoder layer.
Feed it into an MLP classifier:
Often a small MLP with one hidden layer during pretraining,
Possibly a single linear layer for fine-tuning.
In short:
ViT = Patches → Linear embeddings + class token → Transformer encoder → MLP head.
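As a rough sketch of the input side of this pipeline in PyTorch (the class name `PatchEmbedding`, the default sizes, and the reshape-based patch extraction are illustrative choices, not the reference implementation):

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Minimal sketch of the ViT input pipeline:
    patches -> linear embeddings + class token + learnable 1D positional embeddings."""

    def __init__(self, img_size=224, patch_size=16, in_chans=3, dim=768):
        super().__init__()
        self.p = patch_size
        n_patches = (img_size // patch_size) ** 2                        # N = HW / P^2
        self.proj = nn.Linear(patch_size * patch_size * in_chans, dim)   # projection E
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))            # x_class
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))  # E_pos

    def forward(self, x):                       # x: (B, C, H, W)
        B, C, H, W = x.shape
        p = self.p
        # Reshape into N = (H/p)*(W/p) flattened patches of size p*p*C each.
        x = x.reshape(B, C, H // p, p, W // p, p)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(B, -1, p * p * C)
        x = self.proj(x)                        # (B, N, dim) patch embeddings
        cls = self.cls_token.expand(B, -1, -1)  # prepend class token
        x = torch.cat([cls, x], dim=1)          # (B, N + 1, dim)
        return x + self.pos_embed               # add positional embeddings -> z_0

z0 = PatchEmbedding()(torch.randn(2, 3, 224, 224))
print(z0.shape)  # torch.Size([2, 197, 768])
```

The transformer encoder and MLP head then operate on the resulting $z_0$ of shape (batch, $N+1$, $D$).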
Vision Transformer architecture in more detail¶
The slides summarize ViT with the following components:
Patch + position embedding:
Image is divided into patches and each is linearly projected to dimension $D$.
A learnable class token is prepended.
Position embeddings are added to each token (including class token).
Transformer encoder (repeated $L$ times):
LayerNorm → Multi-head self-attention → residual connection,
LayerNorm → MLP (position-wise feed-forward) → residual connection.
MLP head:
Takes the final representation of the class token,
Outputs class logits (bird, ball, car, ...).
Symbolically, for layer $\ell = 1, \dots, L$:
Self-attention sublayer: $z'_\ell = \operatorname{MSA}(\operatorname{LN}(z_{\ell-1})) + z_{\ell-1}$.
Feed-forward sublayer: $z_\ell = \operatorname{MLP}(\operatorname{LN}(z'_\ell)) + z'_\ell$.
The depth $L$, hidden size $D$, MLP size, and number of heads are varied across ViT model variants.
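Under these equations, a single pre-LayerNorm encoder block can be sketched in PyTorch as follows (the class name `EncoderBlock` is ours; the default sizes match ViT-Base, and PyTorch's built-in multi-head attention stands in for MSA):

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One pre-LayerNorm transformer encoder block:
    z' = MSA(LN(z)) + z, then z_out = MLP(LN(z')) + z'."""

    def __init__(self, dim=768, heads=12, mlp_dim=3072):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_dim), nn.GELU(), nn.Linear(mlp_dim, dim))

    def forward(self, z):                                  # z: (B, N + 1, dim)
        h = self.norm1(z)
        z = z + self.attn(h, h, h, need_weights=False)[0]  # MSA sublayer + residual
        return z + self.mlp(self.norm2(z))                 # MLP sublayer + residual

encoder = nn.Sequential(*[EncoderBlock() for _ in range(12)])  # L = 12 (ViT-Base depth)
print(encoder(torch.randn(2, 197, 768)).shape)                 # torch.Size([2, 197, 768])
```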
ViT model variants and sizes¶
The original ViT paper defines several standard configurations, similar to BERT:
ViT-Base:
Layers: 12
Hidden size $D$: 768
MLP size: 3072
Attention heads: 12
Parameters: ≈86 M
ViT-Large:
Layers: 24
Hidden size $D$: 1024
MLP size: 4096
Attention heads: 16
Parameters: ≈307 M
ViT-Huge:
Layers: 32
Hidden size $D$: 1280
MLP size: 5120
Attention heads: 16
Parameters: ≈632 M
Notation like ViT-L/16:
“L” refers to the Large configuration,
“/16” refers to a patch size of $16 \times 16$,
The sequence length $N = HW / P^2$ is inversely proportional to the square of the patch size $P$:
Smaller patches → longer sequences → higher compute cost for attention.
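As a worked example, with a $224 \times 224$ input, ViT-L/16 processes $N = (224/16)^2 = 196$ patch tokens (197 with the class token), while ViT-H/14 processes $(224/14)^2 = 256$; a patch size of $8$ would already give $784$ tokens and roughly $16\times$ the attention cost of $P = 16$.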
Key observation:
As in NLP, ViT performance tends to improve with larger models and larger training datasets.
Pretraining and data requirements¶
The slides discuss ViT performance on image classification benchmarks and its dependence on pretraining data size.
Findings:
When trained on mid-sized datasets like ImageNet alone, ViT achieves modest accuracies, often below strong CNN baselines.
When pretrained on very large datasets (e.g. JFT-300M, ImageNet-21k) and then fine-tuned, ViT achieves state-of-the-art or competitive performance.
Example summary:
ViT-L/16 and ViT-H/14 pretrained on JFT-300M outperform strong CNN baselines (e.g. “BiT” ResNets, EfficientNet-L2) on a variety of datasets:
ImageNet,
CIFAR-10/100,
Oxford Pets,
Flowers,
VTAB tasks.
Data efficiency:
ViT has fewer inductive biases for vision than CNNs:
It does not encode translation invariance or locality explicitly.
As a result, ViT behaves similarly to language transformers:
Requires very large pretraining datasets to generalize well.
Benefits strongly from transfer learning: pretrain on massive data, then fine-tune on specific tasks.
Effect of dataset size (qualitative):
With small pretraining datasets (e.g. ImageNet-1k), larger ViT models can underperform smaller ones, because they overfit and cannot fully exploit their capacity.
As pretraining data grows (ImageNet-21k, JFT-300M), larger models start to dominate and yield higher accuracy.
What does ViT learn?¶
The slides show several visualizations from the ViT paper:
Patch embedding filters¶
The first linear projection that maps flattened patches to embeddings can be visualized.
Applying PCA to the learned filters and plotting them as images reveals:
Many filters look like localized edge or color detectors,
Similar to early layers in CNNs.
This indicates that even without explicit convolution, ViT learns patch-level patterns reminiscent of CNN filters.
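As a sketch of how such a visualization could be produced (the random matrix `W` stands in for the trained projection weights, and the SVD-based PCA and the choice of 28 components are assumptions for illustration):

```python
import torch

P, C, dim = 16, 3, 768
W = torch.randn(dim, P * P * C)         # stand-in for the trained patch-projection weights

# PCA of the filters: the right singular vectors of the (dim, P*P*C) weight matrix
# give the principal directions in patch space that the embedding responds to.
_, _, Vt = torch.linalg.svd(W, full_matrices=False)
filters = Vt[:28].reshape(-1, P, P, C)  # first components, reshaped to P x P x C images
print(filters.shape)                    # torch.Size([28, 16, 16, 3])
```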
Attention distance¶
The mean attention distance of each head and layer can be measured (how far, in patch space, a token tends to attend).
Observations:
Some attention heads in lower layers already attend to distant patches, providing a large receptive field early on.
Others focus on nearby patches, capturing local structure.
Analogy:
Attention distance is comparable to the receptive field in CNNs, but:
Self-attention can access global context in a single layer,
CNNs need many layers to build such large receptive fields.
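A sketch of how the mean attention distance described above could be computed from one head's attention matrix (the function name and the use of patch-grid units are our own choices):

```python
import torch

def mean_attention_distance(attn, grid):
    """Average spatial distance (in patch units) attended to by each query patch.

    attn: (N, N) attention weights over N = grid*grid patch tokens (rows sum to 1).
    """
    ys, xs = torch.meshgrid(torch.arange(grid), torch.arange(grid), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()  # (N, 2) patch coords
    dists = torch.cdist(coords, coords)           # (N, N) pairwise patch distances
    return (attn * dists).sum(dim=-1).mean()      # attention-weighted mean distance

grid = 14                                         # e.g. 224/16 = 14 patches per side
attn = torch.softmax(torch.randn(grid * grid, grid * grid), dim=-1)
print(mean_attention_distance(attn, grid))
```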
Attention maps¶
By visualizing attention weights from the class token (or from certain heads), we see:
The transformer focuses attention on semantically relevant regions of the image,
E.g. the object of interest (dog, car, bird) rather than background.
These visualizations support the idea that ViT learns meaningful global and local interactions through attention.
Combining CNNs and attention: motivation (CoAtNet)¶
Despite the strong performance of ViTs with massive pretraining, the slides note:
Transformers in vision often lag behind state-of-the-art CNNs on tasks with:
Limited data,
Strong inductive biases needed (e.g. local structure, translation invariance).
Transformers tend to have larger model capacity, but weaker inductive bias:
They may overfit small datasets,
Generalization can be worse compared to CNNs trained on the same data.
Idea:
Combine the strengths of convolutions and self-attention in a single architecture.
Use convolution to capture local patterns and provide strong inductive bias.
Use attention to capture global interactions and long-range dependencies.
CoAtNet is one such hybrid architecture explored in the slides.
Convolution and self-attention: mathematical comparison¶
The slides compare depthwise convolution and self-attention in a unified notation.
Let $x_i$ denote the input feature at spatial position $i$.
Depthwise convolution¶
With a local neighborhood $\mathcal{L}(i)$ (e.g. a $3 \times 3$ window), depthwise convolution computes:
$$y_i = \sum_{j \in \mathcal{L}(i)} w_{i-j} \odot x_j,$$
where:
$w_{i-j}$ is a learned kernel weight depending only on the relative position $i - j$,
The kernel is input-independent,
The operation is local and translationally invariant.
Self-attention¶
Let $\mathcal{G}$ denote the set of all positions. Self-attention can be written as:
$$y_i = \sum_{j \in \mathcal{G}} \frac{\exp(x_i^\top x_j)}{\sum_{k \in \mathcal{G}} \exp(x_i^\top x_k)}\, x_j.$$
Here:
The attention weights depend on the content (features),
The operation is global (sum over all positions),
No inherent translation invariance (positions must be encoded separately).
Comparison¶
Kernel:
Convolution: weights are fixed after training and do not depend on input.
Attention: weights are input-dependent and can capture complex relations.
Receptive field:
Convolution: local neighborhood $\mathcal{L}(i)$ (small receptive field per layer).
Attention: global set $\mathcal{G}$ (global receptive field in one layer).
Inductive bias:
Convolution: relies on relative positions; strong bias for local, translation-invariant features.
Attention: relies on learned content similarity; more flexible but with weaker structural bias.
This motivates architectures that combine both operations.
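To make the contrast concrete, here is a toy 1D sketch of both per-position computations in PyTorch (the window size, shapes, and variable names are illustrative):

```python
import torch

n, d = 10, 4                             # positions and channels
x = torch.randn(n, d)                    # x_i: feature at position i

# Depthwise convolution: input-independent kernel indexed by relative position i - j,
# summed over a local neighborhood L(i) (here a 1D window of size 3).
w = torch.randn(3, d)                    # w_{i-j} for the three offsets in the window
y_conv = torch.zeros_like(x)
for i in range(n):
    for r, j in enumerate(range(i - 1, i + 2)):   # neighborhood L(i)
        if 0 <= j < n:
            y_conv[i] += w[r] * x[j]              # w_{i-j} ⊙ x_j

# Self-attention: input-dependent weights over the global set G of all positions.
scores = x @ x.T                         # x_i^T x_j for all pairs (i, j)
A = torch.softmax(scores, dim=-1)        # normalize over j in G
y_attn = A @ x                           # y_i = sum_j A_ij x_j
print(y_conv.shape, y_attn.shape)        # both torch.Size([10, 4])
```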
Relative self-attention in CoAtNet¶
To combine convolutional and attention-like behaviors, CoAtNet uses relative self-attention.
The idea:
Modify the attention scores by adding a relative positional kernel $w_{i-j}$:
$$y_i = \sum_{j \in \mathcal{G}} \frac{\exp(x_i^\top x_j + w_{i-j})}{\sum_{k \in \mathcal{G}} \exp(x_i^\top x_k + w_{i-k})}\, x_j.$$
Here:
$x_i^\top x_j$ is the content-based similarity (as in standard self-attention).
$w_{i-j}$ is a learnable weight that depends only on the relative position between $i$ and $j$.
The softmax is applied over all positions in $\mathcal{G}$.
Interpretation:
If $w_{i-j}$ is large for nearby positions and small for distant ones, attention is biased toward local neighbors, mimicking convolutional behavior.
If $w_{i-j}$ is more uniform, attention can remain global.
The kernel $w_{i-j}$ remains input-independent, encoding structural biases, while the dot-product term $x_i^\top x_j$ incorporates input-dependent interactions.
This relative-attention formulation allows CoAtNet to:
Capture complex content-based dependencies,
Maintain useful inductive biases from convolutions (via relative positions).
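A minimal 1D sketch of this relative-attention formulation (a sequence instead of a 2D grid; variable names are ours):

```python
import torch

n, d = 10, 4
x = torch.randn(n, d)                          # x_i for positions i = 0..n-1

# Learnable relative-position kernel w_{i-j}: one scalar per relative offset.
w_rel = torch.randn(2 * n - 1, requires_grad=True)
idx = torch.arange(n)
rel = idx[:, None] - idx[None, :] + (n - 1)    # map offsets i - j to indices 0..2n-2
bias = w_rel[rel]                              # (n, n) matrix of w_{i-j}

scores = x @ x.T + bias                        # content term x_i^T x_j + relative term w_{i-j}
A = torch.softmax(scores, dim=-1)              # softmax over all positions in G
y = A @ x
print(y.shape)                                 # torch.Size([10, 4])
```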
CoAtNet vertical design: stages and downsampling¶
Applying global self-attention at the pixel level is computationally prohibitive:
Complexity scales as $O(N^2)$, where $N$ is the number of tokens (pixels or patches).
CoAtNet addresses this with a stage-wise design similar to CNNs:
Input: image (e.g. $224 \times 224$).
Stem (S0):
Convolutional layers downsample to a coarser grid (e.g. $112 \times 112$).
Stages S1–S4:
At each stage, spatial resolution is further reduced (e.g. $56 \times 56$, $28 \times 28$, $14 \times 14$, $7 \times 7$),
The number of channels is increased.
Within stages:
Early stages (higher resolution) use convolutional blocks:
Standard or depthwise convs,
$1 \times 1$ convs as bottlenecks,
Residual connections.
Later stages (lower resolution) use relative self-attention blocks and feed-forward networks.
The slides mention that good results (in terms of generalization, capacity, and transferability) were obtained with:
Three convolutional blocks/stages, followed by
Two transformer blocks/stages.
Global pooling and a fully connected (FC) layer at the end produce classification logits.
This vertical design:
Keeps early computations efficient and local via convolutions,
Uses attention when the sequence length is reduced enough to make it tractable,
Mimics the progressive downsampling seen in ResNets and other CNNs.
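The following short calculation sketches why attention is deferred to later stages; the per-stage resolutions assume a $224 \times 224$ input and the stage layout described above:

```python
# Token count N (and the O(N^2) attention cost) shrinks rapidly with downsampling.
stages = {"S0": 112, "S1": 56, "S2": 28, "S3": 14, "S4": 7}   # assumed grid sides
for name, side in stages.items():
    n_tokens = side * side
    print(f"{name}: {side}x{side} grid -> N = {n_tokens:6d}, N^2 = {n_tokens ** 2:12,d}")
# e.g. S1: N = 3136 (≈9.8M attention pairs) vs S3: N = 196 (≈38k pairs).
```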
CoAtNet results and trade-offs¶
The slides show comparisons of:
Accuracy vs FLOPs,
Accuracy vs number of parameters,
for CoAtNet and competing models.
Qualitative conclusions:
CoAtNet achieves strong accuracy while maintaining:
Competitive or reduced FLOPs compared to pure transformer or pure CNN variants,
Good parameter efficiency.
By combining convolution and attention:
It benefits from convolutional inductive biases on small/medium datasets,
It leverages attention to capture global interactions and improve performance on challenging benchmarks.
More broadly, hybrid architectures like CoAtNet illustrate that:
Neither pure CNNs nor pure transformers are optimal for all regimes; combining them can yield better accuracy–efficiency trade-offs.
Summary¶
Self-attention and transformer encoders, originally developed for sequences, can be applied to images by:
Splitting images into patches,
Embedding patches and adding positional information,
Prepending a class token and using a transformer encoder.
Vision Transformers (ViT) show that:
Pure transformer architectures can achieve state-of-the-art performance on image classification,
But they require large-scale pretraining due to weaker inductive biases than CNNs.
ViT internal behavior:
Patch embedding layers learn filters similar to early CNN layers,
Some attention heads attend to distant patches even in lower layers,
Attention maps focus on semantically important regions of the image.
Convolution vs attention:
Convolution uses local, input-independent kernels and strong translation-invariance bias,
Self-attention uses global, input-dependent weights but lacks structured biases,
Relative self-attention bridges these by adding learnable relative position terms to attention scores.
CoAtNet and similar hybrids:
Combine convolutional stages for local feature extraction and efficient downsampling,
With transformer stages for global, content-based interactions,
Achieve strong performance and favorable accuracy–efficiency trade-offs.
These ideas provide a conceptual foundation for modern vision architectures that increasingly integrate both convolution and attention mechanisms.