The Transformer architecture is the bedrock of modern Large Language Models (LLMs). While its high-level success is widely known, a deeper understanding requires dissecting its core components. This article provides a detailed, technical breakdown of the fundamental concepts within a Transformer block, from the notion of “channels” to the intricate workings of the attention mechanism and its relationship with other advanced architectures like Mixture of Experts.
## 1. The “Channel”: A Foundational View of `d_model`
In deep learning, a “channel” can be thought of as a feature dimension. While this term is common in Convolutional Neural Networks for images (e.g., Red, Green, Blue channels), in LLMs the analogous concept is the model’s primary embedding dimension, commonly referred to as `d_model`.
An input text is first tokenized, and each token is mapped to a vector of size `d_model` (e.g., 4096). Each of the 4096 dimensions in this vector can be considered a “channel,” representing a different semantic or syntactic feature of the token.
As this data, represented by a tensor of shape `[batch_size, sequence_length, d_model]`, progresses through the layers of the Transformer, these channels are continuously transformed. However, a critical design choice is that the output dimension of every main sub-layer (like the attention block or the FFN block) is also `d_model`. This consistency is essential for enabling residual connections, where the input to a block is added to its output (`output = input + SubLayer(input)`). This technique is vital for training the extremely deep networks common today.
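To make the shape bookkeeping concrete, here is a minimal PyTorch sketch (with deliberately small, illustrative sizes rather than the 4096 used above) showing why a `d_model`-to-`d_model` sub-layer makes the residual addition a plain element-wise sum:

```python
import torch
import torch.nn as nn

batch_size, seq_len, d_model = 2, 16, 512        # small illustrative sizes
x = torch.randn(batch_size, seq_len, d_model)    # token embeddings

# Any main sub-layer (here a stand-in FFN) maps d_model back to d_model ...
sub_layer = nn.Sequential(
    nn.Linear(d_model, 4 * d_model),   # up-projection to d_ff
    nn.GELU(),
    nn.Linear(4 * d_model, d_model),   # down-projection back to d_model
)

# ... so the residual connection is a plain element-wise addition.
output = x + sub_layer(x)
assert output.shape == x.shape         # still [batch_size, seq_len, d_model]
```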
## 2. The Building Blocks: Dimensions of Key Layers
A Transformer layer is primarily composed of two sub-layers: a Multi-Head Attention block and a position-wise Feed-Forward Network (FFN). The parameters for these are stored in several key weight matrices. Understanding their dimensions is crucial.
Let’s define our variables:
- `d_model`: The core embedding dimension.
- `d_ff`: The inner dimension of the FFN, typically `4 * d_model`.
- `h`: The number of attention heads.
- `d_head`: The dimension of each attention head, where `d_model = h * d_head`.
The dimensions of the weight matrices are as follows:
| Layer | Weight Matrix | Input Vector Shape | Output Vector Shape | Weight Matrix Dimension |
|---|---|---|---|---|
| **Attention Projections** | | | | |
| Query | `W_Q` | `d_model` | `d_model` | `[d_model, d_model]` |
| Key | `W_K` | `d_model` | `d_model` | `[d_model, d_model]` |
| Value | `W_V` | `d_model` | `d_model` | `[d_model, d_model]` |
| Output | `W_O` | `d_model` | `d_model` | `[d_model, d_model]` |
| **Feed-Forward Network** | | | | |
| Layer 1 (Up-projection) | `W_ff1` | `d_model` | `d_ff` | `[d_model, d_ff]` |
| Layer 2 (Down-projection) | `W_ff2` | `d_ff` | `d_model` | `[d_ff, d_model]` |
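As a quick sanity check on the table, the same shapes can be written down in PyTorch. This is only an illustrative sketch with small dimensions; note that `nn.Linear` stores its weight as `[out_features, in_features]`, i.e. transposed relative to the table above:

```python
import torch.nn as nn

d_model = 512              # illustrative; the example above uses 4096
d_ff = 4 * d_model
h, d_head = 8, 64          # so that d_model = h * d_head

# Attention projections: each maps d_model -> d_model.
W_Q = nn.Linear(d_model, d_model, bias=False)
W_K = nn.Linear(d_model, d_model, bias=False)
W_V = nn.Linear(d_model, d_model, bias=False)
W_O = nn.Linear(d_model, d_model, bias=False)

# Feed-forward network: up-projection followed by down-projection.
W_ff1 = nn.Linear(d_model, d_ff, bias=False)   # table row: [d_model, d_ff]
W_ff2 = nn.Linear(d_ff, d_model, bias=False)   # table row: [d_ff, d_model]

# nn.Linear keeps the weight transposed: [out_features, in_features].
assert W_ff1.weight.shape == (d_ff, d_model)
assert W_ff2.weight.shape == (d_model, d_ff)
```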
## 3. Deconstructing Multi-Head Attention (MHA)
The core innovation of the Transformer is Multi-Head Attention. It allows the model to weigh the importance of different tokens in the sequence from multiple perspectives simultaneously.
### 3.1. The “Why”: Beyond a Single Attention
A single attention mechanism would force the model to average all types of linguistic relationships into one pattern. MHA avoids this by creating `h` parallel subspaces. Each “head” can specialize, with one head learning syntactic dependencies, another tracking semantic similarity, and so on. This creates a much richer representation.
### 3.2. An Encoding/Decoding Analogy
A powerful way to conceptualize the attention calculation is as a two-stage process:
- Encoding Relationships: The first part of the calculation, `softmax(Q @ K.T)`, can be seen as an encoding step. It does not use the actual “content” of the tokens (the `V` vectors). Instead, it uses the Queries and Keys to build a dynamic “relationship map” between tokens in the sequence. This map, a matrix of attention scores, answers the question: “For each token, how important is every other token right now?”
- Decoding via Information Retrieval: The second part, `scores @ V`, acts as a decoding step. It uses the relationship map to retrieve and synthesize information. For each token, it creates a new vector by taking a weighted sum of all the `V` vectors in the sequence, using the scores as the precise mixing recipe. It decodes the relational structure into a new, context-aware representation (see the sketch after this list).
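In code, the two stages for a single attention head are only two lines. The sketch below is purely illustrative (random tensors, one head, no masking):

```python
import math
import torch
import torch.nn.functional as F

seq_len, d_head = 16, 64              # illustrative sizes for one head
Q, K, V = (torch.randn(seq_len, d_head) for _ in range(3))

# Stage 1 ("encoding"): build the relationship map from Q and K only.
scores = F.softmax(Q @ K.T / math.sqrt(d_head), dim=-1)  # [seq_len, seq_len]

# Stage 2 ("decoding"): use the map as a mixing recipe over the V vectors.
out = scores @ V                                         # [seq_len, d_head]
```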
### 3.3. The “How”: A Step-by-Step Flow
The MHA computation is organized for efficiency: all heads are derived from a few large matrix multiplications rather than computed one by one.
- Initial Projections: The input vectors (shape `[seq_len, d_model]`) are multiplied by `W_Q`, `W_K`, and `W_V`. These matrices are all `[d_model, d_model]`, not to create one large query, but to efficiently compute the vectors for all `h` heads at once. The single large output vector is then reshaped into `h` separate vectors, each of size `d_head`.
- Attention Score Calculation: For each head `i`, a score matrix is calculated: `scores_i = softmax( (Q_i @ K_i.T) / sqrt(d_head) )`. Note that `Q_i` and `K_i` have dimensions `[seq_len, d_head]`, so the resulting `scores_i` matrix has a dimension of `[seq_len, seq_len]`.
- Weighted Value Calculation: The scores are used to create a weighted sum of the Value vectors for each head: `output_i = scores_i @ V_i`. Since `scores_i` is `[seq_len, seq_len]` and `V_i` is `[seq_len, d_head]`, the resulting `output_i` has a dimension of `[seq_len, d_head]`. This is the final output of a single head.
- Concatenation and Final Projection: The outputs of all `h` heads are concatenated along the last dimension. This produces a single large matrix of shape `[seq_len, h * d_head]`, which is equivalent to `[seq_len, d_model]`. This matrix is then passed through the final output projection layer, `W_O` (shape `[d_model, d_model]`), to produce the attention block’s final output. The `W_O` matrix learns the optimal way to mix the information from all the specialized heads into a single, unified representation. A compact end-to-end sketch follows this list.
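Here is a compact sketch of those four steps in PyTorch, for a single unbatched sequence and without masking or dropout (both of which a real implementation adds):

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def multi_head_attention(x, W_Q, W_K, W_V, W_O, h):
    """Single-sequence MHA forward pass; no masking, dropout, or batching."""
    seq_len, d_model = x.shape
    d_head = d_model // h

    # 1. One big projection each, then reshape into h heads of size d_head:
    #    [seq_len, d_model] -> [h, seq_len, d_head].
    Q = W_Q(x).view(seq_len, h, d_head).transpose(0, 1)
    K = W_K(x).view(seq_len, h, d_head).transpose(0, 1)
    V = W_V(x).view(seq_len, h, d_head).transpose(0, 1)

    # 2. Per-head attention scores: [h, seq_len, seq_len].
    scores = F.softmax(Q @ K.transpose(-2, -1) / math.sqrt(d_head), dim=-1)

    # 3. Weighted sum of the Value vectors per head: [h, seq_len, d_head].
    per_head = scores @ V

    # 4. Concatenate the heads back to [seq_len, d_model] and mix with W_O.
    concat = per_head.transpose(0, 1).reshape(seq_len, d_model)
    return W_O(concat)

d_model, h, seq_len = 512, 8, 16                 # illustrative sizes
proj = lambda: nn.Linear(d_model, d_model, bias=False)
out = multi_head_attention(torch.randn(seq_len, d_model),
                           proj(), proj(), proj(), proj(), h)
assert out.shape == (seq_len, d_model)
```

Production code fuses steps 2 and 3 into optimized kernels, but the shapes are exactly those traced above.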
## 4. Optimizing Attention: GQA and MQA
During inference, storing the Key and Value vectors for all previous tokens (the KV Cache) is a major memory bottleneck. Grouped-Query Attention (GQA) and Multi-Query Attention (MQA) are architectural modifications that address this by allowing multiple Query heads to share the same Key and Value heads.
Let’s use a concrete example, similar to Llama 2 7B:
- `d_model` = 4096
- `h` = 32 Q heads
- `d_head` = 128
- `g` = 8 KV head groups for GQA
The key insight is that only the dimensions of the `W_K` and `W_V` matrices change, which in turn reduces the size of the KV cache. The `W_Q` and `W_O` matrices remain `[4096, 4096]`.
| Attention Type | No. of Q Heads | No. of KV Heads | W_K & W_V Dimension | Relative KV Cache Size |
|---|---|---|---|---|
| MHA (Multi-Head) | 32 | 32 | `[4096, 32*128] = [4096, 4096]` | 1x (Baseline) |
| GQA (Grouped) | 32 | 8 | `[4096, 8*128] = [4096, 1024]` | 1/4x |
| MQA (Multi-Query) | 32 | 1 | `[4096, 1*128] = [4096, 128]` | 1/32x |
GQA provides a robust balance, significantly reducing the memory and bandwidth requirements for the KV cache with negligible impact on model performance, making it a popular choice in modern LLMs.
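As a back-of-the-envelope illustration of those ratios, the hypothetical helper below estimates KV cache size as 2 (K and V) × layers × tokens × KV heads × `d_head` × bytes per value. The 32 layers and fp16 precision are assumptions made for the sake of the example, not values from the table:

```python
def kv_cache_bytes(seq_len, n_layers, n_kv_heads, d_head, bytes_per_value=2):
    """K and V (hence the factor 2) for every layer, token, and KV head."""
    return 2 * n_layers * seq_len * n_kv_heads * d_head * bytes_per_value

# Assumed shapes: 32 layers, d_head = 128, fp16, a 4096-token context.
shared = dict(seq_len=4096, n_layers=32, d_head=128)
mha = kv_cache_bytes(n_kv_heads=32, **shared)   # baseline
gqa = kv_cache_bytes(n_kv_heads=8, **shared)    # 1/4 of the baseline
mqa = kv_cache_bytes(n_kv_heads=1, **shared)    # 1/32 of the baseline

print(f"MHA {mha / 2**30:.2f} GiB | GQA {gqa / 2**30:.2f} GiB | "
      f"MQA {mqa / 2**30:.3f} GiB")
```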
## 5. MHA vs. Mixture of Experts (MoE): A Clarification
While both MHA and MoE use the concept of “experts,” they are functionally and architecturally distinct.
- MHA: The “experts” are the attention heads. All heads are active for every token to build a rich representation within the attention layer. This is akin to a board meeting where every member analyzes and contributes to every decision.
- MoE: The “experts” are full Feed-Forward Networks. A routing network selects a small subset of these FFNs for each token. This is a scaling strategy that increases a model’s parameter count for greater capacity while keeping the per-token computational cost roughly constant. MoE replaces the standard FFN block, whereas MHA is the attention block; the toy router sketched below makes the difference concrete.
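The following top-k router is a simplified sketch of the MoE idea only; real implementations add batching, load-balancing losses, and expert-capacity limits:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_model, n_experts, top_k = 512, 8, 2   # illustrative sizes

router = nn.Linear(d_model, n_experts)  # the routing ("gating") network
experts = nn.ModuleList([
    nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                  nn.Linear(4 * d_model, d_model))
    for _ in range(n_experts)
])

def moe_ffn(token):
    """Route a single token vector ([d_model]) through its top-k experts."""
    gate = F.softmax(router(token), dim=-1)          # [n_experts]
    weights, indices = gate.topk(top_k)              # keep only the best k
    weights = weights / weights.sum()                # renormalize the mix
    return sum(w * experts[int(i)](token) for w, i in zip(weights, indices))

out = moe_ffn(torch.randn(d_model))
assert out.shape == (d_model,)
```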
By understanding these technical details, from the basic concept of a channel to the sophisticated interplay of heads and experts, one can build a more complete and accurate mental model of how LLMs truly operate.
## References
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30.
- Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., & Dean, J. (2017). Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538.
- Ainslie, J., Lee-Thorp, J., de Jong, M., Zemlyanskiy, Y., Lebrón, F., & Sanghai, S. (2023). GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints. arXiv preprint arXiv:2305.13245.