Fix typo #76
opened by aggr
src/index.html +34 -34
src/index.html
CHANGED
73 |
</d-contents>
|
74 |
|
75 |
<p>
|
76 |
+
Thousands of GPUs humming in perfect harmony. That's what it takes to train today's most powerful AI models – a symphony of computing power that until recently was the exclusive domain of elite research labs. Open source has transformed this landscape, but not completely. Yes, you can download the latest <a href="https://huggingface.co/meta-llama">Llama</a> or <a href="https://huggingface.co/deepseek-ai">DeepSeek</a> models. Yes, you can read their <a href="https://ai.meta.com/research/publications/the-llama-3-herd-of-models/">technical</a> and <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf">experiment</a> reports. But the most challenging part – the training code, the knowledge and techniques necessary to coordinate GPUs to train these massive systems – remains shrouded in complexity and scattered across a series of disconnected papers and often private codebases.
|
77 |
</p>
|
78 |
<aside>Reading time: 2-4 days. For the best reading experience, we recommend not using a mobile phone.</aside>
|
79 |
<p>
|
80 |
+
This open-source book is here to change that. Starting from the basics, we'll walk you through the knowledge necessary to scale the training of large language models from one GPU to tens, hundreds and even thousands of GPUs, illustrating theory with practical code examples and reproducible benchmarks.
|
81 |
</p>
|
82 |
|
83 |
+
<p>As the size of the clusters used to train these models grew, various techniques such as data parallelism, tensor parallelism, pipeline parallelism or context parallelism as well as ZeRO or kernel fusion have been invented to make sure that GPUs are highly utilized at all times. This significantly reduces training time and makes the best use of this expensive hardware. What's more, the challenge of scaling up AI training goes beyond just building the initial models: teams have found that fine-tuning large models on specialized data often produces the best results, generally involving the same distributed training techniques. In this book we'll progressively go over all of these techniques – from the simplest to the most refined ones – while keeping a single story-line to understand where each method comes from.</p>
|
84 |
|
85 |
<aside>If you have questions or remarks open a discussion on the <a href="https://huggingface.co/spaces/nanotron/ultrascale-playbook/discussions?status=open&type=discussion">Community tab</a>!</aside>
|
86 |
|
87 |
+
<p>We'll assume you have some basic knowledge about current LLM architectures and are roughly familiar with how deep learning models are trained, but you can be generally new to distributed training. If needed, the basics of model training can be found in the great courses at <a href="https://www.deeplearning.ai">DeepLearning.ai</a> or in the <a href="https://pytorch.org/tutorials/beginner/basics/intro.html">PyTorch tutorial sections</a>. This book can be seen as the second part of a trilogy following our first blog on processing data for pre-training, the so-called “<a href="https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1">FineWeb blog post</a>”. Having read both blog posts, you should have almost all the core knowledge needed to fully understand how high-performing LLMs are being built nowadays, just missing some final spices regarding data mixing and architecture choices to complete the recipe (stay tuned for part three…).</p>
|
88 |
|
89 |
<aside>We are extremely thankful to the whole <a href="https://distill.pub/">distill.pub</a> team for creating
|
90 |
the template on which we based this blog post.</aside>
|
91 |
|
92 |
<p>The book is built on the following <strong>three general foundations</strong>:</p>
|
93 |
|
94 |
+
<p><strong>Quick intros on theory and concepts:</strong> before diving into code and experiments, we want to understand how each method works at a high level and what its advantages and limits are. You’ll learn which parts of a language model eat up your memory, and when during training that happens. You’ll learn how we can solve memory constraints by parallelizing the models and increase throughput by scaling up GPUs. As a result you'll understand how the following widget to compute the memory breakdown of a transformer model works: </p>
|
95 |
<aside>Note that we're still missing Pipeline Parallelism in this widget. To be added as an exercise for the reader.</aside>
|
96 |
|
97 |
<div class="large-image-background-transparent">
|
|
|
268 |
<ol>
|
269 |
<li><strong>Memory Usage</strong>: it's a hard limitation - if a training step doesn't fit in memory, training cannot proceed</li>
|
270 |
<li><strong>Compute Efficiency</strong>: we want our hardware to spend most time computing, so we need to reduce time spent on data transfers or waiting for other GPUs to perform work.</li>
|
271 |
+
<li><strong>Communication overhead</strong>: we want to minimize communication overhead as it keeps GPUs idle. To achieve this we will try to make the best use of intra-node (fast) and inter-node (slower) bandwidths as well as overlap communication with compute as much as possible.</li>
|
272 |
</ol>
|
273 |
<p>In many places we'll see that we can trade one of these (computation, communication, memory) for another (e.g. recomputation or Tensor Parallelism). Finding the right balance is key to scaling training.</p>
|
274 |
<p>
|
|
|
341 |
|
342 |
<aside>For instance, during DeepSeek-V3/R1 training “the batch size is gradually increased from 3072 input sequences to 15360 in the training of the first 469B tokens, and then keeps at 15360 input samples in the remaining training”.</aside>
|
343 |
|
344 |
+
<p>Batch size also affects the time it takes to train on a given text dataset: a small batch size will require more optimizer steps to train on the same amount of samples. Optimizer steps are costly (in compute time) and the total time to train will thus increase compared to using a larger batch size. This being said, note that the batch size can often be adjusted over a fairly wide range around the optimal batch size without major impact on performance, i.e. the sensitivity of final model performance to the exact batch size value is usually rather low near the optimum.</p>
|
345 |
|
346 |
<p>In the LLM pretraining community, batch sizes are commonly reported in terms of tokens rather than in number of samples (<d-math>bst</d-math> = Batch Size Tokens); this makes training numbers generally independent of the exact input sequence length used during the training.</p>
|
347 |
|
|
|
353 |
|
354 |
<p>From here onward we’ll show the formulas for the batch size in terms of samples but you can always get its token-unit counterpart by multiplying it with the sequence length.</p>
|
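<p>For example, with a sequence length of 2048 tokens and a batch size of 2048 samples (roughly the Llama 1 setup, used here purely as an illustration), we get:</p>
<d-math block>
bst = bs \cdot seq = 2048 \cdot 2048 \approx 4.2\text{M tokens}
</d-math>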
355 |
|
356 |
+
<p>A sweet spot for recent LLM training is typically on the order of 4-60 million tokens per batch. The batch size as well as the training corpus have been steadily increasing over the years: Llama 1 was trained with a batch size of ~4M tokens for 1.4 trillion tokens while DeepSeek was trained with a batch size of ~60M tokens for 14 trillion tokens.</p>
|
357 |
|
358 |
<p><strong>And our first challenge is already coming ahead when scaling the training of our model to these large batch sizes: out-of-memory issues. What should we do when our GPU doesn’t have enough memory to hold a full batch of our target batch size?</strong></p>
|
359 |
|
|
|
361 |
|
362 |
<h3>Memory usage in Transformers</h3>
|
363 |
|
364 |
+
<p>When training a neural network model, one stores several items in memory:</p>
|
365 |
|
366 |
<ul>
|
367 |
<li>Model weights</li>
|
|
|
374 |
<p class="note-box-title">📝 Note</p>
|
375 |
<div class="note-box-content">
|
376 |
<p>
|
377 |
+
You would think for a model you could compute the memory requirements exactly but there are a few additional memory occupants that make it hard to be exact:
|
378 |
<ul>
|
379 |
<li>CUDA Kernels typically require 1-2 GB of GPU memory, which you can quickly verify by running <code>import torch; torch.ones((1, 1)).to("cuda")</code> and then checking the GPU memory with <code>nvidia-smi</code>.</li>
|
380 |
<li>Some rest memory usage from buffers, intermediate results and some memory that can’t be used due to fragmentation</li>
|
|
|
383 |
</p></div>
|
384 |
</div>
|
385 |
|
386 |
+
<p>These items are stored as tensors which come in different <em>shapes</em> and <em>precisions</em>. The <em>shapes</em> are determined by hyper-parameters such as batch size, sequence length, model hidden dimensions, attention heads, vocabulary size, and potential model sharding as we’ll see later. <em>Precision</em> refers to formats like FP32, BF16, or FP8, which respectively require 4, 2, or 1 byte to store each single value in the tensor. We will have a full discussion of the different precisions and their trade-offs in the <a target="_self" href="#mixed_precision_training">Mixed Precision Training</a> section; for now, let's just keep in mind that the memory requirements for these various formats will be different and that will impact the memory usage of the items we need to store.</p>
|
387 |
|
388 |
+
<p>So how can I quickly determine memory usage from these variables? One simple way is to do this empirically and just measure it.</p>
|
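<p>As a minimal sketch (with a hypothetical <code>model</code> and <code>batch</code>), measuring the peak memory of one training step looks like this:</p>
<d-code block language="python">
import torch

# reset the peak-memory counter, run one forward/backward, read the peak
torch.cuda.reset_peak_memory_stats()
loss = model(batch).mean()  # hypothetical model and input batch
loss.backward()
print(f"peak memory: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")
</d-code>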
389 |
|
390 |
<h4>Profiling the memory usage</h4>
|
391 |
|
392 |
+
<p>Using the PyTorch profiler, we can understand how memory is allocated throughout training. We can see that memory utilization is not static but varies a lot during training and within a single training step:</p>
|
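<p>One way to record such a timeline is PyTorch's memory-snapshot API (a sketch; note this is a private API of recent PyTorch versions, with snapshots viewable at <a href="https://pytorch.org/memory_viz">pytorch.org/memory_viz</a>):</p>
<d-code block language="python">
import torch

# record an allocation history, run a step, and dump it for visualization
torch.cuda.memory._record_memory_history(max_entries=100_000)
train_step()  # hypothetical: one forward/backward/optimizer step
torch.cuda.memory._dump_snapshot("memory_snapshot.pickle")
torch.cuda.memory._record_memory_history(enabled=None)  # stop recording
</d-code>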
393 |
|
394 |
<aside>Check out <a target="_self" href="#a1%3A_distributed_training_profiling" class="">A1: Distributed Training Profiling</a> for a walkthrough of how to profile your model.</aside>
|
395 |
|
|
|
434 |
\end{aligned}
|
435 |
</d-math>
|
436 |
|
437 |
+
<p>Now let’s have a look at how things change if we use a lower precision. For stability reasons (see <a target="_self" href="#mixed_precision_training">the mixed-precision training section below</a>) we often don't use fully low-precision training but a mix of higher and lower precisions called "mixed precision"<d-cite bibtex-key="micikevicius2018mixedprecisiontraining"></d-cite>. The default nowadays for mixed precision training is to generally use BF16 for most of the computations (requiring 2 bytes per parameter and gradient) as well as an additional copy of the model weights and gradients in FP32, thus 12 bytes per parameter in total. In addition to the parameters and gradients, we need to store the optimizer states: for the Adam optimizer, this requires the momentum and the variance, usually stored in FP32 for numerical stability, each using 4 bytes. </p>
|
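<p>To make this accounting concrete, here is a quick sketch (the 7B example below is purely illustrative):</p>
<d-code block language="python">
def mixed_precision_bytes_per_param() -> int:
    bf16_weights, bf16_grads = 2, 2        # working copies used for compute
    fp32_weights, fp32_grads = 4, 4        # full-precision master copies
    adam_momentum, adam_variance = 4, 4    # optimizer states kept in FP32
    return (bf16_weights + bf16_grads + fp32_weights + fp32_grads
            + adam_momentum + adam_variance)

# e.g. a 7B-parameter model: 7e9 params * 20 bytes = 140 GB, before activations
print(7e9 * mixed_precision_bytes_per_param() / 1e9, "GB")
</d-code>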
438 |
|
439 |
<aside>See some more details below when we cover the ZeRO methods.</aside>
|
440 |
|
|
|
504 |
|
505 |
<p>As we can see, as soon as we reach <strong>7B</strong> (!), weights and optimizer requirements already start to add up significantly and exceed the size of a typical GPU memory, e.g. 80GB for an H100 GPU.</p>
|
506 |
|
507 |
+
<p>But for now, let’s start with models which still fit in a single GPU, and take a look at the last big contributor to our memory budget: the activation memory.</p>
|
508 |
|
509 |
<h4>Activations memory</h4>
|
510 |
|
511 |
<p>Activation memory is a bit more complex to compute than the weights, gradients and optimizer states, in part because it depends on the inputs of the model. If you’re unsure why we even need to store activations for the backward pass, <a href="https://www.determined.ai/blog/act-mem-2">this reference</a> is a good quick refresher. After a careful inspection of how the backward pass is computed, we can estimate the total memory required for the activations in mixed precision and arrive at the following equation:</p>
|
512 |
|
513 |
<d-math block>
|
514 |
+
m_{act} = L \cdot seq \cdot bs \cdot h \cdot \left(34 + \frac{5 \cdot n_{heads} \cdot seq}{h}\right)
|
515 |
</d-math>
|
516 |
|
517 |
<p>Here <d-math>L</d-math> is the number of layers, <d-math>seq</d-math> the sequence length, <d-math>bs</d-math> the batch size in samples, <d-math>h</d-math> the hidden dimension of the model and <d-math>n_{heads}</d-math> the number of heads.</p>
|
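<p>As a quick sketch, here is this equation as a function; the hyper-parameters in the example are illustrative assumptions, roughly matching a Llama-3-8B-scale model:</p>
<d-code block language="python">
def activation_memory_bytes(L, seq, bs, h, n_heads):
    # total activation memory of the mixed-precision estimate above, in bytes
    return L * seq * bs * h * (34 + 5 * n_heads * seq / h)

# e.g. L=32, h=4096, n_heads=32 at seq=4096, bs=1 -> roughly 100 GB
print(activation_memory_bytes(L=32, seq=4096, bs=1, h=4096, n_heads=32) / 1e9, "GB")
</d-code>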
518 |
|
519 |
+
<p>For the exact derivation of the numbers, you can follow this original NVIDIA paper on recomputation <d-cite bibtex-key="korthikanti2022recomputation"></d-cite>; it essentially requires you to do some accounting of all the sizes of intermediate activations between each operation in a transformer layer.</p>
|
520 |
|
521 |
<p>An interesting observation here is how the memory is not static for a given model but it scales linearly with both the sequence length and batch size. This means the activation memory is the part which will blow up when we increase our batch size or train with longer sequences. We can use this equation to look at how memory usage changes for various sequence lengths for example for Llama models (<code>bs=1</code>):</p>
|
522 |
|
|
|
535 |
|
536 |
<p>Is there a way to tame this “activation explosion”? Good question, reader!</p>
|
537 |
|
538 |
+
<p>It’s time to explain our first technique—called <strong><em>activation recomputation</em></strong>—which will help us cap activation memory footprint; an essential tool in today’s large model training toolbox.</p>
|
539 |
|
540 |
<h3>Activation recomputation</h3>
|
541 |
|
|
|
583 |
|
584 |
<p>Now that we’ve learned about recomputation, we can tame the activations memory usage as we saw in the above graphs!</p>
|
585 |
|
586 |
+
<p>However, activations still have a linear dependence on the batch size and all our profiles in the barplots above were using <code>bs=1</code>, so as we move to larger batch sizes it might become an issue again. Do not despair as we have a second tool in our box—<strong><em>gradient accumulation</em></strong> to the rescue!</p>
|
587 |
|
588 |
<h3>Gradient accumulation</h3>
|
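<p>Gradient accumulation splits the batch into micro-batches whose gradients are summed before a single optimizer step. As a minimal sketch of the idea (with a hypothetical <code>model</code>, <code>optimizer</code> and <code>dataloader</code>):</p>
<d-code block language="python">
accumulation_steps = 4  # number of micro-batches per optimizer step

for step, micro_batch in enumerate(dataloader):
    loss = model(micro_batch).mean()
    # scale the loss so the accumulated gradient averages over micro-batches
    (loss / accumulation_steps).backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
</d-code>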
589 |
|
|
|
611 |
|
612 |
<p>But if you’ve carefully followed, you probably noticed that the forward/backward passes for each micro-batch can actually be run in parallel. Forward/backward passes are independent from each other, with independent input samples being the only difference. Seems like it’s time to start extending our training to more than one GPU! </p>
|
613 |
|
614 |
+
<p>Before that, let's quickly see how we can visualise computation and communication with a short tour of one of the most useful tools in the distributed training toolbox: the <strong>profiler</strong>. This tool will be extremely useful for understanding and validating how communications between GPUs and compute are happening, and where bottlenecks are.</p>
|
615 |
|
616 |
<h4>Profiling GPU compute and communication</h4>
|
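<p>As a minimal sketch of such a trace (with a hypothetical <code>train_step</code> running one forward/backward/optimizer cycle):</p>
<d-code block language="python">
from torch.profiler import profile, schedule, ProfilerActivity

# record CPU and CUDA activity (including NCCL communication kernels)
with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=schedule(wait=1, warmup=1, active=3),
    on_trace_ready=lambda prof: prof.export_chrome_trace("trace.json"),
) as prof:
    for _ in range(5):
        train_step()  # hypothetical training step
        prof.step()   # signal the profiler that a step finished
</d-code>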
617 |
|
|
|
802 |
|
803 |
<p>While data parallelism nicely overlaps the all-reduce gradient synchronization with backward computation to save time, this benefit starts to break down at large scales. Why? Because as we add more and more GPUs (hundreds or thousands), the overhead of coordinating between them grows significantly and the network requirements become too large for the benefits. As a result, our setup will become less and less efficient with each additional GPU we add to the system.</p>
|
804 |
|
805 |
+
<p>Let's see this happening in practice with some benchmarks:</p>
|
806 |
|
807 |
<!-- <p><img alt="image.png" src="/assets/images/dp_scaling.svg"/></p> -->
|
808 |
<div class="l-body-outset" id="fragment-dp_scaling"></div>
|
|
|
864 |
|
865 |
<h4>Memory usage revisited</h4>
|
866 |
|
867 |
+
<p>You likely remember from <a target="_self" href="#memory_usage_in_transformers"> our previous section</a> the memory usage of optimizer states, gradients, and parameters during a standard training. Let's call our model's parameters count <d-math>\Psi</d-math> (previously N but here we use the original ZeRO paper notation). In <a target="_self" href="#mixed_precision_training">Mixed Precision Training</a> (more details in a later section) with the Adam optimizer, the memory usage for each item we need to store is:</p>
|
868 |
|
869 |
<ul>
|
870 |
<li>Model’s parameters (half precision i.e. bf16/fp16): <d-math>2\Psi</d-math></li>
|
|
|
902 |
</ul>
|
903 |
<aside>Note: reduce-scatter is 2 times faster than all reduce! <em>Yay, a third communication primitive!</em></aside>
|
904 |
|
905 |
+
<p>You may be wondering what this "reduce-scatter" operation is and how this all looks, so let's try to make it more graphical with the figure below. We'll go over all the steps of a forward/backward pass cycle:</p>
|
906 |
|
907 |
<p><img alt="dp_zero1.gif" src="/assets/images/dp_zero1.gif" /></p>
|
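<p>For reference, the reduce-scatter primitive itself looks roughly like this in PyTorch (a sketch; it assumes a process group already initialized, e.g. via <code>torchrun</code>, and a recent PyTorch with <code>reduce_scatter_tensor</code>):</p>
<d-code block language="python">
import torch
import torch.distributed as dist

world_size = dist.get_world_size()
grads = torch.randn(world_size * 1024, device="cuda")  # flat gradient buffer
shard = torch.empty(1024, device="cuda")
# each rank receives the sum of one 1/world_size slice of the gradients
dist.reduce_scatter_tensor(shard, grads, op=dist.ReduceOp.SUM)
</d-code>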
908 |
|
|
|
1354 |
|
1355 |
<!-- <p><img alt="image.png" src="/assets/images/cp_memoryusage.svg" /></p> -->
|
1356 |
|
1357 |
+
<p>The core idea of Context Parallelism is to apply a similar idea to the Sequence Parallelism approach (i.e. to split along the sequence length) but to the modules where we already apply Tensor Parallelism. We will thus split these modules along two dimensions, thereby also reducing the effect of sequence length. You will find this approach quite intuitive after all we’ve already covered but... there is a trick to it so stay awake!</p>
|
1358 |
|
1359 |
+
<p>For Context Parallelism, just like for Sequence Parallelism, we’ll split the input along the sequence dimension, but we now apply this splitting along the full model, instead of only the sequence parallel regions of the model as we’ve done previously with Tensor + Sequence Parallelism.</p>
|
1360 |
|
1361 |
<!-- <p><img alt="cp_8Bmemoryusage.svg" src="/assets/images/cp_8Bmemoryusage.svg" /></p>
|
1362 |
-->
|
|
|
1364 |
|
1365 |
<p>There is one important exception though, as we need to pay particular attention to the <strong>Attention blocks</strong> (haha.. pun intended :D). In the attention module each token needs to access key/value pairs from <strong>all</strong> other sequence tokens or, in the case of causal attention, at least attends to each previous token.</p>
|
1366 |
|
1367 |
+
<p>Because Context Parallelism splits the inputs along the sequence dimension across GPUs, the attention module will require full communication between GPUs to exchange the necessary key/value data.</p>
|
1368 |
|
1369 |
<p>That sounds very expensive if we do it naively. Is there a way to do this efficiently and fast? Thankfully there is: a core technique to handle this communication of key/value pairs efficiently is called <em>Ring Attention</em>.</p>
|
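<p>To give a flavour of the communication pattern before diving in, here is one step of such a ring pass (a sketch with illustrative shapes; a process group is assumed to be initialized):</p>
<d-code block language="python">
import torch
import torch.distributed as dist

rank, world = dist.get_rank(), dist.get_world_size()
kv_block = torch.randn(1024, 128, device="cuda")  # this rank's key/value block
kv_recv = torch.empty_like(kv_block)
# send our block to the next rank while receiving from the previous one
ops = [
    dist.P2POp(dist.isend, kv_block, (rank + 1) % world),
    dist.P2POp(dist.irecv, kv_recv, (rank - 1) % world),
]
for req in dist.batch_isend_irecv(ops):
    req.wait()
# local queries can now attend over kv_recv as well; repeating this
# world-1 times lets every rank see every key/value block
</d-code>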
1370 |
|
|
|
1504 |
</div>
|
1505 |
<p>The remaining idle time is indicated in grey and usually called the “bubble”, and the sight of this probably breaks your heart after we spent so much time optimizing throughput.</p>
|
1506 |
|
1507 |
+
<p>We can quantify how efficient a pipeline setup is by looking at how much time we lose because of the bubble. Let’s say <d-math>t_f</d-math> and <d-math>t_b</d-math> are the times for the forward and backward pass, respectively, as measured for one microbatch and one stage of the pipeline (a simple assumption is often <d-math>t_b \approx 2 \times t_f</d-math>, which you can see on the above graph). If we could perfectly parallelize, the ideal total time would be <d-math>t_{id}=t_f + t_b</d-math>. However, we can count on the graph that due to the pipeline bubble there is additional time of <d-math>t_{pb}=(p-1)*(t_f+t_b)</d-math> (where <d-math>p</d-math> is the degree of pipeline parallelism, i.e. the number of GPUs on the above graph), i.e. the time each GPU is waiting while other GPUs are computing.</p>
|
1508 |
|
1509 |
<p>We can compute the ratio of the additional bubble time over the ideal time:
|
1510 |
</p>
|
|
|
1592 |
|
1593 |
<p><img alt="pp_1f1b_interleaved.svg" src="/assets/images/pp_1f1b_interleaved.svg" /></p>
|
1594 |
|
1595 |
+
<div class="figure-legend"><p>An example of interleaved pipeline parallelism for a model with layers distributed across 4 GPUs. Numbers still correspond to the microbatch IDs, but for clarity we've colored the first and the last layers of the model differently to illustrate how layers are spread across GPUs.</p>
|
1596 |
</div>
|
1597 |
|
1598 |
<p>As a consequence we see additional communications happening as the model goes several times through each GPU for the same computation that previously just took one pass. However, each forward and backward pass is divided by a factor of <d-math>v</d-math>, where <d-math>v</d-math> is the number of stages or model chunks per GPU, as we are able to better interleave forward and backward passes. </p>
|
|
|
2016 |
|
2017 |
<h3>Lessons learned on benchmarking</h3>
|
2018 |
|
2019 |
+
<p>Our goal for this book was not only to discuss theory and implementations but to provide actual data points as well. So the plan was simple: let's run every possible distributed configuration for every model and a number of cluster sizes (namely 1-64 nodes of 8xH100s). Even after excluding impossible configurations we still needed to run thousands of experiments. </p>
|
2020 |
|
2021 |
<p>
|
2022 |
On paper this sounds easy enough: we can easily launch big arrays of jobs on our cluster. However, as soon as we launched the first batches of experiments, troubles began:
|
|
|
2316 |
<p><img alt="image.png" src="/assets/images/memorycoalescing4.png" /></p>
|
2317 |
|
2318 |
|
2319 |
+
<p>To improve the performance of our kernel we can change the way the coordinates <code>x</code> and <code>y</code> are calculated to the following: </p>
|
2320 |
|
2321 |
<d-code block language="clike">
|
2322 |
const int x = blockIdx.x * BLOCKSIZE + (threadIdx.x / BLOCKSIZE);
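// consecutive threadIdx.x values now land on the same row x (the division
// floors), so a warp's accesses vary only along the column and can coalesce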
|
|
|
2490 |
|
2491 |
<h3>Mixed Precision Training</h3>
|
2492 |
|
2493 |
+
<p>In various sections of this book, we've talked about lower-precision formats and their impact on the memory requirements for storing activations, parameters and optimizer states. It's now time to dive deeper into the details of these formats and better understand their trade-offs, advantages and limitations.</p>
|
2494 |
|
2495 |
<p>Mixed Precision Training, as the name suggests, involves mixing different precisions when training. The default numerical precision of PyTorch tensors is the single-precision floating-point format, also called FP32 or float32, which means that every number stored takes up 32 bits or 4 bytes. The available bits to represent a number are divided into 3 parts:</p>
|
2496 |
|
|
|
2564 |
<p><img alt="image.png" src="/assets/images/mixedprecision.png" /></p>
|
2565 |
|
2566 |
|
2567 |
+
<p>We can see that float32 spans 80 orders of magnitude and float16 sacrifices a lot of range while bfloat16 maintains the full range. The two float8 formats reduce the range even further: e5m2 can maintain the float16 range while e4m3 has an even smaller range.</p>
|
2568 |
|
2569 |
+
<p>How come some formats are able to maintain the range, and not others? Let’s investigate the resolution by plotting 10,000 points between 1 and 2. Each point will be rounded to the nearest representable number in each format:</p>
|
2570 |
|
2571 |
<p><img alt="image.png" src="/assets/images/mixedprecision_2.png" /></p>
|
2572 |
|
2573 |
<p>We can see here that bfloat16 maintained the range of float32 over float16, but did this at the cost of sacrificing more precision. In the case of float8 the situation is even more dire, as e4m3 can represent only 7 and e5m2 only 3 numbers on the interval 1-2.</p>
|
2574 |
|
2575 |
+
<p>A common metric to measure a format's resolution is epsilon: the first representable number after <d-math>1.00</d-math>. We can see that for the float32 format <d-math>10^{-4}</d-math> is an upper bound (it’s actually <d-math>1.19 \times 10^{-7}</d-math>). For float16 it is <d-math>\sim 10^{-3}</d-math> and for bfloat16 it is 10x higher still.</p>
|
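<p>You can verify these epsilons directly in PyTorch (a quick sketch):</p>
<d-code block language="python">
import torch

# eps: gap between 1.0 and the next representable number in each format
for dtype in (torch.float32, torch.float16, torch.bfloat16):
    print(dtype, torch.finfo(dtype).eps)
# float32 ~1.19e-07, float16 ~9.8e-04, bfloat16 ~7.8e-03
</d-code>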
2576 |
|
2577 |
<p>The idea of mixed precision training is to use some of these lower-precision formats while maintaining the performance of full precision training. </p>
|
2578 |
|