arXiv:2506.15461

All is Not Lost: LLM Recovery without Checkpoints

Published on Jun 18 · Submitted by benfielding on Jun 19
#3 Paper of the day
AI-generated summary

A novel method, CheckFree, and its extended version CheckFree+, efficiently recover from node failures during LLM training by substituting failed stages with averaged neighboring stages or through out-of-order pipeline execution, improving convergence time over existing checkpointing methods.

Abstract

Training LLMs on decentralized and wimpy computation nodes, e.g., multiple spot instances, lowers the training cost and enables model democratization. The inevitable challenge here is the churn of nodes due to failures and the operator's scheduling policies, leading to losing a stage, a part of the model. The conventional approaches to recover from failures are to either use checkpointing, where a copy of the entire model is periodically sent to additional storage, or redundant computation. These approaches yield significant communication and/or computation overhead even in non-failure cases and scale poorly in settings with large models. In this paper, we propose CheckFree, an efficient recovery method where a failing stage is substituted by a weighted average of the closest neighbouring stages. In contrast to the state of the art, CheckFree requires no additional computation or storage. However, because of the nature of averaging neighbouring stages, it can only recover failures of intermediate stages. We further extend our method to CheckFree+ with out-of-order pipeline execution to tolerate crashes of the first and last stages. Thanks to out-of-order pipelining, the behaviour of those stages is mimicked by their neighbouring ones, which allows CheckFree+ to recover them by simply copying the weights from the immediate neighbour. To be able to recover the (de)embedding layers, CheckFree+ copies those layers to the neighbouring stages, which requires relatively small storage overhead. We extensively evaluate our method on LLaMa models with sizes from 124M to 1.5B parameters and varying failure frequencies. In the case of low and medium failure rates (5-10%), CheckFree and CheckFree+ outperform both checkpointing and redundant computation in terms of convergence in wall-clock time by over 12%. Both of our proposals can be run via our code available at: https://github.com/gensyn-ai/CheckFree.
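As a rough illustration of the recovery rule above: a lost intermediate stage is rebuilt by averaging the parameters of the stages on either side of it. The sketch below is a minimal PyTorch-style version, assuming each stage exposes its weights as a state_dict with identical layer structure across stages; the function name and the equal 0.5/0.5 weighting are placeholders, not the paper's exact weighting scheme or the released code's API.

# Minimal sketch: replace a failed intermediate stage with a weighted
# average of its two neighbouring stages (illustrative names only).
import torch

def average_neighbor_stages(prev_state, next_state, prev_weight=0.5):
    """Rebuild a failed intermediate stage from its two neighbours."""
    next_weight = 1.0 - prev_weight
    return {
        name: prev_weight * prev_state[name] + next_weight * next_state[name]
        for name in prev_state
    }

# Tiny demo: stage i is lost and replaced by the average of stages i-1 and i+1.
prev = {"ffn.weight": torch.ones(4, 4), "ffn.bias": torch.zeros(4)}
nxt = {"ffn.weight": 3 * torch.ones(4, 4), "ffn.bias": torch.zeros(4)}
recovered = average_neighbor_stages(prev, nxt)
print(recovered["ffn.weight"][0, 0].item())  # 2.0 with equal weighting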

Community

CheckFree is a fault-tolerant method for decentralised training, with no checkpoints or redundant compute.

Up to 1.6x faster than existing methods, with no convergence loss.

Why it matters
Fault tolerance is critical in decentralised training, as nodes are unreliable and prone to failure. Recent works have proposed various recovery methods, though they still require redundant computation or checkpointing, adding time and compute.

How it works
CheckFree instead recovers the failed stage with the average weights of its neighbouring stages. This provides an efficient way to approximate the lost weights, with minimal effect on convergence.
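Averaging only works when the failed stage has neighbours on both sides. For the first and last stages, the abstract describes CheckFree+ copying the weights of the immediate neighbour, whose behaviour those stages already mimic under out-of-order pipelining, and restoring the (de)embedding layers from the small redundant copy kept on a neighbouring stage. A minimal sketch under those assumptions, with hypothetical names rather than the repository's actual interface:

# Hypothetical sketch of the CheckFree+ path for a lost first or last stage:
# copy the immediate neighbour's weights, then restore the (de)embedding
# layers from the small copy replicated to a neighbouring stage.
import copy

def recover_boundary_stage(neighbor_state, embedding_backup):
    """neighbor_state: state_dict of the adjacent stage.
    embedding_backup: previously replicated (de)embedding weights."""
    recovered = copy.deepcopy(neighbor_state)           # neighbour mimics this stage
    recovered.update(copy.deepcopy(embedding_backup))   # put back (de)embedding layers
    return recovered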

This unlocks:
– Up to 1.6x faster training time than conventional checkpointing
– Up to 1.2x faster than using redundant compute
– No additional memory or compute required

You can read more in our article and the arXiv paper, and re-run our experiments with the open-source code.
