The optimum.neuron.distributed module provides a set of tools to perform distributed training and inference.
Each model that supports parallelization in optimum-neuron has its own Parallelizer subclass. The factory class ParallelizersManager lets you easily retrieve the Parallelizer specific to a given model.
Provides the list of supported model types for parallelization.
( model_type_or_model: typing.Union[str, transformers.modeling_utils.PreTrainedModel, optimum.neuron.distributed.utils.NeuronPeftModel] )
Returns a tuple of 3 booleans where:
( model_type_or_model: typing.Union[str, transformers.modeling_utils.PreTrainedModel, optimum.neuron.distributed.utils.NeuronPeftModel] )
Returns the Parallelizer class associated with the model.
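The factory pattern described above can be sketched as follows. This is an illustrative mock, not optimum-neuron's actual implementation: the registry contents and the `LlamaParallelizer` class here are hypothetical stand-ins.

```python
# Minimal sketch of a ParallelizersManager-style factory (illustrative only;
# the real optimum-neuron registry and Parallelizer classes are more involved).

class Parallelizer:
    """Base class: knows how to parallelize one model architecture."""

class LlamaParallelizer(Parallelizer):
    """Hypothetical model-specific Parallelizer subclass."""

class ParallelizersManager:
    # Maps a model type string to its Parallelizer subclass.
    _registry = {"llama": LlamaParallelizer}

    @classmethod
    def get_supported_model_types(cls):
        """Provides the list of supported model types for parallelization."""
        return list(cls._registry)

    @classmethod
    def parallelizer_for_model(cls, model_type_or_model):
        """Accepts either a model-type string or a model exposing config.model_type."""
        model_type = (
            model_type_or_model
            if isinstance(model_type_or_model, str)
            else model_type_or_model.config.model_type
        )
        if model_type not in cls._registry:
            raise NotImplementedError(f"No parallelizer registered for {model_type!r}")
        return cls._registry[model_type]

print(ParallelizersManager.get_supported_model_types())           # -> ['llama']
print(ParallelizersManager.parallelizer_for_model("llama").__name__)  # -> LlamaParallelizer
```

The real manager resolves the model type the same way, so you can pass either the type string or the loaded model itself.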
Distributed training / inference is usually needed when the model is too big to fit on a single device. Tools that lazily load optimizer states are therefore needed to avoid running out of memory before parallelization.
( optimizer_cls: typing.Type[ForwardRef('torch.optim.Optimizer')] )
Transforms an optimizer constructor (optimizer class) into a lazy one that does not bind the model parameters at creation time. This keeps the optimizer lightweight and lets it be used to create the “real” optimizer once the model has been parallelized.
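A minimal sketch of the idea behind such a lazy transform, assuming a torch.optim-style constructor that takes the parameter list first. The `DummyOptimizer`, `make_lazy`, and `materialize` names are illustrative, not optimum-neuron's actual API:

```python
import functools

class DummyOptimizer:
    """Stand-in for a torch.optim.Optimizer: eagerly stores its parameters."""
    def __init__(self, params, lr=0.01):
        self.params = list(params)
        self.lr = lr

def make_lazy(optimizer_cls):
    """Wrap an optimizer class so construction records hyperparameters but
    defers binding the (potentially huge) parameter list until later."""
    class LazyOptimizer:
        def __init__(self, **kwargs):
            # Do NOT touch parameters yet; just remember how to build.
            self._build = functools.partial(optimizer_cls, **kwargs)

        def materialize(self, params):
            # Called after parallelization, with the sharded parameters.
            return self._build(params)
    return LazyOptimizer

lazy = make_lazy(DummyOptimizer)(lr=0.1)  # cheap: no parameters bound yet
opt = lazy.materialize([1.0, 2.0])        # "real" optimizer, built post-sharding
```

The point is that the expensive step (binding parameters and allocating optimizer state) only happens once the parallelized, sharded parameters exist.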