Optimization

Optimum Intel can be used to apply popular compression techniques such as quantization, pruning and knowledge distillation.

Post-training optimization

Post-training compression techniques such as dynamic and static quantization can be easily applied to your model using our INCQuantizer. Note that quantization is currently only supported for CPUs (only CPU backends are available), so we will not be utilizing GPUs / CUDA in the following examples. To apply dynamic quantization on a fine-tuned DistilBERT, we first need to create the corresponding configuration describing the quantization details, as well as the quantizer object used to later apply quantization:

from transformers import AutoModelForQuestionAnswering
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel.neural_compressor import INCQuantizer


model_name = "distilbert-base-cased-distilled-squad"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
# The directory where the quantized model will be saved
save_dir = "quantized_model"
# Load the quantization configuration detailing the quantization we wish to apply
quantization_config = PostTrainingQuantConfig(approach="dynamic")
quantizer = INCQuantizer.from_pretrained(model)
# Apply dynamic quantization and save the resulting model
quantizer.quantize(quantization_config=quantization_config, save_directory=save_dir)
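
Static quantization additionally requires a calibration step to estimate the activation ranges. The following is a minimal sketch, assuming a text-classification checkpoint, the SST-2 subset of GLUE for calibration, and a preprocess_function you define for tokenization; adapt the model, dataset and preprocessing to your own task:

from functools import partial
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel.neural_compressor import INCQuantizer

# Example checkpoint, swap in your own fine-tuned model
model_name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# The directory where the quantized model will be saved
save_dir = "static_quantized_model"

# Illustrative preprocessing function turning raw examples into model inputs
def preprocess_function(examples, tokenizer):
    return tokenizer(examples["sentence"], padding="max_length", max_length=128, truncation=True)

# Static quantization needs a calibration dataset to compute the activation ranges
quantization_config = PostTrainingQuantConfig(approach="static")
quantizer = INCQuantizer.from_pretrained(model)
calibration_dataset = quantizer.get_calibration_dataset(
    "glue",
    dataset_config_name="sst2",
    preprocess_function=partial(preprocess_function, tokenizer=tokenizer),
    num_samples=100,
    dataset_split="train",
)
# Apply static quantization and save the resulting model
quantizer.quantize(
    quantization_config=quantization_config,
    calibration_dataset=calibration_dataset,
    save_directory=save_dir,
)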

During training optimization

The INCTrainer class provides an API to train your model while combining different compression techniques such as knowledge distillation, pruning and quantization. The INCTrainer is very similar to the 🤗 Transformers Trainer and can replace it with minimal changes to your code.

from transformers import TrainingArguments, default_data_collator
-from transformers import Trainer
+from optimum.intel.neural_compressor import INCTrainer
+from neural_compressor import QuantizationAwareTrainingConfig

# Load the quantization configuration detailing the quantization we wish to apply
+quantization_config = QuantizationAwareTrainingConfig()

# The directory where the quantized model will be saved
output_dir = "quantized_model"

-trainer = Trainer(
+trainer = INCTrainer(
    model=model,
+   quantization_config=quantization_config,
    args=TrainingArguments(output_dir, num_train_epochs=3.0),
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
    data_collator=default_data_collator,
)

# Run training, then save the resulting PyTorch checkpoint
trainer.train()
+trainer.save_model()

For pruning, we support snip_momentum (default), snip_momentum_progressive, magnitude, magnitude_progressive, gradient, gradient_progressive, snip, snip_progressive and pattern_lock. You can refer to the pruning details in the Intel Neural Compressor documentation; a configuration sketch is shown after the note below.

Note: At present, neural_compressor only supports pruning linear and conv ops. So if we set a target sparsity of 0.9, it means that the sparsity of the pruned ops will be 0.9, not that the sparsity of the whole model will be 0.9. For example, the embedding ops will not be pruned.
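
As a rough sketch, a pruning configuration can be passed to the INCTrainer in the same way as the quantization configuration above (reusing the training setup from the previous example; the sparsity target and pruning schedule below are illustrative values, tune them for your model):

from neural_compressor import WeightPruningConfig
from optimum.intel.neural_compressor import INCTrainer

# Illustrative pruning configuration
pruning_config = WeightPruningConfig(
    pruning_type="snip_momentum",  # default pruning approach
    start_step=0,                  # step at which pruning starts
    end_step=15,                   # step at which the target sparsity should be reached
    target_sparsity=0.2,           # sparsity applied to the pruned (linear / conv) ops
    pruning_scope="local",
)

trainer = INCTrainer(
    model=model,
    pruning_config=pruning_config,
    args=TrainingArguments(output_dir, num_train_epochs=3.0),
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
    data_collator=default_data_collator,
)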

For distillation, we support knowledge distillation, intermediate layer knowledge distillation and self distillation. You can refer to the distillation details in the Intel Neural Compressor documentation.
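
As an illustrative sketch, a knowledge distillation configuration can also be passed to the INCTrainer, assuming you have already loaded a fine-tuned teacher model (the teacher checkpoint below is just an example) and reusing the training setup from above:

from neural_compressor import DistillationConfig
from optimum.intel.neural_compressor import INCTrainer
from transformers import AutoModelForQuestionAnswering

# Example teacher checkpoint used for illustration
teacher_model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")

# Distillation configuration wrapping the teacher model
distillation_config = DistillationConfig(teacher_model=teacher_model)

trainer = INCTrainer(
    model=model,
    distillation_config=distillation_config,
    args=TrainingArguments(output_dir, num_train_epochs=3.0),
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
    data_collator=default_data_collator,
)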

Loading a quantized model

To load a quantized model hosted locally or on the 🤗 hub, you must instantiate your model using our INCModelForXxx classes.

from optimum.intel.neural_compressor import INCModelForSequenceClassification

model_name = "Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic"
model = INCModelForSequenceClassification.from_pretrained(model_name)

Many more quantized models are hosted on the hub under the Intel organization.

Inference with Transformers pipeline

The quantized model can then easily be used to run inference with the Transformers pipeline API.

from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained(model_name)
pipe_cls = pipeline("text-classification", model=model, tokenizer=tokenizer)
text = "He's a dreadful magician."
outputs = pipe_cls(text)

[{'label': 'NEGATIVE', 'score': 0.9880216121673584}]

Check out the examples directory for more sophisticated usage.