The LeViT model was proposed in LeViT: Introducing Convolutions to Vision Transformers by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou and Matthijs Douze. LeViT improves on the Vision Transformer (ViT) in both performance and efficiency through a few architectural changes, such as activation maps with decreasing resolutions in the Transformer and an attention bias that integrates positional information.
The abstract from the paper is the following:
We design a family of image classification architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures, which are competitive on highly parallel processing hardware. We revisit principles from the extensive literature on convolutional neural networks to apply them to transformers, in particular activation maps with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information in vision transformers. As a result, we propose LeViT: a hybrid neural network for fast inference image classification. We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU.
LeViT Architecture. Taken from the original paper.
This model was contributed by anugunj. The original code can be found here.
class transformers.LevitConfig( image_size = 224, num_channels = 3, kernel_size = 3, stride = 2, padding = 1, patch_size = 16, hidden_sizes = [128, 256, 384], num_attention_heads = [4, 8, 12], depths = [4, 4, 4], key_dim = [16, 16, 16], drop_path_rate = 0, mlp_ratio = [2, 2, 2], attention_ratio = [2, 2, 2], initializer_range = 0.02, **kwargs )
Parameters
image_size (int, optional, defaults to 224) —
The size of the input image.
num_channels (int, optional, defaults to 3) —
Number of channels in the input image.
kernel_size (int, optional, defaults to 3) —
The kernel size for the initial convolution layers of patch embedding.
stride (int, optional, defaults to 2) —
The stride size for the initial convolution layers of patch embedding.
padding (int, optional, defaults to 1) —
The padding size for the initial convolution layers of patch embedding.
patch_size (int, optional, defaults to 16) —
The patch size for embeddings.
hidden_sizes (List[int], optional, defaults to [128, 256, 384]) —
Dimension of each of the encoder blocks.
num_attention_heads (List[int], optional, defaults to [4, 8, 12]) —
Number of attention heads for each attention layer in each block of the Transformer encoder.
depths (List[int], optional, defaults to [4, 4, 4]) —
The number of layers in each encoder block.
key_dim (List[int], optional, defaults to [16, 16, 16]) —
The size of the key in each of the encoder blocks.
drop_path_rate (int, optional, defaults to 0) —
The dropout probability for stochastic depth, used in the blocks of the Transformer encoder.
mlp_ratio (List[int], optional, defaults to [2, 2, 2]) —
Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the
encoder blocks.
attention_ratio (List[int], optional, defaults to [2, 2, 2]) —
Ratio of the size of the output dimension compared to the input dimension of attention layers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
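The stage-wise lists above (hidden_sizes, num_attention_heads, depths, key_dim, mlp_ratio, attention_ratio) are expected to hold one entry per stage. As a minimal sketch of a customized three-stage configuration (the values here are illustrative, not an official checkpoint):

>>> from transformers import LevitConfig, LevitModel

>>> # hypothetical wider variant; every stage-wise list gets one entry per stage
>>> configuration = LevitConfig(
...     hidden_sizes=[192, 288, 384],
...     num_attention_heads=[3, 5, 6],
...     depths=[4, 4, 4],
...     key_dim=[32, 32, 32],
...     drop_path_rate=0.1,
... )
>>> model = LevitModel(configuration)  # randomly initialized weights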
This is the configuration class to store the configuration of a LevitModel. It is used to instantiate a LeViT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the LeViT facebook/levit-128S architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
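For example, as with any PretrainedConfig, a LeViT configuration can be saved to disk and reloaded (a minimal sketch; the directory name is arbitrary):

>>> from transformers import LevitConfig

>>> config = LevitConfig()
>>> config.save_pretrained("./levit-128S-config")  # writes config.json to the directory
>>> reloaded = LevitConfig.from_pretrained("./levit-128S-config")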
Example:
>>> from transformers import LevitConfig, LevitModel
>>> # Initializing a LeViT levit-128S style configuration
>>> configuration = LevitConfig()
>>> # Initializing a model (with random weights) from the levit-128S style configuration
>>> model = LevitModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config

class transformers.LevitFeatureExtractor( do_resize = True, size = 224, resample = PIL.Image.Resampling.BICUBIC, do_center_crop = True, do_normalize = True, image_mean = [0.485, 0.456, 0.406], image_std = [0.229, 0.224, 0.225], **kwargs )
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the shortest edge of the input to int(256/224 * size).
size (int or Tuple(int), optional, defaults to 224) —
Resize the input to the given size. If a tuple is provided, it should be (width, height). If only an
integer is provided, then the shorter side of the input will be resized to size.
resample (int, optional, defaults to PIL.Image.Resampling.BICUBIC) —
An optional resampling filter. This can be one of PIL.Image.Resampling.NEAREST,
PIL.Image.Resampling.BOX, PIL.Image.Resampling.BILINEAR, PIL.Image.Resampling.HAMMING,
PIL.Image.Resampling.BICUBIC or PIL.Image.Resampling.LANCZOS. Only has an effect if do_resize is set
to True.
do_center_crop (bool, optional, defaults to True) —
Whether or not to center crop the input to size.
do_normalize (bool, optional, defaults to True) —
Whether or not to normalize the input with mean and standard deviation.
image_mean (List[float], optional, defaults to [0.485, 0.456, 0.406]) —
The sequence of means for each channel, to be used when normalizing images.
image_std (List[float], optional, defaults to [0.229, 0.224, 0.225]) —
The sequence of standard deviations for each channel, to be used when normalizing images.
Constructs a LeViT feature extractor.
This feature extractor inherits from FeatureExtractionMixin which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
__call__( images: typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]], return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None, **kwargs ) → BatchFeature
Parameters
images (PIL.Image.Image, np.ndarray, torch.Tensor, List[PIL.Image.Image], List[np.ndarray], List[torch.Tensor]) —
The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is the
number of channels, and H and W are the image height and width.
return_tensors (str or TensorType, optional, defaults to 'np') —
If set, will return tensors of a particular framework. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return NumPy np.ndarray objects.
'jax': Return JAX jnp.ndarray objects.

Returns

A BatchFeature with the following fields:
pixel_values — Pixel values to be fed to a model.
Main method to prepare for the model one or several image(s).
NumPy arrays and PyTorch tensors are converted to PIL images when resizing, so it is most efficient to pass PIL images.
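A minimal preprocessing sketch under the defaults above (the dummy image is a stand-in for real data; with size = 224 the shorter edge is resized to int(256/224 * 224) = 256, then center cropped to 224 and normalized):

>>> from PIL import Image
>>> import numpy as np
>>> from transformers import LevitFeatureExtractor

>>> feature_extractor = LevitFeatureExtractor.from_pretrained("facebook/levit-128S")

>>> # dummy RGB image; real PIL images go through the same pipeline
>>> image = Image.fromarray(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8))

>>> inputs = feature_extractor(image, return_tensors="pt")
>>> list(inputs["pixel_values"].shape)
[1, 3, 224, 224]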
class transformers.LevitModel( config )

Parameters

config (LevitConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare Levit model outputting raw features without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward( pixel_values: FloatTensor = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoFeatureExtractor. See
AutoFeatureExtractor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LevitConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The LevitModel forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
>>> from transformers import LevitFeatureExtractor, LevitModel
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = LevitFeatureExtractor.from_pretrained("facebook/levit-128S")
>>> model = LevitModel.from_pretrained("facebook/levit-128S")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 16, 384]
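The hidden states and pooled output described under the return values can be requested from the same call; a short continuation of the example (the exact shapes depend on the checkpoint):

>>> with torch.no_grad():
...     outputs = model(**inputs, output_hidden_states=True)
>>> pooled = outputs.pooler_output  # last hidden state pooled over the spatial dimensions
>>> hidden_shapes = [tuple(h.shape) for h in outputs.hidden_states]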
class transformers.LevitForImageClassification( config )

Parameters

config (LevitConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Levit Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet.

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward( pixel_values: FloatTensor = None, labels: typing.Optional[torch.LongTensor] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoFeatureExtractor. See
AutoFeatureExtractor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LevitConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also
called feature maps) of the model at the output of each stage.

The LevitForImageClassification forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
>>> from transformers import LevitFeatureExtractor, LevitForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = LevitFeatureExtractor.from_pretrained("facebook/levit-128S")
>>> model = LevitForImageClassification.from_pretrained("facebook/levit-128S")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
tabby, tabby cat
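When the labels argument documented above is supplied, the same forward call also returns a loss, e.g. for fine-tuning; a short continuation of the example (the label index is arbitrary):

>>> labels = torch.tensor([281])  # arbitrary ImageNet class index
>>> outputs = model(**inputs, labels=labels)
>>> loss, logits = outputs.loss, outputs.logits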
class transformers.LevitForImageClassificationWithTeacher( config )

Parameters

config (LevitConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

LeViT Model transformer with image classification heads on top (a linear layer on top of the final hidden state and a linear layer on top of the final hidden state of the distillation token), e.g. for ImageNet.

Warning: this model supports inference only. Fine-tuning with distillation (i.e. with a teacher) is not yet supported.

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward( pixel_values: FloatTensor = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.models.levit.modeling_levit.LevitForImageClassificationWithTeacherOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoFeatureExtractor. See
AutoFeatureExtractor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.levit.modeling_levit.LevitForImageClassificationWithTeacherOutput or tuple(torch.FloatTensor)
A transformers.models.levit.modeling_levit.LevitForImageClassificationWithTeacherOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LevitConfig) and inputs.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Prediction scores as the average of the cls_logits and distillation_logits.
cls_logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Prediction scores of the classification head (i.e. the linear layer on top of the final hidden state of the
class token).
distillation_logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Prediction scores of the distillation head (i.e. the linear layer on top of the final hidden state of the
distillation token).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.

The LevitForImageClassificationWithTeacher forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
>>> from transformers import LevitFeatureExtractor, LevitForImageClassificationWithTeacher
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = LevitFeatureExtractor.from_pretrained("facebook/levit-128S")
>>> model = LevitForImageClassificationWithTeacher.from_pretrained("facebook/levit-128S")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
tabby, tabby cat
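The two heads described under the return values can also be read out individually; a short continuation of the example:

>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> cls_logits = outputs.cls_logits  # classification head
>>> distillation_logits = outputs.distillation_logits  # distillation head
>>> # the reported logits are the average of the two heads
>>> assert torch.allclose(outputs.logits, (cls_logits + distillation_logits) / 2)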