Models
The base classes [PreTrainedModel], [TFPreTrainedModel], and
[FlaxPreTrainedModel] implement the common methods for loading/saving a model either from a local
file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS
S3 repository).
[PreTrainedModel] and [TFPreTrainedModel] also implement a few methods which
are common among all the models to:
- resize the input token embeddings when new tokens are added to the vocabulary (a short sketch follows below)
- prune the attention heads of the model.
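Here is a minimal sketch of the first operation, resizing the token embeddings after extending a tokenizer's vocabulary. The gpt2 checkpoint and the added tokens are only examples; any model/tokenizer pair works the same way.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Add new tokens to the tokenizer, then grow the model's embedding matrix to match
num_added = tokenizer.add_tokens(["<new_token_1>", "<new_token_2>"])
model.resize_token_embeddings(len(tokenizer))
```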
The other methods that are common to each model are defined in [~modeling_utils.ModuleUtilsMixin]
(for the PyTorch models) and [~modeling_tf_utils.TFModuleUtilsMixin] (for the TensorFlow models) or
for text generation, [~generation.GenerationMixin] (for the PyTorch models),
[~generation.TFGenerationMixin] (for the TensorFlow models) and
[~generation.FlaxGenerationMixin] (for the Flax/JAX models).
PreTrainedModel
[[autodoc]] PreTrainedModel
- push_to_hub
- all
Large model loading
In Transformers 4.20.0, the [~PreTrainedModel.from_pretrained] method has been reworked to accommodate large models using Accelerate. This requires Accelerate >= 0.9.0 and PyTorch >= 1.9.0. Instead of creating the full model, then loading the pretrained weights inside it (which takes twice the size of the model in RAM, one for the randomly initialized model, one for the weights), there is an option to create the model as an empty shell, then only materialize its parameters when the pretrained weights are loaded.
This option can be activated with low_cpu_mem_usage=True. The model is first created on the Meta device (with empty weights) and the state dict is then loaded inside it (shard by shard in the case of a sharded checkpoint). This way the maximum RAM used is the full size of the model only.
```python
from transformers import AutoModelForSeq2SeqLM

t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", low_cpu_mem_usage=True)
```
Moreover, you can directly place the model on different devices if it doesn't fully fit in RAM (only works for inference for now). With device_map="auto", Accelerate will determine where to put each layer to maximize the use of your fastest devices (GPUs) and offload the rest on the CPU, or even the hard drive if you don't have enough GPU RAM (or CPU RAM). Even if the model is split across several devices, it will run as you would normally expect.
When passing a device_map, low_cpu_mem_usage is automatically set to True, so you don't need to specify it:
```python
from transformers import AutoModelForSeq2SeqLM

t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto")
```
You can inspect how the model was split across devices by looking at its hf_device_map attribute:
```py
t0pp.hf_device_map
```

```python out
{'shared': 0,
'decoder.embed_tokens': 0,
'encoder': 0,
'decoder.block.0': 0,
'decoder.block.1': 1,
'decoder.block.2': 1,
'decoder.block.3': 1,
'decoder.block.4': 1,
'decoder.block.5': 1,
'decoder.block.6': 1,
'decoder.block.7': 1,
'decoder.block.8': 1,
'decoder.block.9': 1,
'decoder.block.10': 1,
'decoder.block.11': 1,
'decoder.block.12': 1,
'decoder.block.13': 1,
'decoder.block.14': 1,
'decoder.block.15': 1,
'decoder.block.16': 1,
'decoder.block.17': 1,
'decoder.block.18': 1,
'decoder.block.19': 1,
'decoder.block.20': 1,
'decoder.block.21': 1,
'decoder.block.22': 'cpu',
'decoder.block.23': 'cpu',
'decoder.final_layer_norm': 'cpu',
'decoder.dropout': 'cpu',
 'lm_head': 'cpu'}
```
You can also write your own device map following the same format (a dictionary layer name to device). It should map all parameters of the model to a given device, but you don't have to detail where all the submodules of one layer go if that layer is entirely on the same device. For instance, the following device map would work properly for T0pp (as long as you have the GPU memory):
```python
device_map = {"shared": 0, "encoder": 0, "decoder": 1, "lm_head": 1}
```
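A minimal sketch of how such a map would be passed (assuming you have two GPUs with enough memory, and reusing the T0pp checkpoint from above):

```python
from transformers import AutoModelForSeq2SeqLM

device_map = {"shared": 0, "encoder": 0, "decoder": 1, "lm_head": 1}
t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map=device_map)
```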
Another way to minimize the memory impact of your model is to instantiate it at a lower precision dtype (like torch.float16) or use direct quantization techniques as described below.
Model Instantiation dtype
Under PyTorch a model normally gets instantiated with torch.float32 format. This can be an issue if one tries to
load a model whose weights are in fp16, since it'd require twice as much memory. To overcome this limitation, you can
either explicitly pass the desired dtype using the torch_dtype argument:

```python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16)
```
or, if you want the model to always load in the most optimal memory pattern, you can use the special value "auto",
and then dtype will be automatically derived from the model's weights:
```python
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto")
```
Models instantiated from scratch can also be told which dtype to use with:
```python
config = T5Config.from_pretrained("t5")
model = AutoModel.from_config(config)
```
Due to PyTorch design, this functionality is only available for floating dtypes.
ModuleUtilsMixin
[[autodoc]] modeling_utils.ModuleUtilsMixin
TFPreTrainedModel
[[autodoc]] TFPreTrainedModel
- push_to_hub
- all
TFModelUtilsMixin
[[autodoc]] modeling_tf_utils.TFModelUtilsMixin
FlaxPreTrainedModel
[[autodoc]] FlaxPreTrainedModel
- push_to_hub
- all
Pushing to the Hub
[[autodoc]] utils.PushToHubMixin
Sharded checkpoints
[[autodoc]] modeling_utils.load_sharded_checkpoint
Perplexity of fixed-length models
[[open-in-colab]]
Perplexity (PPL) is one of the most common metrics for evaluating language models. Before diving in, we should note
that the metric applies specifically to classical language models (sometimes called autoregressive or causal language
models) and is not well defined for masked language models like BERT (see summary of the models).
Perplexity is defined as the exponentiated average negative log-likelihood of a sequence. If we have a tokenized
sequence \(X = (x_0, x_1, \dots, x_t)\), then the perplexity of \(X\) is,
$$\text{PPL}(X) = \exp \left\{ {-\frac{1}{t}\sum_i^t \log p_\theta (x_i|x_{<i}) } \right\}$$
where \(\log p_\theta (x_i|x_{<i})\) is the log-likelihood of the ith token conditioned on the preceding tokens \(x_{<i}\) according to our model. Intuitively, it can be thought of as an evaluation of the model's ability to predict uniformly among the set of specified tokens in a corpus. Importantly, this means that the tokenization procedure has a direct impact on a model's perplexity which should always be taken into consideration when comparing different models.
This is also equivalent to the exponentiation of the cross-entropy between the data and model predictions. For more
intuition about perplexity and its relationship to Bits Per Character (BPC) and data compression, check out this
fantastic blog post on The Gradient.
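To make the definition concrete, here is a toy calculation of perplexity from a handful of made-up per-token log-probabilities (the numbers are illustrative only):

```python
import math

# Hypothetical per-token log-probabilities assigned by a model to a 5-token sequence
log_probs = [-2.1, -0.8, -3.4, -1.2, -0.5]

# Perplexity is the exponential of the average negative log-likelihood
avg_nll = -sum(log_probs) / len(log_probs)
ppl = math.exp(avg_nll)
print(ppl)  # ~4.95
```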
Calculating PPL with fixed-length models
If we weren't limited by a model's context size, we would evaluate the model's perplexity by autoregressively
factorizing a sequence and conditioning on the entire preceding subsequence at each step, as shown below.
When working with approximate models, however, we typically have a constraint on the number of tokens the model can
process. The largest version of GPT-2, for example, has a fixed length of 1024 tokens, so we
cannot calculate \(p_\theta(x_t|x_{<t})\) directly when \(t\) is greater than 1024.
Instead, the sequence is typically broken into subsequences equal to the model's maximum input size. If a model's max
input size is \(k\), we then approximate the likelihood of a token \(x_t\) by conditioning only on the
\(k-1\) tokens that precede it rather than the entire context. When evaluating the model's perplexity of a
sequence, a tempting but suboptimal approach is to break the sequence into disjoint chunks and add up the decomposed
log-likelihoods of each segment independently.
This is quick to compute since the perplexity of each segment can be computed in one forward pass, but serves as a poor
approximation of the fully-factorized perplexity and will typically yield a higher (worse) PPL because the model will
have less context at most of the prediction steps.
Instead, the PPL of fixed-length models should be evaluated with a sliding-window strategy. This involves repeatedly
sliding the context window so that the model has more context when making each prediction.
This is a closer approximation to the true decomposition of the sequence probability and will typically yield a more
favorable score. The downside is that it requires a separate forward pass for each token in the corpus. A good
practical compromise is to employ a strided sliding window, moving the context by larger strides rather than sliding by
1 token at a time. This allows computation to proceed much faster while still giving the model a large context to make
predictions at each step.
Example: Calculating perplexity with GPT-2 in 🤗 Transformers
Let's demonstrate this process with GPT-2.
```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda"
model_id = "gpt2-large"
model = GPT2LMHeadModel.from_pretrained(model_id).to(device)
tokenizer = GPT2TokenizerFast.from_pretrained(model_id)
```
We'll load in the WikiText-2 dataset and evaluate the perplexity using a few different sliding-window strategies. Since
this dataset is small and we're just doing one forward pass over the set, we can just load and encode the entire
dataset in memory.
```python
from datasets import load_dataset

test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt")
```
With 🤗 Transformers, we can simply pass the input_ids as the labels to our model, and the average negative
log-likelihood for each token is returned as the loss. With our sliding window approach, however, there is overlap in
the tokens we pass to the model at each iteration. We don't want the log-likelihood for the tokens we're just treating
as context to be included in our loss, so we can set these targets to -100 so that they are ignored. The following
is an example of how we could do this with a stride of 512. This means that the model will have at least 512 tokens
for context when calculating the conditional likelihood of any one token (provided there are 512 preceding tokens
available to condition on).
```python
import torch
from tqdm import tqdm

max_length = model.config.n_positions
stride = 512
seq_len = encodings.input_ids.size(1)

nlls = []
prev_end_loc = 0
for begin_loc in tqdm(range(0, seq_len, stride)):
    end_loc = min(begin_loc + max_length, seq_len)
    trg_len = end_loc - prev_end_loc  # may be different from stride on last loop
    input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100

    with torch.no_grad():
        outputs = model(input_ids, labels=target_ids)

        # loss is calculated using CrossEntropyLoss which averages over valid labels
        # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels
        # to the left by 1.
        neg_log_likelihood = outputs.loss

    nlls.append(neg_log_likelihood)
    prev_end_loc = end_loc
    if end_loc == seq_len:
        break

ppl = torch.exp(torch.stack(nlls).mean())
```
Running this with the stride length equal to the max input length is equivalent to the suboptimal, non-sliding-window
strategy we discussed above. The smaller the stride, the more context the model will have in making each prediction,
and the better the reported perplexity will typically be.
When we run the above with stride = 1024, i.e. no overlap, the resulting PPL is 19.44, which is about the same
as the 19.93 reported in the GPT-2 paper. By using stride = 512 and thereby employing our striding window
strategy, this jumps down to 16.45. This is not only a more favorable score, but is calculated in a way that is
closer to the true autoregressive decomposition of a sequence likelihood.
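If you want to reproduce this comparison yourself, one option is to wrap the loop above in a small helper parameterized by the stride. This is only a sketch and assumes the model, device, and encodings defined earlier in this example:

```python
def compute_ppl(stride, max_length=model.config.n_positions):
    seq_len = encodings.input_ids.size(1)
    nlls = []
    prev_end_loc = 0
    for begin_loc in range(0, seq_len, stride):
        end_loc = min(begin_loc + max_length, seq_len)
        trg_len = end_loc - prev_end_loc
        input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)
        target_ids = input_ids.clone()
        target_ids[:, :-trg_len] = -100  # context-only tokens are ignored in the loss
        with torch.no_grad():
            nlls.append(model(input_ids, labels=target_ids).loss)
        prev_end_loc = end_loc
        if end_loc == seq_len:
            break
    return torch.exp(torch.stack(nlls).mean())

# A larger stride means less overlap: faster, but less context for each prediction
print(compute_ppl(stride=1024), compute_ppl(stride=512))
```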
Before you begin, make sure you have all the necessary libraries installed:
pip install -q datasets transformers evaluate
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
from huggingface_hub import notebook_login
notebook_login()
Load SceneParse150 dataset
Start by loading a smaller subset of the SceneParse150 dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
from datasets import load_dataset
ds = load_dataset("scene_parse_150", split="train[:50]")
Split the dataset's train split into a train and test set with the [~datasets.Dataset.train_test_split] method:
ds = ds.train_test_split(test_size=0.2)
train_ds = ds["train"]
test_ds = ds["test"]
Then take a look at an example:
train_ds[0]
{'image': ,
'annotation': ,
'scene_category': 368}
image: a PIL image of the scene.
annotation: a PIL image of the segmentation map, which is also the model's target.
scene_category: a category id that describes the image scene like "kitchen" or "office". In this guide, you'll only need image and annotation, both of which are PIL images.
You'll also want to create a dictionary that maps a label id to a label class which will be useful when you set up the model later. Download the mappings from the Hub and create the id2label and label2id dictionaries:
import json
from huggingface_hub import cached_download, hf_hub_url
repo_id = "huggingface/label-files"
filename = "ade20k-id2label.json"
id2label = json.load(open(cached_download(hf_hub_url(repo_id, filename, repo_type="dataset")), "r"))
id2label = {int(k): v for k, v in id2label.items()}
label2id = {v: k for k, v in id2label.items()}
num_labels = len(id2label)
Preprocess
The next step is to load a SegFormer image processor to prepare the images and annotations for the model. Some datasets, like this one, use the zero-index as the background class. However, the background class isn't actually included in the 150 classes, so you'll need to set reduce_labels=True to subtract one from all the labels. The zero-index is replaced by 255 so it's ignored by SegFormer's loss function:
from transformers import AutoImageProcessor
checkpoint = "nvidia/mit-b0"
image_processor = AutoImageProcessor.from_pretrained(checkpoint, reduce_labels=True)
It is common to apply some data augmentations to an image dataset to make a model more robust against overfitting. In this guide, you'll use the ColorJitter function from torchvision to randomly change the color properties of an image, but you can also use any image library you like.
from torchvision.transforms import ColorJitter
jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1)
Now create two preprocessing functions to prepare the images and annotations for the model. These functions convert the images into pixel_values and annotations to labels. For the training set, jitter is applied before providing the images to the image processor. For the test set, the image processor crops and normalizes the images, and only crops the labels because no data augmentation is applied during testing.
def train_transforms(example_batch):
    images = [jitter(x) for x in example_batch["image"]]
    labels = [x for x in example_batch["annotation"]]
    inputs = image_processor(images, labels)
    return inputs


def val_transforms(example_batch):
    images = [x for x in example_batch["image"]]
    labels = [x for x in example_batch["annotation"]]
    inputs = image_processor(images, labels)
    return inputs
To apply the jitter over the entire dataset, use the 🤗 Datasets [~datasets.Dataset.set_transform] function. The transform is applied on the fly which is faster and consumes less disk space:
train_ds.set_transform(train_transforms)
test_ds.set_transform(val_transforms)
It is common to apply some data augmentations to an image dataset to make a model more robust against overfitting.
In this guide, you'll use tf.image to randomly change the color properties of an image, but you can also use any image
library you like.
Define two separate transformation functions:
- training data transformations that include image augmentation
- validation data transformations that only transpose the images, since computer vision models in 🤗 Transformers expect channels-first layout
import tensorflow as tf
def aug_transforms(image):
    image = tf.keras.utils.img_to_array(image)
    image = tf.image.random_brightness(image, 0.25)
    image = tf.image.random_contrast(image, 0.5, 2.0)
    image = tf.image.random_saturation(image, 0.75, 1.25)
    image = tf.image.random_hue(image, 0.1)
    image = tf.transpose(image, (2, 0, 1))
    return image


def transforms(image):
    image = tf.keras.utils.img_to_array(image)
    image = tf.transpose(image, (2, 0, 1))
    return image
Next, create two preprocessing functions to prepare batches of images and annotations for the model. These functions apply
the image transformations and use the earlier loaded image_processor to convert the images into pixel_values and
annotations to labels. ImageProcessor also takes care of resizing and normalizing the images.
def train_transforms(example_batch):
    images = [aug_transforms(x.convert("RGB")) for x in example_batch["image"]]
    labels = [x for x in example_batch["annotation"]]
    inputs = image_processor(images, labels)
    return inputs


def val_transforms(example_batch):
    images = [transforms(x.convert("RGB")) for x in example_batch["image"]]
    labels = [x for x in example_batch["annotation"]]
    inputs = image_processor(images, labels)
    return inputs
To apply the preprocessing transformations over the entire dataset, use the 🤗 Datasets [~datasets.Dataset.set_transform] function.
The transform is applied on the fly which is faster and consumes less disk space:
train_ds.set_transform(train_transforms)
test_ds.set_transform(val_transforms)
Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 Evaluate library. For this task, load the mean Intersection over Union (IoU) metric (see the 🤗 Evaluate quick tour to learn more about how to load and compute a metric):
import evaluate
metric = evaluate.load("mean_iou")
Then create a function to [~evaluate.EvaluationModule.compute] the metrics. Your predictions need to be converted to
logits first, and then reshaped to match the size of the labels before you can call [~evaluate.EvaluationModule.compute]:
import numpy as np
import torch
from torch import nn

def compute_metrics(eval_pred):
    with torch.no_grad():
        logits, labels = eval_pred
        logits_tensor = torch.from_numpy(logits)
        logits_tensor = nn.functional.interpolate(
            logits_tensor,
            size=labels.shape[-2:],
            mode="bilinear",
            align_corners=False,
        ).argmax(dim=1)

        pred_labels = logits_tensor.detach().cpu().numpy()
        metrics = metric.compute(
            predictions=pred_labels,
            references=labels,
            num_labels=num_labels,
            ignore_index=255,
            reduce_labels=False,
        )
        for key, value in metrics.items():
            if type(value) is np.ndarray:
                metrics[key] = value.tolist()
        return metrics
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    logits = tf.transpose(logits, perm=[0, 2, 3, 1])
    logits_resized = tf.image.resize(
        logits,
        size=tf.shape(labels)[1:],
        method="bilinear",
    )
    pred_labels = tf.argmax(logits_resized, axis=-1)
    metrics = metric.compute(
        predictions=pred_labels,
        references=labels,
        num_labels=num_labels,
        ignore_index=-1,
        reduce_labels=image_processor.do_reduce_labels,
    )

    per_category_accuracy = metrics.pop("per_category_accuracy").tolist()
    per_category_iou = metrics.pop("per_category_iou").tolist()

    metrics.update({f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)})
    metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)})
    return {"val_" + k: v for k, v in metrics.items()}
Your compute_metrics function is ready to go now, and you'll return to it when you set up your training.
Train
If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here!
You're ready to start training your model now! Load SegFormer with [AutoModelForSemanticSegmentation], and pass the model the mapping between label ids and label classes:
from transformers import AutoModelForSemanticSegmentation, TrainingArguments, Trainer
model = AutoModelForSemanticSegmentation.from_pretrained(checkpoint, id2label=id2label, label2id=label2id)
At this point, only three steps remain:
1. Define your training hyperparameters in [TrainingArguments]. It is important you don't remove unused columns because this'll drop the image column. Without the image column, you can't create pixel_values. Set remove_unused_columns=False to prevent this behavior! The only other required parameter is output_dir which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer] will evaluate the IoU metric and save the training checkpoint.
2. Pass the training arguments to [Trainer] along with the model, dataset, tokenizer, data collator, and compute_metrics function.
3. Call [~Trainer.train] to finetune your model.
training_args = TrainingArguments(
output_dir="segformer-b0-scene-parse-150",
learning_rate=6e-5,
num_train_epochs=50,
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
save_total_limit=3,
evaluation_strategy="steps",
save_strategy="steps",
save_steps=20,
eval_steps=20,
logging_steps=1,
eval_accumulation_steps=5,
remove_unused_columns=False,
push_to_hub=True,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_ds,
eval_dataset=test_ds,
compute_metrics=compute_metrics,
)
trainer.train()
Once training is completed, share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:
trainer.push_to_hub()
If you are unfamiliar with fine-tuning a model with Keras, check out the basic tutorial first!
To fine-tune a model in TensorFlow, follow these steps:
1. Define the training hyperparameters, and set up an optimizer and a learning rate schedule.
2. Instantiate a pretrained model.
3. Convert a 🤗 Dataset to a tf.data.Dataset.
4. Compile your model.
5. Add callbacks to calculate metrics and upload your model to 🤗 Hub
6. Use the fit() method to run the training.
Start by defining the hyperparameters, optimizer and learning rate schedule:
from transformers import create_optimizer
batch_size = 2
num_epochs = 50
num_train_steps = len(train_ds) * num_epochs
learning_rate = 6e-5
weight_decay_rate = 0.01
optimizer, lr_schedule = create_optimizer(
init_lr=learning_rate,
num_train_steps=num_train_steps,
weight_decay_rate=weight_decay_rate,
num_warmup_steps=0,
)
Then, load SegFormer with [TFAutoModelForSemanticSegmentation] along with the label mappings, and compile it with the
optimizer. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:
from transformers import TFAutoModelForSemanticSegmentation
model = TFAutoModelForSemanticSegmentation.from_pretrained(
checkpoint,
id2label=id2label,
label2id=label2id,
)
model.compile(optimizer=optimizer) # No loss argument!
Convert your datasets to the tf.data.Dataset format using the [~datasets.Dataset.to_tf_dataset] and the [DefaultDataCollator]:
from transformers import DefaultDataCollator
data_collator = DefaultDataCollator(return_tensors="tf")
tf_train_dataset = train_ds.to_tf_dataset(
columns=["pixel_values", "label"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator,
)
tf_eval_dataset = test_ds.to_tf_dataset(
columns=["pixel_values", "label"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator,
)
To compute the accuracy from the predictions and push your model to the 🤗 Hub, use Keras callbacks.
Pass your compute_metrics function to [KerasMetricCallback],
and use the [PushToHubCallback] to upload the model:
from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback
metric_callback = KerasMetricCallback(
metric_fn=compute_metrics, eval_dataset=tf_eval_dataset, batch_size=batch_size, label_cols=["labels"]
)
push_to_hub_callback = PushToHubCallback(output_dir="scene_segmentation", tokenizer=image_processor)
callbacks = [metric_callback, push_to_hub_callback]
Finally, you are ready to train your model! Call fit() with your training and validation datasets, the number of epochs,
and your callbacks to fine-tune the model:
model.fit(
tf_train_dataset,
validation_data=tf_eval_dataset,
callbacks=callbacks,
epochs=num_epochs,
)
Congratulations! You have fine-tuned your model and shared it on the 🤗 Hub. You can now use it for inference!
Inference
Great, now that you've finetuned a model, you can use it for inference!
Load an image for inference:
image = ds[0]["image"]
image
The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a pipeline for image segmentation with your model, and pass your image to it:
from transformers import pipeline
segmenter = pipeline("image-segmentation", model="my_awesome_seg_model")
segmenter(image)
[{'score': None,
'label': 'wall',
'mask': },
{'score': None,
'label': 'sky',
'mask': },
{'score': None,
'label': 'floor',
'mask': },
{'score': None,
'label': 'ceiling',
'mask': },
{'score': None,
'label': 'bed ',
'mask': },
{'score': None,
'label': 'windowpane',
'mask': },
{'score': None,
'label': 'cabinet',
'mask': },
{'score': None,
'label': 'chair',
'mask': },
{'score': None,
'label': 'armchair',
'mask': }]
You can also manually replicate the results of the pipeline if you'd like. Process the image with an image processor and place the pixel_values on a GPU:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # use GPU if available, otherwise use a CPU
encoding = image_processor(image, return_tensors="pt")
pixel_values = encoding.pixel_values.to(device)
Pass your input to the model and return the logits:
outputs = model(pixel_values=pixel_values)
logits = outputs.logits.cpu()
Next, rescale the logits to the original image size:
upsampled_logits = nn.functional.interpolate(
logits,
size=image.size[::-1],
mode="bilinear",
align_corners=False,
)
pred_seg = upsampled_logits.argmax(dim=1)[0]
Load an image processor to preprocess the image and return the input as TensorFlow tensors:
from transformers import AutoImageProcessor
image_processor = AutoImageProcessor.from_pretrained("MariaK/scene_segmentation")
inputs = image_processor(image, return_tensors="tf")
Pass your input to the model and return the logits:
from transformers import TFAutoModelForSemanticSegmentation
model = TFAutoModelForSemanticSegmentation.from_pretrained("MariaK/scene_segmentation")
logits = model(**inputs).logits
Next, rescale the logits to the original image size and apply argmax on the class dimension:
logits = tf.transpose(logits, [0, 2, 3, 1])
upsampled_logits = tf.image.resize(
logits,
# We reverse the shape of image because image.size returns width and height.
image.size[::-1],
)
pred_seg = tf.math.argmax(upsampled_logits, axis=-1)[0]
To visualize the results, load the dataset color palette as ade_palette(), which maps each class to its RGB values.
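If you don't have the ade_palette() helper handy, any deterministic mapping from class id to an RGB color works for a quick look; the stand-in below (not the official ADE20K palette) simply assigns each class a fixed pseudo-random color:

```python
import numpy as np

def ade_palette(num_classes=150, seed=0):
    # Stand-in palette: one reproducible pseudo-random RGB color per class
    rng = np.random.default_rng(seed)
    return rng.integers(0, 256, size=(num_classes, 3), dtype=np.uint8)
```

Then you can combine and plot your image and the predicted segmentation map: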
import matplotlib.pyplot as plt
import numpy as np
color_seg = np.zeros((pred_seg.shape[0], pred_seg.shape[1], 3), dtype=np.uint8)
palette = np.array(ade_palette())
for label, color in enumerate(palette):
    color_seg[pred_seg == label, :] = color
color_seg = color_seg[..., ::-1]  # convert to BGR

img = np.array(image) * 0.5 + color_seg * 0.5  # plot the image with the segmentation map
img = img.astype(np.uint8)
plt.figure(figsize=(15, 10))
plt.imshow(img)
plt.show()
Before you begin, make sure you have all the necessary libraries installed:
pip install transformers datasets evaluate
We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
from huggingface_hub import notebook_login
notebook_login()
Load SWAG dataset
Start by loading the regular configuration of the SWAG dataset from the 🤗 Datasets library:
from datasets import load_dataset
swag = load_dataset("swag", "regular")
Then take a look at an example:
swag["train"][0]
{'ending0': 'passes by walking down the street playing their instruments.',
'ending1': 'has heard approaching them.',
'ending2': "arrives and they're outside dancing and asleep.",
'ending3': 'turns the lead singer watches the performance.',
'fold-ind': '3416',
'gold-source': 'gold',
'label': 0,
'sent1': 'Members of the procession walk down the street holding small horn brass instruments.',
'sent2': 'A drum line',
'startphrase': 'Members of the procession walk down the street holding small horn brass instruments. A drum line',
'video-id': 'anetv_jkn6uvmqwh4'}
While it looks like there are a lot of fields here, it is actually pretty straightforward:
sent1 and sent2: these fields show how a sentence starts, and if you put the two together, you get the startphrase field.
ending0, ending1, ending2, ending3: each suggests a possible ending for the sentence, but only one of them is correct.
label: identifies the correct sentence ending.
Preprocess
The next step is to load a BERT tokenizer to process the sentence starts and the four possible endings:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
The preprocessing function you want to create needs to:
Make four copies of the sent1 field and combine each of them with sent2 to recreate how a sentence starts.
Combine sent2 with each of the four possible sentence endings.
Flatten these two lists so you can tokenize them, and then unflatten them afterward so each example has a corresponding input_ids, attention_mask, and labels field.
ending_names = ["ending0", "ending1", "ending2", "ending3"]
def preprocess_function(examples):
    first_sentences = [[context] * 4 for context in examples["sent1"]]
    question_headers = examples["sent2"]
    second_sentences = [
        [f"{header} {examples[end][i]}" for end in ending_names] for i, header in enumerate(question_headers)
    ]

    first_sentences = sum(first_sentences, [])
    second_sentences = sum(second_sentences, [])

    tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True)
    return {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized_examples.items()}
To apply the preprocessing function over the entire dataset, use 🤗 Datasets [~datasets.Dataset.map] method. You can speed up the map function by setting batched=True to process multiple elements of the dataset at once:
tokenized_swag = swag.map(preprocess_function, batched=True)
🤗 Transformers doesn't have a data collator for multiple choice, so you'll need to adapt the [DataCollatorWithPadding] to create a batch of examples. It's more efficient to dynamically pad the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.
DataCollatorForMultipleChoice flattens all the model inputs, applies padding, and then unflattens the results:
from dataclasses import dataclass
from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
from typing import Optional, Union
import torch
@dataclass
class DataCollatorForMultipleChoice:
    """
    Data collator that will dynamically pad the inputs for multiple choice received.
    """

    tokenizer: PreTrainedTokenizerBase
    padding: Union[bool, str, PaddingStrategy] = True
    max_length: Optional[int] = None
    pad_to_multiple_of: Optional[int] = None

    def __call__(self, features):
        label_name = "label" if "label" in features[0].keys() else "labels"
        labels = [feature.pop(label_name) for feature in features]
        batch_size = len(features)
        num_choices = len(features[0]["input_ids"])
        flattened_features = [
            [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
        ]
        flattened_features = sum(flattened_features, [])

        batch = self.tokenizer.pad(
            flattened_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors="pt",
        )

        batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()}
        batch["labels"] = torch.tensor(labels, dtype=torch.int64)
        return batch
And the equivalent collator for TensorFlow:
from dataclasses import dataclass
from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
from typing import Optional, Union
import tensorflow as tf
@dataclass
class DataCollatorForMultipleChoice:
    """
    Data collator that will dynamically pad the inputs for multiple choice received.
    """

    tokenizer: PreTrainedTokenizerBase
    padding: Union[bool, str, PaddingStrategy] = True
    max_length: Optional[int] = None
    pad_to_multiple_of: Optional[int] = None

    def __call__(self, features):
        label_name = "label" if "label" in features[0].keys() else "labels"
        labels = [feature.pop(label_name) for feature in features]
        batch_size = len(features)
        num_choices = len(features[0]["input_ids"])
        flattened_features = [
            [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
        ]
        flattened_features = sum(flattened_features, [])

        batch = self.tokenizer.pad(
            flattened_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors="tf",
        )

        batch = {k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()}
        batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64)
        return batch
Evaluate
Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 Evaluate library. For this task, load the accuracy metric (see the 🤗 Evaluate quick tour to learn more about how to load and compute a metric):
import evaluate
accuracy = evaluate.load("accuracy")
Then create a function that passes your predictions and labels to [~evaluate.EvaluationModule.compute] to calculate the accuracy:
import numpy as np
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return accuracy.compute(predictions=predictions, references=labels)
Your compute_metrics function is ready to go now, and you'll return to it when you set up your training.
Train
If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here!
You're ready to start training your model now! Load BERT with [AutoModelForMultipleChoice]:
from transformers import AutoModelForMultipleChoice, TrainingArguments, Trainer
model = AutoModelForMultipleChoice.from_pretrained("bert-base-uncased")
At this point, only three steps remain:
1. Define your training hyperparameters in [TrainingArguments]. The only required parameter is output_dir which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer] will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [Trainer] along with the model, dataset, tokenizer, data collator, and compute_metrics function.
3. Call [~Trainer.train] to finetune your model.
training_args = TrainingArguments(
output_dir="my_awesome_swag_model",
evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
learning_rate=5e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=3,
weight_decay=0.01,
push_to_hub=True,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_swag["train"],
eval_dataset=tokenized_swag["validation"],
tokenizer=tokenizer,
data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
compute_metrics=compute_metrics,
)
trainer.train()
Once training is completed, share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:
trainer.push_to_hub()
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial here!
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
from transformers import create_optimizer
batch_size = 16
num_train_epochs = 2
total_train_steps = (len(tokenized_swag["train"]) // batch_size) * num_train_epochs
optimizer, schedule = create_optimizer(init_lr=5e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
Then you can load BERT with [TFAutoModelForMultipleChoice]:
from transformers import TFAutoModelForMultipleChoice
model = TFAutoModelForMultipleChoice.from_pretrained("bert-base-uncased")
Convert your datasets to the tf.data.Dataset format with [~transformers.TFPreTrainedModel.prepare_tf_dataset]:
data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)
tf_train_set = model.prepare_tf_dataset(
tokenized_swag["train"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator,
)
tf_validation_set = model.prepare_tf_dataset(
tokenized_swag["validation"],
shuffle=False,
batch_size=batch_size,
collate_fn=data_collator,
)
Configure the model for training with compile. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:
model.compile(optimizer=optimizer) # No loss argument!
The last two things to set up before you start training are computing the accuracy from the predictions and providing a way to push your model to the Hub. Both are done with Keras callbacks.
Pass your compute_metrics function to [~transformers.KerasMetricCallback]:
from transformers.keras_callbacks import KerasMetricCallback
metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
Specify where to push your model and tokenizer in the [~transformers.PushToHubCallback]:
from transformers.keras_callbacks import PushToHubCallback
push_to_hub_callback = PushToHubCallback(
output_dir="my_awesome_model",
tokenizer=tokenizer,
)
Then bundle your callbacks together:
callbacks = [metric_callback, push_to_hub_callback]
Finally, you're ready to start training your model! Call fit with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:
model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=2, callbacks=callbacks)
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
For a more in-depth example of how to finetune a model for multiple choice, take a look at the corresponding
PyTorch notebook
or TensorFlow notebook.
Inference
Great, now that you've finetuned a model, you can use it for inference!
Come up with some text and two candidate answers:
prompt = "France has a bread law, Le Décret Pain, with strict rules on what is allowed in a traditional baguette."
candidate1 = "The law does not apply to croissants and brioche."
candidate2 = "The law applies to baguettes."
Tokenize each prompt and candidate answer pair and return PyTorch tensors. You should also create some labels:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="pt", padding=True)
labels = torch.tensor(0).unsqueeze(0)
Pass your inputs and labels to the model and return the logits:
from transformers import AutoModelForMultipleChoice
model = AutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
outputs = model(**{k: v.unsqueeze(0) for k, v in inputs.items()}, labels=labels)
logits = outputs.logits
Get the class with the highest probability:
predicted_class = logits.argmax().item()
predicted_class
0
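To map that index back to text, you can look it up in the list of candidates you started from, for example:

```python
candidates = [candidate1, candidate2]
print(candidates[predicted_class])  # "The law does not apply to croissants and brioche."
```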
Tokenize each prompt and candidate answer pair and return TensorFlow tensors:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="tf", padding=True)
Pass your inputs to the model and return the logits:
from transformers import TFAutoModelForMultipleChoice
model = TFAutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
inputs = {k: tf.expand_dims(v, 0) for k, v in inputs.items()}
outputs = model(inputs)
logits = outputs.logits
Get the class with the highest probability:
predicted_class = int(tf.math.argmax(logits, axis=-1)[0])
predicted_class
0
Utilities for pipelines
This page lists all the utility functions the library provides for pipelines.
Most of those are only useful if you are studying the code of the models in the library.
Argument handling
[[autodoc]] pipelines.ArgumentHandler
[[autodoc]] pipelines.ZeroShotClassificationArgumentHandler
[[autodoc]] pipelines.QuestionAnsweringArgumentHandler
Data format
[[autodoc]] pipelines.PipelineDataFormat
[[autodoc]] pipelines.CsvPipelineDataFormat
[[autodoc]] pipelines.JsonPipelineDataFormat
[[autodoc]] pipelines.PipedPipelineDataFormat
Utilities
[[autodoc]] pipelines.PipelineException
Train with a script
Along with the 🤗 Transformers notebooks, there are also example scripts demonstrating how to train a model for a task with PyTorch, TensorFlow, or JAX/Flax.
You will also find scripts we've used in our research projects and legacy examples which are mostly community contributed. These scripts are not actively maintained and require a specific version of 🤗 Transformers that will most likely be incompatible with the latest version of the library.
The example scripts are not expected to work out-of-the-box on every problem, and you may need to adapt the script to the problem you're trying to solve. To help you with this, most of the scripts fully expose how data is preprocessed, allowing you to edit it as necessary for your use case.
For any feature you'd like to implement in an example script, please discuss it on the forum or in an issue before submitting a Pull Request. While we welcome bug fixes, it is unlikely we will merge a Pull Request that adds more functionality at the cost of readability.
This guide will show you how to run an example summarization training script in PyTorch and TensorFlow. All examples are expected to work with both frameworks unless otherwise specified.
Setup
To successfully run the latest version of the example scripts, you have to install 🤗 Transformers from source in a new virtual environment:
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
For older versions of the example scripts, click on the toggle below:
Examples for older versions of 🤗 Transformers
v4.5.1
v4.4.2
v4.3.3
v4.2.2
v4.1.1
v4.0.1
v3.5.1
v3.4.0
v3.3.1
v3.2.0
v3.1.0
v3.0.2
v2.11.0
v2.10.0
v2.9.1
v2.8.0
v2.7.0
v2.6.0
v2.5.1
v2.4.0
v2.3.0
v2.2.0
v2.1.1
v2.0.0
v1.2.0
v1.1.0
v1.0.0
Then switch your current clone of 🤗 Transformers to a specific version, like v3.5.1 for example:
git checkout tags/v3.5.1
After you've setup the correct library version, navigate to the example folder of your choice and install the example specific requirements:
pip install -r requirements.txt
Run a script
The example script downloads and preprocesses a dataset from the 🤗 Datasets library. Then the script fine-tunes a model on the dataset with the Trainer, using an architecture that supports summarization. The following example shows how to fine-tune T5-small on the CNN/DailyMail dataset. The T5 model requires an additional source_prefix argument due to how it was trained. This prompt lets T5 know this is a summarization task.
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
The example script downloads and preprocesses a dataset from the 🤗 Datasets library. Then the script fine-tunes a model on the dataset using Keras, with an architecture that supports summarization. The following example shows how to fine-tune T5-small on the CNN/DailyMail dataset. The T5 model requires an additional source_prefix argument due to how it was trained. This prompt lets T5 know this is a summarization task.
python examples/tensorflow/summarization/run_summarization.py \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 16 \
--num_train_epochs 3 \
--do_train \
--do_eval
Distributed training and mixed precision
The Trainer supports distributed training and mixed precision, which means you can also use it in a script. To enable both of these features:
Add the fp16 argument to enable mixed precision.
Set the number of GPUs to use with the nproc_per_node argument.
python -m torch.distributed.launch \
--nproc_per_node 8 pytorch/summarization/run_summarization.py \
--fp16 \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
TensorFlow scripts utilize a MirroredStrategy for distributed training, and you don't need to add any additional arguments to the training script. The TensorFlow script will use multiple GPUs by default if they are available.
Run a script on a TPU
Tensor Processing Units (TPUs) are specifically designed to accelerate performance. PyTorch supports TPUs with the XLA deep learning compiler (see here for more details). To use a TPU, launch the xla_spawn.py script and use the num_cores argument to set the number of TPU cores you want to use.
python xla_spawn.py --num_cores 8 \
summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
Tensor Processing Units (TPUs) are specifically designed to accelerate performance. TensorFlow scripts utilize a TPUStrategy for training on TPUs. To use a TPU, pass the name of the TPU resource to the tpu argument.
python run_summarization.py \
--tpu name_of_tpu_resource \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 16 \
--num_train_epochs 3 \
--do_train \
--do_eval
Run a script with 🤗 Accelerate
🤗 Accelerate is a PyTorch-only library that offers a unified method for training a model on several types of setups (CPU-only, multiple GPUs, TPUs) while maintaining complete visibility into the PyTorch training loop. Make sure you have 🤗 Accelerate installed if you don't already have it:
Note: As Accelerate is rapidly developing, the git version of accelerate must be installed to run the scripts
pip install git+https://github.com/huggingface/accelerate
Instead of the run_summarization.py script, you need to use the run_summarization_no_trainer.py script. 🤗 Accelerate supported scripts will have a task_no_trainer.py file in the folder. Begin by running the following command to create and save a configuration file:
accelerate config
Test your setup to make sure it is configured correctly:
accelerate test
Now you are ready to launch the training:
accelerate launch run_summarization_no_trainer.py \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir ~/tmp/tst-summarization
Use a custom dataset
The summarization script supports custom datasets as long as they are a CSV or JSON Line file. When you use your own dataset, you need to specify several additional arguments:
train_file and validation_file specify the path to your training and validation files.
text_column is the input text to summarize.
summary_column is the target text to output.
A summarization script using a custom dataset would look like this:
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--train_file path_to_csv_or_jsonlines_file \
--validation_file path_to_csv_or_jsonlines_file \
--text_column text_column_name \
--summary_column summary_column_name \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--overwrite_output_dir \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--predict_with_generate
Test a script
It is often a good idea to run your script on a smaller number of dataset examples to ensure everything works as expected before committing to an entire dataset which may take hours to complete. Use the following arguments to truncate the dataset to a maximum number of samples:
max_train_samples
max_eval_samples
max_predict_samples
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--max_train_samples 50 \
--max_eval_samples 50 \
--max_predict_samples 50 \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
Not all example scripts support the max_predict_samples argument. If you aren't sure whether your script supports this argument, add the -h argument to check:
examples/pytorch/summarization/run_summarization.py -h
Resume training from checkpoint
Another helpful option to enable is resuming training from a previous checkpoint. This will ensure you can pick up where you left off without starting over if your training gets interrupted. There are two methods to resume training from a checkpoint.
The first method uses the output_dir previous_output_dir argument to resume training from the latest checkpoint stored in output_dir. In this case, you should remove overwrite_output_dir:
python examples/pytorch/summarization/run_summarization.py
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--output_dir previous_output_dir \
--predict_with_generate
The second method uses the resume_from_checkpoint path_to_specific_checkpoint argument to resume training from a specific checkpoint folder.
python examples/pytorch/summarization/run_summarization.py
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--resume_from_checkpoint path_to_specific_checkpoint \
--predict_with_generate
Share your model
All scripts can upload your final model to the Model Hub. Make sure you are logged into Hugging Face before you begin:
huggingface-cli login
Then add the push_to_hub argument to the script. This argument will create a repository with your Hugging Face username and the folder name specified in output_dir.
To give your repository a specific name, use the push_to_hub_model_id argument to add it. The repository will be automatically listed under your namespace.
The following example shows how to upload a model with a specific repository name:
python examples/pytorch/summarization/run_summarization.py
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--push_to_hub \
--push_to_hub_model_id finetuned-t5-cnn_dailymail \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
    --predict_with_generate
Padding and truncation
Batched inputs are often different lengths, so they can't be converted to fixed-size tensors. Padding and truncation are strategies for dealing with this problem, to create rectangular tensors from batches of varying lengths. Padding adds a special padding token to ensure shorter sequences will have the same length as either the longest sequence in a batch or the maximum length accepted by the model. Truncation works in the other direction by truncating long sequences.
In most cases, padding your batch to the length of the longest sequence and truncating to the maximum length a model can accept works pretty well. However, the API supports more strategies if you need them. The three arguments you need to know are: padding, truncation and max_length.
The padding argument controls padding. It can be a boolean or a string:
- True or 'longest': pad to the longest sequence in the batch (no padding is applied if you only provide a single sequence).
- 'max_length': pad to a length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None). Padding will still be applied if you only provide a single sequence.
- False or 'do_not_pad': no padding is applied. This is the default behavior.

The truncation argument controls truncation. It can be a boolean or a string:

- True or 'longest_first': truncate to a maximum length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None). This will truncate token by token, removing a token from the longest sequence in the pair until the proper length is reached.
- 'only_second': truncate to a maximum length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None). This will only truncate the second sentence of a pair if a pair of sequences (or a batch of pairs of sequences) is provided.
- 'only_first': truncate to a maximum length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None). This will only truncate the first sentence of a pair if a pair of sequences (or a batch of pairs of sequences) is provided.
- False or 'do_not_truncate': no truncation is applied. This is the default behavior.
The max_length argument controls the length of the padding and truncation. It can be an integer or None, in which case it will default to the maximum length the model can accept. If the model has no specific maximum input length, truncation or padding to max_length is deactivated.
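For instance, here is a minimal sketch of the most common setup (pad to the longest sequence in the batch and truncate anything longer than the model's limit), using a BERT tokenizer purely as an example checkpoint:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch_sentences = [
    "But what about second breakfast?",
    "Don't think he knows about second breakfast, Pip.",
]

# Pad to the longest sequence in the batch and truncate to the model's maximum length
encoded = tokenizer(batch_sentences, padding=True, truncation=True)
print([len(ids) for ids in encoded["input_ids"]])  # all sequences now share the same length
```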
The following table summarizes the recommended way to setup padding and truncation. If you use pairs of input sequences in any of the following examples, you can replace truncation=True by a STRATEGY selected in
['only_first', 'only_second', 'longest_first'], i.e. truncation='only_second' or truncation='longest_first' to control how both sequences in the pair are truncated as detailed before.
| Truncation | Padding | Instruction |
|--------------------------------------|-----------------------------------|---------------------------------------------------------------------------------------------|
| no truncation | no padding | tokenizer(batch_sentences) |
| | padding to max sequence in batch | tokenizer(batch_sentences, padding=True) or |
| | | tokenizer(batch_sentences, padding='longest') |
| | padding to max model input length | tokenizer(batch_sentences, padding='max_length') |
| | padding to specific length | tokenizer(batch_sentences, padding='max_length', max_length=42) |
| | padding to a multiple of a value | tokenizer(batch_sentences, padding=True, pad_to_multiple_of=8) |
| truncation to max model input length | no padding                        | tokenizer(batch_sentences, truncation=True) or                                               |
|                                      |                                   | tokenizer(batch_sentences, truncation=STRATEGY)                                              |
|                                      | padding to max sequence in batch  | tokenizer(batch_sentences, padding=True, truncation=True) or                                 |
|                                      |                                   | tokenizer(batch_sentences, padding=True, truncation=STRATEGY)                                |
|                                      | padding to max model input length | tokenizer(batch_sentences, padding='max_length', truncation=True) or                         |
|                                      |                                   | tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY)                        |
|                                      | padding to specific length        | Not possible                                                                                 |
| truncation to specific length        | no padding                        | tokenizer(batch_sentences, truncation=True, max_length=42) or                                |
|                                      |                                   | tokenizer(batch_sentences, truncation=STRATEGY, max_length=42)                               |
|                                      | padding to max sequence in batch  | tokenizer(batch_sentences, padding=True, truncation=True, max_length=42) or                  |
|                                      |                                   | tokenizer(batch_sentences, padding=True, truncation=STRATEGY, max_length=42)                 |
|                                      | padding to max model input length | Not possible                                                                                 |
|                                      | padding to specific length        | tokenizer(batch_sentences, padding='max_length', truncation=True, max_length=42) or          |
|                                      |                                   | tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY, max_length=42)         |
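For example, here is a minimal sketch (assuming a BERT-style checkpoint such as bert-base-cased, chosen only for illustration) that combines padding to the longest sequence in the batch with the 'only_second' truncation strategy on question/context pairs:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

questions = ["What is padding?", "What is truncation?"]
contexts = [
    "Padding adds a special token so every sequence in a batch has the same length.",
    "Truncation removes tokens so a sequence fits within the model's maximum input length.",
]

# Pad to the longest sequence in the batch and, if needed, truncate only the
# second sequence of each pair down to max_length.
batch = tokenizer(
    questions,
    contexts,
    padding=True,
    truncation="only_second",
    max_length=32,
    return_tensors="pt",
)
print(batch["input_ids"].shape)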
Preprocess
[[open-in-colab]]
Before you can train a model on a dataset, it needs to be preprocessed into the expected model input format. Whether your data is text, images, or audio, it needs to be converted and assembled into batches of tensors. 🤗 Transformers provides a set of preprocessing classes to help prepare your data for the model. In this tutorial, you'll learn that for:
Text, use a Tokenizer to convert text into a sequence of tokens, create a numerical representation of the tokens, and assemble them into tensors.
Speech and audio, use a Feature extractor to extract sequential features from audio waveforms and convert them into tensors.
Image inputs, use an ImageProcessor to convert images into tensors.
Multimodal inputs, use a Processor to combine a tokenizer and a feature extractor or image processor.
AutoProcessor always works and automatically chooses the correct class for the model you're using, whether you're using a tokenizer, image processor, feature extractor or processor.
Before you begin, install 🤗 Datasets so you can load some datasets to experiment with:
pip install datasets
Natural Language Processing
The main tool for preprocessing textual data is a tokenizer. A tokenizer splits text into tokens according to a set of rules. The tokens are converted into numbers and then tensors, which become the model inputs. Any additional inputs required by the model are added by the tokenizer.
If you plan on using a pretrained model, it's important to use the associated pretrained tokenizer. This ensures the text is split the same way as the pretraining corpus, and uses the same corresponding tokens-to-index (usually referred to as the vocab) during pretraining.
Get started by loading a pretrained tokenizer with the [AutoTokenizer.from_pretrained] method. This downloads the vocab a model was pretrained with:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
Then pass your text to the tokenizer:
encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.")
print(encoded_input)
{'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
The tokenizer returns a dictionary with three important items:
input_ids are the indices corresponding to each token in the sentence.
attention_mask indicates whether a token should be attended to or not.
token_type_ids identifies which sequence a token belongs to when there is more than one sequence.
Return your input by decoding the input_ids:
tokenizer.decode(encoded_input["input_ids"])
'[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger. [SEP]'
As you can see, the tokenizer added two special tokens - CLS and SEP (classifier and separator) - to the sentence. Not all models need
special tokens, but if they do, the tokenizer automatically adds them for you.
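If you want to inspect exactly which tokens those ids map to, including the added special tokens, you can convert them back with convert_ids_to_tokens (a minimal sketch reusing encoded_input from above):
tokens = tokenizer.convert_ids_to_tokens(encoded_input["input_ids"])
print(tokens[0], tokens[-1])
# The first and last entries are the special tokens the tokenizer added: '[CLS]' and '[SEP]'.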
If there are several sentences you want to preprocess, pass them as a list to the tokenizer:
batch_sentences = [
"But what about second breakfast?",
"Don't think he knows about second breakfast, Pip.",
"What about elevensies?",
]
encoded_inputs = tokenizer(batch_sentences)
print(encoded_inputs)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1]]}
Pad
Sentences aren't always the same length which can be an issue because tensors, the model inputs, need to have a uniform shape. Padding is a strategy for ensuring tensors are rectangular by adding a special padding token to shorter sentences.
Set the padding parameter to True to pad the shorter sequences in the batch to match the longest sequence:
batch_sentences = [
"But what about second breakfast?",
"Don't think he knows about second breakfast, Pip.",
"What about elevensies?",
]
encoded_input = tokenizer(batch_sentences, padding=True)
print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
The first and third sentences are now padded with 0's because they are shorter.
Truncation
On the other end of the spectrum, sometimes a sequence may be too long for a model to handle. In this case, you'll need to truncate the sequence to a shorter length.
Set the truncation parameter to True to truncate a sequence to the maximum length accepted by the model:
batch_sentences = [
"But what about second breakfast?",
"Don't think he knows about second breakfast, Pip.",
"What about elevensies?",
]
encoded_input = tokenizer(batch_sentences, padding=True, truncation=True)
print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
Check out the Padding and truncation concept guide to learn more about the different padding and truncation arguments.
Build tensors
Finally, you want the tokenizer to return the actual tensors that get fed to the model.
Set the return_tensors parameter to either pt for PyTorch, or tf for TensorFlow:
batch_sentences = [
"But what about second breakfast?",
"Don't think he knows about second breakfast, Pip.",
"What about elevensies?",
]
encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
print(encoded_input)
{'input_ids': tensor([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
[101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]]),
'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]),
'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]])}
And for TensorFlow:
batch_sentences = [
"But what about second breakfast?",
"Don't think he knows about second breakfast, Pip.",
"What about elevensies?",
]
encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf")
print(encoded_input)
{'input_ids': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[  101,  1252,  1184,  1164,  1248,  6462,   136,   102,     0,     0,     0,     0,     0,     0,     0],
       [  101,  1790,   112,   189,  1341,  1119,  3520,  1164,  1248,  6462,   117, 21902,  1643,   119,   102],
       [  101,  1327,  1164,  5450, 23434,   136,   102,     0,     0,     0,     0,     0,     0,     0,     0]], dtype=int32)>,
 'token_type_ids': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>,
 'attention_mask': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>}
Audio
For audio tasks, you'll need a feature extractor to prepare your dataset for the model. The feature extractor is designed to extract features from raw audio data, and convert them into tensors.
Load the MInDS-14 dataset (see the 🤗 Datasets tutorial for more details on how to load a dataset) to see how you can use a feature extractor with audio datasets:
from datasets import load_dataset, Audio
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
Access the first element of the audio column to take a look at the input. Calling the audio column automatically loads and resamples the audio file:
dataset[0]["audio"]
{'array': array([ 0.        ,  0.00024414, -0.00024414, ..., -0.00024414,
0. , 0. ], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'sampling_rate': 8000}
This returns three items:
array is the speech signal loaded - and potentially resampled - as a 1D array.
path points to the location of the audio file.
sampling_rate refers to how many data points in the speech signal are measured per second.
For this tutorial, you'll use the Wav2Vec2 model. Take a look at the model card, and you'll learn Wav2Vec2 is pretrained on 16kHz sampled speech audio. It is important your audio data's sampling rate matches the sampling rate of the dataset used to pretrain the model. If your data's sampling rate isn't the same, then you need to resample your data.
Use 🤗 Datasets' [~datasets.Dataset.cast_column] method to upsample the sampling rate to 16kHz:
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
Call the audio column again to resample the audio file:
dataset[0]["audio"]
{'array': array([ 2.3443763e-05,  2.1729663e-04,  2.2145823e-04, ...,
3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'sampling_rate': 16000}
Next, load a feature extractor to normalize and pad the input. When padding textual data, a 0 is added for shorter sequences. The same idea applies to audio data. The feature extractor adds a 0 - interpreted as silence - to array.
Load the feature extractor with [AutoFeatureExtractor.from_pretrained]:
from transformers import AutoFeatureExtractor
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
Pass the audio array to the feature extractor. We also recommend passing the sampling_rate argument to the feature extractor in order to better debug any silent errors that may occur.
audio_input = [dataset[0]["audio"]["array"]]
feature_extractor(audio_input, sampling_rate=16000)
{'input_values': [array([ 3.8106556e-04,  2.7506407e-03,  2.8015103e-03, ...,
5.6335266e-04, 4.6588284e-06, -1.7142107e-04], dtype=float32)]}
Just like the tokenizer, you can apply padding or truncation to handle variable sequences in a batch. Take a look at the sequence length of these two audio samples:
dataset[0]["audio"]["array"].shape
(173398,)
dataset[1]["audio"]["array"].shape
(106496,)
Create a function to preprocess the dataset so the audio samples are the same lengths. Specify a maximum sample length, and the feature extractor will either pad or truncate the sequences to match it:
def preprocess_function(examples):
audio_arrays = [x["array"] for x in examples["audio"]]
inputs = feature_extractor(
audio_arrays,
sampling_rate=16000,
padding=True,
max_length=100000,
truncation=True,
)
return inputs
Apply the preprocess_function to the first few examples in the dataset:
processed_dataset = preprocess_function(dataset[:5])
The sample lengths are now the same and match the specified maximum length. You can pass your processed dataset to the model now!
processed_dataset["input_values"][0].shape
(100000,)
processed_dataset["input_values"][1].shape
(100000,)
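To run the same preprocessing over the entire dataset instead of a slice, you could apply the function with 🤗 Datasets' map (a minimal sketch; batched=True passes the function batches of examples at a time):
# Apply the preprocessing function to every example in the dataset.
processed_dataset = dataset.map(preprocess_function, batched=True)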
Computer vision
For computer vision tasks, you'll need an image processor to prepare your dataset for the model.
Image preprocessing consists of several steps that convert images into the input expected by the model. These steps
include but are not limited to resizing, normalizing, color channel correction, and converting images to tensors.
Image preprocessing often follows some form of image augmentation. Both image preprocessing and image augmentation
transform image data, but they serve different purposes:
Image augmentation alters images in a way that can help prevent overfitting and increase the robustness of the model. You can get creative in how you augment your data - adjust brightness and colors, crop, rotate, resize, zoom, etc. However, be mindful not to change the meaning of the images with your augmentations.
Image preprocessing guarantees that the images match the model’s expected input format. When fine-tuning a computer vision model, images must be preprocessed exactly as when the model was initially trained.
You can use any library you like for image augmentation. For image preprocessing, use the ImageProcessor associated with the model.
Load the food101 dataset (see the 🤗 Datasets tutorial for more details on how to load a dataset) to see how you can use an image processor with computer vision datasets:
Use 🤗 Datasets split parameter to only load a small sample from the training split since the dataset is quite large!
from datasets import load_dataset
dataset = load_dataset("food101", split="train[:100]")
Next, take a look at the image with 🤗 Datasets Image feature:
dataset[0]["image"]
Load the image processor with [AutoImageProcessor.from_pretrained]:
from transformers import AutoImageProcessor
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
First, let's add some image augmentation. You can use any library you prefer, but in this tutorial, we'll use torchvision's transforms module. If you're interested in using another data augmentation library, learn how in the Albumentations or Kornia notebooks.
Here we use Compose to chain together a couple of
transforms - RandomResizedCrop and ColorJitter.
Note that for resizing, we can get the image size requirements from the image_processor. For some models, an exact height and
width are expected, for others only the shortest_edge is defined.
from torchvision.transforms import RandomResizedCrop, ColorJitter, Compose
size = (
image_processor.size["shortest_edge"]
if "shortest_edge" in image_processor.size
else (image_processor.size["height"], image_processor.size["width"])
)
_transforms = Compose([RandomResizedCrop(size), ColorJitter(brightness=0.5, hue=0.5)])
The model accepts pixel_values
as its input. ImageProcessor can take care of normalizing the images, and generating appropriate tensors.
Create a function that combines image augmentation and image preprocessing for a batch of images and generates pixel_values:
def transforms(examples):
images = [_transforms(img.convert("RGB")) for img in examples["image"]]
examples["pixel_values"] = image_processor(images, do_resize=False, return_tensors="pt")["pixel_values"]
return examples
In the example above we set do_resize=False because we have already resized the images in the image augmentation transformation,
and leveraged the size attribute from the appropriate image_processor. If you do not resize images during image augmentation,
leave this parameter out. By default, ImageProcessor will handle the resizing.
If you wish to normalize images as a part of the augmentation transformation, use the image_processor.image_mean,
and image_processor.image_std values.
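As a rough sketch, assuming the image_processor and size defined above, normalization can be folded into the torchvision pipeline like this; if you go this route you would also skip the corresponding steps in the image processor (e.g. do_normalize=False, and do_rescale=False since ToTensor already scales pixel values to [0, 1]):
from torchvision.transforms import ColorJitter, Compose, Normalize, RandomResizedCrop, ToTensor

normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std)
_transforms = Compose(
    [
        RandomResizedCrop(size),
        ColorJitter(brightness=0.5, hue=0.5),
        ToTensor(),  # converts the PIL image to a float tensor in [0, 1]
        normalize,   # applies the mean/std statistics stored on the image processor
    ]
)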
Then use 🤗 Datasets set_transform to apply the transforms on the fly:
dataset.set_transform(transforms)
Now when you access the image, you'll notice the image processor has added pixel_values. You can pass your processed dataset to the model now!
dataset[0].keys()
Here is what the image looks like after the transforms are applied. The image has been randomly cropped and its color properties are different.
import numpy as np
import matplotlib.pyplot as plt
img = dataset[0]["pixel_values"]
plt.imshow(img.permute(1, 2, 0))
For tasks like object detection, semantic segmentation, instance segmentation, and panoptic segmentation, ImageProcessor
offers post processing methods. These methods convert model's raw outputs into meaningful predictions such as bounding boxes,
or segmentation maps.
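As a rough sketch of what this looks like for object detection (the checkpoint, the cats image, and the post_process_object_detection call are illustrative assumptions, not part of the tutorial above):
import torch
from datasets import load_dataset
from transformers import AutoImageProcessor, AutoModelForObjectDetection

# Checkpoint and image chosen purely for illustration.
image = load_dataset("huggingface/cats-image")["test"]["image"][0]
detr_processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
detr_model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50")

inputs = detr_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = detr_model(**inputs)

# Convert raw logits and boxes into per-image predictions in absolute
# (x_min, y_min, x_max, y_max) coordinates.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width) of the original image
results = detr_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
print(results["scores"], results["labels"], results["boxes"])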
Pad
In some cases, for instance, when fine-tuning DETR, the model applies scale augmentation at training
time. This may cause images to be different sizes in a batch. You can use [DetrImageProcessor.pad]
from [DetrImageProcessor] and define a custom collate_fn to batch images together.
def collate_fn(batch):
pixel_values = [item["pixel_values"] for item in batch]
encoding = image_processor.pad(pixel_values, return_tensors="pt")
labels = [item["labels"] for item in batch]
batch = {}
batch["pixel_values"] = encoding["pixel_values"]
batch["pixel_mask"] = encoding["pixel_mask"]
batch["labels"] = labels
return batch
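The collate_fn then plugs directly into a standard PyTorch DataLoader (a minimal sketch, assuming your dataset yields dicts with pixel_values and labels keys, as in a typical DETR fine-tuning setup; batch_size is an arbitrary choice):
from torch.utils.data import DataLoader

dataloader = DataLoader(dataset, batch_size=4, shuffle=True, collate_fn=collate_fn)
batch = next(iter(dataloader))
print(batch["pixel_values"].shape, batch["pixel_mask"].shape)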
Multimodal
For tasks involving multimodal inputs, you'll need a processor to prepare your dataset for the model. A processor couples together two processing objects such as a tokenizer and a feature extractor.
Load the LJ Speech dataset (see the 🤗 Datasets tutorial for more details on how to load a dataset) to see how you can use a processor for automatic speech recognition (ASR):
from datasets import load_dataset
lj_speech = load_dataset("lj_speech", split="train")
For ASR, you're mainly focused on audio and text so you can remove the other columns:
lj_speech = lj_speech.map(remove_columns=["file", "id", "normalized_text"])
Now take a look at the audio and text columns:
lj_speech[0]["audio"]
{'array': array([-7.3242188e-04, -7.6293945e-04, -6.4086914e-04, ...,
7.3242188e-04, 2.1362305e-04, 6.1035156e-05], dtype=float32),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/917ece08c95cf0c4115e45294e3cd0dee724a1165b7fc11798369308a465bd26/LJSpeech-1.1/wavs/LJ001-0001.wav',
'sampling_rate': 22050}
lj_speech[0]["text"]
'Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition'
Remember you should always resample your audio dataset's sampling rate to match the sampling rate of the dataset used to pretrain a model!
lj_speech = lj_speech.cast_column("audio", Audio(sampling_rate=16_000))
Load a processor with [AutoProcessor.from_pretrained]:
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
Create a function to process the audio data contained in array to input_values, and tokenize text to labels. These are the inputs to the model:
def prepare_dataset(example):
audio = example["audio"]
example.update(processor(audio=audio["array"], text=example["text"], sampling_rate=16000))
return example
Apply the prepare_dataset function to a sample:
prepare_dataset(lj_speech[0])
The processor has now added input_values and labels, and the sampling rate has also been correctly downsampled to 16kHz. You can pass your processed dataset to the model now!
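To preprocess the entire dataset instead of a single sample, the same function can be applied with 🤗 Datasets' map (a minimal sketch; the remove_columns names assume the LJ Speech schema shown above and simply drop the raw columns you no longer need):
lj_speech = lj_speech.map(prepare_dataset, remove_columns=["audio", "text"])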
Training on Specialized Hardware
Note: Most of the strategies introduced in the single GPU section (such as mixed precision training or gradient accumulation) and the multi-GPU section are generic and apply to training models in general, so make sure to have a look at those sections before diving into this one.
This document will be completed soon with information on how to train on specialized hardware.
Feature Extractor
A feature extractor is in charge of preparing input features for audio or vision models. This includes feature extraction
from sequences, e.g., pre-processing audio files into Log-Mel spectrogram features, and feature extraction from images,
e.g., cropping image files, but also padding, normalization, and conversion to NumPy, PyTorch, and TensorFlow
tensors.
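In practice these classes are usually loaded and saved through from_pretrained and save_pretrained; a minimal sketch (the checkpoint name and local path are only examples):
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
feature_extractor.save_pretrained("./my-feature-extractor")

# Reload from the local directory that was just written.
feature_extractor = AutoFeatureExtractor.from_pretrained("./my-feature-extractor")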
FeatureExtractionMixin
[[autodoc]] feature_extraction_utils.FeatureExtractionMixin
- from_pretrained
- save_pretrained
SequenceFeatureExtractor
[[autodoc]] SequenceFeatureExtractor
- pad
BatchFeature
[[autodoc]] BatchFeature
ImageFeatureExtractionMixin
[[autodoc]] image_utils.ImageFeatureExtractionMixin
ConvNeXt V2
Overview
The ConvNeXt V2 model was proposed in ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
ConvNeXt V2 is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, and a successor of ConvNeXT.
The abstract from the paper is the following:
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
Tips:
See the code examples below each model regarding usage.
ConvNeXt V2 architecture. Taken from the original paper.
This model was contributed by adirik. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXt V2.
Image Classification
ConvNextV2ForImageClassification is supported by this example script and notebook.
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ConvNextV2Config
class transformers.ConvNextV2Config
(
num_channels = 3
patch_size = 4
num_stages = 4
hidden_sizes = None
depths = None
hidden_act = 'gelu'
initializer_range = 0.02
layer_norm_eps = 1e-12
drop_path_rate = 0.0
image_size = 224
out_features = None
out_indices = None
**kwargs
)
Parameters
num_channels (int, optional, defaults to 3) —
The number of input channels.
patch_size (int, optional, defaults to 4) —
Patch size to use in the patch embedding layer.
num_stages (int, optional, defaults to 4) —
The number of stages in the model.
hidden_sizes (List[int], optional, defaults to [96, 192, 384, 768]) —
Dimensionality (hidden size) at each stage.
depths (List[int], optional, defaults to [3, 3, 9, 3]) —
Depth (number of blocks) for each stage.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in each block. If string, "gelu", "relu",
"selu" and "gelu_new" are supported.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
drop_path_rate (float, optional, defaults to 0.0) —
The drop rate for stochastic depth.
out_features (List[str], optional) —
If used as backbone, list of features to output. Can be any of "stem", "stage1", "stage2", etc.
(depending on how many stages the model has). If unset and out_indices is set, will default to the
corresponding stages. If unset and out_indices is unset, will default to the last stage.
out_indices (List[int], optional) —
If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
many stages the model has). If unset and out_features is set, will default to the corresponding stages.
If unset and out_features is unset, will default to the last stage.
This is the configuration class to store the configuration of a ConvNextV2Model. It is used to instantiate an
ConvNeXTV2 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the ConvNeXTV2
facebook/convnextv2-tiny-1k-224 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import ConvNextV2Config, ConvNextV2Model
# Initializing a ConvNeXTV2 convnextv2-tiny-1k-224 style configuration
configuration = ConvNextV2Config()
# Initializing a model (with random weights) from the convnextv2-tiny-1k-224 style configuration
model = ConvNextV2Model(configuration)
# Accessing the model configuration
configuration = model.config
ConvNextV2Model
class transformers.ConvNextV2Model
(
config
)
Parameters
config (ConvNextV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ConvNextV2 model outputting raw features without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: FloatTensor = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using ConvNextImageProcessor. See
ConvNextImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ConvNextV2Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The ConvNextV2Model forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, ConvNextV2Model
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/convnextv2-tiny-1k-224")
model = ConvNextV2Model.from_pretrained("facebook/convnextv2-tiny-1k-224")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 768, 7, 7]
ConvNextV2ForImageClassification
class transformers.ConvNextV2ForImageClassification
(
config
)
Parameters
config (ConvNextV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ConvNextV2 Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: FloatTensor = None
labels: typing.Optional[torch.LongTensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using ConvNextImageProcessor. See
ConvNextImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ConvNextV2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also
called feature maps) of the model at the output of each stage.
The ConvNextV2ForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, ConvNextV2ForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/convnextv2-tiny-1k-224")
model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-tiny-1k-224")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tabby, tabby cat
GPT-NeoX-Japanese
Overview
We introduce GPT-NeoX-Japanese, which is an autoregressive language model for Japanese, trained on top of https://github.com/EleutherAI/gpt-neox.
Japanese is a unique language with its large vocabulary and a combination of hiragana, katakana, and kanji writing scripts.
To address this distinct structure of the Japanese language, we use a special sub-word tokenizer. We are very grateful to tanreinama for open-sourcing this incredibly helpful tokenizer.
Following the recommendations from Google's research on PaLM, we have removed bias parameters from the transformer blocks, achieving better model performance. Please refer to this article for details.
Development of the model was led by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori from ABEJA, Inc.. For more information on this model-building activity, please refer here (ja).
Generation
The generate() method can be used to generate text with the GPT-NeoX-Japanese model.
from transformers import GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseTokenizer
model = GPTNeoXJapaneseForCausalLM.from_pretrained("abeja/gpt-neox-japanese-2.7b")
tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b")
prompt = "人とAIが協調するためには、"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(
... input_ids,
... do_sample=True,
... temperature=0.9,
... max_length=100,
... )
gen_text = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)[0]
print(gen_text)
人とAIが協調するためには、AIと人が共存し、AIを正しく理解する必要があります。
Documentation resources
Causal language modeling task guide
GPTNeoXJapaneseConfig
class transformers.GPTNeoXJapaneseConfig
(
vocab_size = 32000
hidden_size = 2560
num_hidden_layers = 32
num_attention_heads = 32
intermediate_multiple_size = 4
hidden_act = 'gelu'
rotary_pct = 1.0
rotary_emb_base = 10000
max_position_embeddings = 2048
initializer_range = 0.02
layer_norm_eps = 1e-05
use_cache = True
bos_token_id = 31996
eos_token_id = 31999
attention_dropout = 0.1
hidden_dropout = 0.0
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 32000) —
Vocabulary size of the GPTNeoXJapanese model. Defines the number of different tokens that can be
represented by the inputs_ids passed when calling GPTNeoXJapanese.
hidden_size (int, optional, defaults to 2560) —
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 32) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 32) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_multiple_size (int, optional, defaults to 4) —
Dimension of the “intermediate” layer in the Transformer encoder is calculated by hidden_size *
intermediate_multiple_size.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler.
rotary_pct (float, optional, defaults to 1.00) —
Percentage of hidden dimensions to allocate to rotary embeddings.
rotary_emb_base (int, optional, defaults to 10000) —
Base for computing rotary embedding frequencies.
max_position_embeddings (int, optional, defaults to 2048) —
The maximum sequence length that this model might ever be used with.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
attention_dropout (float, optional, defaults to 0.1) —
The dropout ratio for the attention.
hidden_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the hidden layer.
This is the configuration class to store the configuration of a GPTNeoXJapaneseModel. It is used to instantiate
a GPTNeoX-Japanese model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the GPTNeoXJapanese
abeja/gpt-neox-japanese-2.7b architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information. The default configuration corresponds to the 2.7B model.
Example:
from transformers import GPTNeoXJapaneseConfig, GPTNeoXJapaneseModel
# Initializing a GPTNeoXJapanese gpt-neox-japanese-2.7b style configuration
configuration = GPTNeoXJapaneseConfig()
# Initializing a model (with random weights) from the gpt-neox-japanese-2.7b style configuration
model = GPTNeoXJapaneseModel(configuration)
# Accessing the model configuration
configuration = model.config
GPTNeoXJapaneseTokenizer
class transformers.GPTNeoXJapaneseTokenizer
(
vocab_file
emoji_file
unk_token = '<|endoftext|>'
pad_token = '<|endoftext|>'
bos_token = '<|startoftext|>'
eos_token = '<|endoftext|>'
do_clean_text = False
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
emoji_file (str) —
File containing the emoji.
unk_token (str, optional, defaults to "<|endoftext|>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<|endoftext|>") —
The token used for padding
bos_token (str, optional, defaults to "<|startoftext|>") —
The beginning of sequence token.
eos_token (str, optional, defaults to "<|endoftext|>") —
The end of sequence token.
do_clean_text (bool, optional, defaults to False) —
Whether or not to clean text for URL, EMAIL, TEL, Japanese DATE and Japanese PRICE.
This tokenizer inherits from PreTrainedTokenizer and is based on Japanese special Sub-Word-Encoding that is
used in this repository (https://github.com/tanreinama/Japanese-BPEEncoder_V2). Check the repository for details.
Japanese has a relatively large vocabulary and there is no separation between words. Furthermore, the language is a
combination of hiragana, katakana, and kanji, and variants such as “1” and “①” are often used. In order to cope
with these, this tokenizer has the following features:
Subword-by-subword segmentation, which is intermediate between byte strings and morphological analysis.
BPEs are created for each Kanji, Hiragana, and Katakana character, and there are no BPEs that cross character
types, such as Kanji + Hiragana or Hiragana + Katakana.
All-byte encoding that does not require <unk>.
Independent of UTF codes such as 2-byte and 3-byte characters
Conversion of heterographs to the same token_id
Emoji and Emoticon are grouped into 12 types as special tags.
Example:
from transformers import GPTNeoXJapaneseTokenizer
tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b")
# You can confirm both 慶応 and 慶應 are encoded to 17749
tokenizer("吾輩は猫である🐯。実は慶応(慶應)大学出身")["input_ids"]
[30014, 26883, 26638, 27228, 25, 26650, 31732, 31679, 27809, 26638, 17749, 31592, 17749, 31593, 321, 1281]
# Both 慶応 and 慶應 are decoded to 慶応
tokenizer.decode(tokenizer("吾輩は猫である🐯。実は慶応(慶應)大学出身")["input_ids"])
'吾輩は猫である🐯。実は慶応(慶応)大学出身'
convert_tokens_to_string
(
tokens
)
Converts a sequence of tokens (string) into a single string.
GPTNeoXJapaneseModel
class transformers.GPTNeoXJapaneseModel
(
config
)
Parameters
config (~GPTNeoXJapaneseConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare GPTNeoXJapanese Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTNeoXJapaneseConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GPTNeoXJapaneseModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, GPTNeoXJapaneseModel
import torch
tokenizer = AutoTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b")
model = GPTNeoXJapaneseModel.from_pretrained("abeja/gpt-neox-japanese-2.7b")
inputs = tokenizer("日本語のGPT-neoxがHugging Faceで使えます😀", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
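Reusing the model and inputs from the snippet above, you can also ask the forward pass for the per-layer hidden states documented earlier (a minimal sketch):
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

print(len(outputs.hidden_states))       # embedding output + one entry per layer
print(outputs.hidden_states[-1].shape)  # same shape as outputs.last_hidden_state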
GPTNeoXJapaneseForCausalLM
class transformers.GPTNeoXJapaneseForCausalLM
(
config
)
Parameters
config (~GPTNeoXJapaneseConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
GPTNeoXJapanese Model with a language modeling head on top, used for causal language modeling (autoregressive text generation).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional tensors are
only required when the model is used as a decoder in a Sequence to Sequence model.
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are
ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTNeoXJapaneseConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head))
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GPTNeoXJapaneseForCausalLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b")
config = GPTNeoXJapaneseConfig.from_pretrained("abeja/gpt-neox-japanese-2.7b")
config.is_decoder = True
model = GPTNeoXJapaneseForCausalLM.from_pretrained("abeja/gpt-neox-japanese-2.7b", config=config)
inputs = tokenizer("日本語のGPT-neoxがHugging Faceで使えます😀", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
CodeGen
Overview
The CodeGen model was proposed in A Conversational Paradigm for Program Synthesis by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong.
CodeGen is an autoregressive language model for program synthesis trained sequentially on The Pile, BigQuery, and BigPython.
The abstract from the paper is the following:
Program synthesis strives to generate a computer program as a solution to a given problem specification. We propose a conversational program synthesis approach via large language models, which addresses the challenges of searching over a vast program space and user intent specification faced in prior approaches. Our new approach casts the process of writing a specification and program as a multi-turn conversation between a user and a system. It treats program synthesis as a sequence prediction problem, in which the specification is expressed in natural language and the desired program is conditionally sampled. We train a family of large language models, called CodeGen, on natural language and programming language data. With weak supervision in the data and the scaling up of data size and model size, conversational capacities emerge from the simple autoregressive language modeling. To study the model behavior on conversational program synthesis, we develop a multi-turn programming benchmark (MTPB), where solving each problem requires multi-step synthesis via multi-turn conversation between the user and the model. Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm. In addition, our model CodeGen (with up to 16B parameters trained on TPU-v4) outperforms OpenAI’s Codex on the HumanEval benchmark. We make the training library JaxFormer including checkpoints available as open source contribution: this https URL.
This model was contributed by Hiroaki Hayashi.
The original code can be found here.
Checkpoint Naming
CodeGen model checkpoints are available on different pre-training data with variable sizes.
The format is: Salesforce/codegen-{size}-{data}, where
size: 350M, 2B, 6B, 16B
data:
nl: Pre-trained on the Pile
multi: Initialized with nl, then further pre-trained on multiple programming languages data
mono: Initialized with multi, then further pre-trained on Python data
For example, Salesforce/codegen-350M-mono offers a 350 million-parameter checkpoint pre-trained sequentially on the Pile, multiple programming languages, and Python.
How to use
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "Salesforce/codegen-350M-mono"
model = AutoModelForCausalLM.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
text = "def hello_world():"
completion = model.generate(**tokenizer(text, return_tensors="pt"))
print(tokenizer.decode(completion[0]))
def hello_world():
print("Hello World")
hello_world()
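If you want more control over completion length and sampling, a minimal sketch is shown below; the prompt, max_new_tokens, temperature, and the use of the EOS token as padding are illustrative assumptions rather than settings from the example above.
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "Salesforce/codegen-350M-mono"
model = AutoModelForCausalLM.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# A natural-language comment followed by the start of a function definition.
inputs = tokenizer("# return the sum of two numbers\ndef add(", return_tensors="pt")
completion = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.2,
    pad_token_id=tokenizer.eos_token_id,  # CodeGen has no dedicated pad token
)
print(tokenizer.decode(completion[0], skip_special_tokens=True))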
Documentation resources
Causal language modeling task guide
CodeGenConfig
class transformers.CodeGenConfig
<
source
>
(
vocab_size = 50400
n_positions = 2048
n_ctx = 2048
n_embd = 4096
n_layer = 28
n_head = 16
rotary_dim = 64
n_inner = None
activation_function = 'gelu_new'
resid_pdrop = 0.0
embd_pdrop = 0.0
attn_pdrop = 0.0
layer_norm_epsilon = 1e-05
initializer_range = 0.02
use_cache = True
bos_token_id = 50256
eos_token_id = 50256
tie_word_embeddings = False
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50400) —
Vocabulary size of the CodeGen model. Defines the number of different tokens that can be represented by the
input_ids passed when calling CodeGenModel.
n_positions (int, optional, defaults to 2048) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
n_embd (int, optional, defaults to 4096) —
Dimensionality of the embeddings and hidden states.
n_layer (int, optional, defaults to 28) —
Number of hidden layers in the Transformer encoder.
n_head (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
rotary_dim (int, optional, defaults to 64) —
Number of dimensions in the embedding that Rotary Position Embedding is applied to.
n_inner (int, optional, defaults to None) —
Dimensionality of the inner feed-forward layers. None will set it to 4 times n_embd
activation_function (str, optional, defaults to "gelu_new") —
Activation function, to be selected in the list ["relu", "silu", "gelu", "tanh", "gelu_new"].
resid_pdrop (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
embd_pdrop (float, optional, defaults to 0.0) —
The dropout ratio for the embeddings.
attn_pdrop (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
layer_norm_epsilon (float, optional, defaults to 1e-5) —
The epsilon to use in the layer normalization layers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a CodeGenModel. It is used to instantiate a
CodeGen model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the CodeGen
Salesforce/codegen-2B-mono architecture. Configuration objects
inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from
PretrainedConfig for more information.
Example:
from transformers import CodeGenConfig, CodeGenModel
# Initializing a CodeGen 6B configuration
configuration = CodeGenConfig()
# Initializing a model (with random weights) from the configuration
model = CodeGenModel(configuration)
# Accessing the model configuration
configuration = model.config
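Configuration values can also be overridden at construction time. The sketch below builds a deliberately small, randomly initialized CodeGen model for experimentation; the chosen sizes are hypothetical and do not correspond to any released checkpoint.
from transformers import CodeGenConfig, CodeGenModel
# Hypothetical small architecture: 4 layers, 8 heads, 256-dim embeddings.
small_config = CodeGenConfig(n_embd=256, n_layer=4, n_head=8, rotary_dim=32, n_positions=512)
model = CodeGenModel(small_config)
print(sum(p.numel() for p in model.parameters()))  # parameter count of the random model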
CodeGenTokenizer
class transformers.CodeGenTokenizer
<
source
>
(
vocab_file
merges_file
errors = 'replace'
unk_token = '<|endoftext|>'
bos_token = '<|endoftext|>'
eos_token = '<|endoftext|>'
pad_token = None
add_prefix_space = False
add_bos_token = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
unk_token (str, optional, defaults to <|endoftext|>) —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (str, optional, defaults to <|endoftext|>) —
The beginning of sequence token.
eos_token (str, optional, defaults to <|endoftext|>) —
The end of sequence token.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows treating the leading word just like any
other word. (The CodeGen tokenizer detects the beginning of words by the preceding space.)
Construct a CodeGen tokenizer. Based on byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without a space) or not:
from transformers import CodeGenTokenizer
tokenizer = CodeGenTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
tokenizer("Hello world")["input_ids"]
[15496, 995]
tokenizer(" Hello world")["input_ids"]
[18435, 995]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one).
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
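As a minimal sketch of the add_prefix_space behavior described above (assuming the Salesforce/codegen-350M-mono vocabulary shown in the earlier example), instantiating the tokenizer with add_prefix_space=True should make a leading word encode as if it were preceded by a space:
from transformers import CodeGenTokenizer
tokenizer = CodeGenTokenizer.from_pretrained("Salesforce/codegen-350M-mono", add_prefix_space=True)
# Expected to match the ids shown above for " Hello world" rather than "Hello world".
print(tokenizer("Hello world")["input_ids"])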
save_vocabulary
<
source
>
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
CodeGenTokenizerFast
class transformers.CodeGenTokenizerFast
<
source
>
(
vocab_file = None
merges_file = None
tokenizer_file = None
unk_token = '<|endoftext|>'
bos_token = '<|endoftext|>'
eos_token = '<|endoftext|>'
add_prefix_space = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
unk_token (str, optional, defaults to <|endoftext|>) —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (str, optional, defaults to <|endoftext|>) —
The beginning of sequence token.
eos_token (str, optional, defaults to <|endoftext|>) —
The end of sequence token.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows treating the leading word just like any
other word. (The CodeGen tokenizer detects the beginning of words by the preceding space.)
trim_offsets (bool, optional, defaults to True) —
Whether or not the post-processing step should trim offsets to avoid including whitespaces.
Construct a “fast” CodeGen tokenizer (backed by HuggingFace’s tokenizers library). Based on byte-level
Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without a space) or not:
from transformers import CodeGenTokenizerFast
tokenizer = CodeGenTokenizerFast.from_pretrained("Salesforce/codegen-350M-mono")
tokenizer("Hello world")["input_ids"]
[15496, 995]
tokenizer(" Hello world")["input_ids"]
[18435, 995]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer, but since
the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
decode
<
source
>
(
token_ids: typing.Union[int, typing.List[int], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')]
skip_special_tokens: bool = False
clean_up_tokenization_spaces: bool = None
truncate_before_pattern: typing.Optional[typing.List[str]] = None
**kwargs
)
→
str
Parameters
token_ids (Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]) —
List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) —
Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional) —
Whether or not to clean up the tokenization spaces. If None, will default to
self.clean_up_tokenization_spaces (available in the tokenizer_config).
truncate_before_pattern (List[str], optional, defaults to None) —
A list of regular expression strings that will be used to truncate the returned string. This can be
used to remove extra pieces of code (e.g. truncate if observing a comment symbol "#" at the beginning
of a new line). An example pattern could be `["^#", re.escape("<|endoftext|>"), "^'''", "\n\n\n"]`.
kwargs (additional keyword arguments, optional) —
Will be passed to the underlying model specific decode method.
Returns
str
The decoded sentence.
Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special
tokens and clean up tokenization spaces.
Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).
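A minimal sketch of truncate_before_pattern is shown below; the input string stands in for ids returned by model.generate(), and the pattern list is adapted from the example above.
import re
from transformers import CodeGenTokenizerFast
tokenizer = CodeGenTokenizerFast.from_pretrained("Salesforce/codegen-350M-mono")
# Stand-in for generated ids: encode a completion that runs past the first function.
generated_ids = tokenizer("def hello():\n    print('hi')\n# next snippet", return_tensors="pt").input_ids[0]
# Truncate the decoded text at the first comment line or end-of-text token.
text = tokenizer.decode(generated_ids, truncate_before_pattern=["^#", re.escape("<|endoftext|>")])
print(text)  # keeps only the body of hello()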
CodeGenModel
class transformers.CodeGenModel
<
source
>
(
config
)
Parameters
config (CodeGenConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare CodeGen Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_attention_heads,) or (n_layer, num_attention_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_dim), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CodeGenConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CodeGenModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, CodeGenModel
import torch
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-mono")
model = CodeGenModel.from_pretrained("Salesforce/codegen-2B-mono")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
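If you need the intermediate activations documented above, a minimal sketch is shown below; the smaller Salesforce/codegen-350M-mono checkpoint and the prompt are illustrative choices.
import torch
from transformers import AutoTokenizer, CodeGenModel
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
model = CodeGenModel.from_pretrained("Salesforce/codegen-350M-mono")
inputs = tokenizer("def fib(n):", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True, output_attentions=True)
print(len(outputs.hidden_states))   # config.n_layer + 1 (embedding output comes first)
print(outputs.attentions[0].shape)  # (batch_size, num_heads, sequence_length, sequence_length)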
CodeGenForCausalLM
class transformers.CodeGenForCausalLM
<
source
>
(
config
)
Parameters
config (CodeGenConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The CodeGen Model transformer with a language modeling head on top.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_attention_heads,) or (n_layer, num_attention_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_dim), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids Indices are selected in [-100, 0, ..., config.vocab_size] All labels set to -100
are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size]
Returns
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CodeGenConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head))
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CodeGenForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, CodeGenForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-mono")
model = CodeGenForCausalLM.from_pretrained("Salesforce/codegen-2B-mono")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
logits = outputs.logits
Big Transfer (BiT)
Overview
The BiT model was proposed in Big Transfer (BiT): General Visual Representation Learning by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby.
BiT is a simple recipe for scaling up pre-training of ResNet-like architectures (specifically, ResNetv2). The method results in significant improvements for transfer learning.
The abstract from the paper is the following:
Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes — from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance.
Tips:
BiT models are equivalent to ResNetv2 in terms of architecture, except that: 1) all batch normalization layers are replaced by group normalization,
2) weight standardization is used for convolutional layers. The authors show that the combination of both is useful for training with large batch sizes, and has a significant
impact on transfer learning.
This model was contributed by nielsr.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BiT.
Image Classification
BitForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
BitConfig
class transformers.BitConfig
<
source
>
(
num_channels = 3
embedding_size = 64
hidden_sizes = [256, 512, 1024, 2048]
depths = [3, 4, 6, 3]
layer_type = 'preactivation'
hidden_act = 'relu'
global_padding = None
num_groups = 32
drop_path_rate = 0.0
embedding_dynamic_padding = False
output_stride = 32
width_factor = 1
out_features = None
out_indices = None
**kwargs
)
Parameters
num_channels (int, optional, defaults to 3) —
The number of input channels.
embedding_size (int, optional, defaults to 64) —
Dimensionality (hidden size) for the embedding layer.
hidden_sizes (List[int], optional, defaults to [256, 512, 1024, 2048]) —
Dimensionality (hidden size) at each stage.
depths (List[int], optional, defaults to [3, 4, 6, 3]) —
Depth (number of layers) for each stage.
layer_type (str, optional, defaults to "preactivation") —
The layer to use, it can be either "preactivation" or "bottleneck".
hidden_act (str, optional, defaults to "relu") —
The non-linear activation function in each block. If string, "gelu", "relu", "selu" and "gelu_new"
are supported.
global_padding (str, optional) —
Padding strategy to use for the convolutional layers. Can be either "valid", "same", or None.
num_groups (int, optional, defaults to 32) —
Number of groups used for the BitGroupNormActivation layers.
drop_path_rate (float, optional, defaults to 0.0) —
The drop path rate for the stochastic depth.
embedding_dynamic_padding (bool, optional, defaults to False) —
Whether or not to make use of dynamic padding for the embedding layer.
output_stride (int, optional, defaults to 32) —
The output stride of the model.
width_factor (int, optional, defaults to 1) —
The width factor for the model.
out_features (List[str], optional) —
If used as backbone, list of features to output. Can be any of "stem", "stage1", "stage2", etc.
(depending on how many stages the model has). If unset and out_indices is set, will default to the
corresponding stages. If unset and out_indices is unset, will default to the last stage.
out_indices (List[int], optional) —
If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
many stages the model has). If unset and out_features is set, will default to the corresponding stages.
If unset and out_features is unset, will default to the last stage.
This is the configuration class to store the configuration of a BitModel. It is used to instantiate a BiT
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the BiT
google/bit-50 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import BitConfig, BitModel
# Initializing a BiT bit-50 style configuration
configuration = BitConfig()
# Initializing a model (with random weights) from the bit-50 style configuration
model = BitModel(configuration)
# Accessing the model configuration
configuration = model.config
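The out_features argument described above can be set directly on the configuration, for example when the model is meant to serve as a feature-extraction backbone; the stage selection below is an illustrative choice, not a recommended setting.
from transformers import BitConfig, BitModel
# Expose the three deepest stages of the default [3, 4, 6, 3] architecture.
config = BitConfig(out_features=["stage2", "stage3", "stage4"])
model = BitModel(config)
print(config.out_features)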
BitImageProcessor
class transformers.BitImageProcessor
<
source
>
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BICUBIC: 3>
do_center_crop: bool = True
crop_size: typing.Dict[str, int] = None
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_convert_rgb: bool = True
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by
do_resize in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 224}) —
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with
the longest edge resized to keep the input aspect ratio. Can be overridden by size in the preprocess
method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) —
Resampling filter to use if resizing the image. Can be overridden by resample in the preprocess method.
do_center_crop (bool, optional, defaults to True) —
Whether to center crop the image to the specified crop_size. Can be overridden by do_center_crop in the
preprocess method.
crop_size (Dict[str, int], optional, defaults to 224) —
Size of the output image after applying center_crop. Can be overridden by crop_size in the preprocess
method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by do_rescale in
the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by rescale_factor in the preprocess
method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by do_normalize in the preprocess method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
do_convert_rgb (bool, optional, defaults to True) —
Whether to convert the image to RGB.
Constructs a BiT image processor.
preprocess
<
source
>
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: bool = None
size: typing.Dict[str, int] = None
resample: Resampling = None
do_center_crop: bool = None
crop_size: int = None
do_rescale: bool = None
rescale_factor: float = None
do_normalize: bool = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_convert_rgb: bool = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Optional[transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resizing. Shortest edge of the image is resized to size["shortest_edge"], with
the longest edge resized to keep the input aspect ratio.
resample (int, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only
has an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the center crop. Only has an effect if do_center_crop is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image.
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean to use for normalization. Only has an effect if do_normalize is set to True.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation to use for normalization. Only has an effect if do_normalize is set to
True.
do_convert_rgb (bool, optional, defaults to self.do_convert_rgb) —
Whether to convert the image to RGB.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Unset: defaults to the channel dimension format of the input image.
Preprocess an image or batch of images.
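A minimal usage sketch of the processor is shown below; the randomly generated image is a placeholder for a real photo, and google/bit-50 is assumed only because it is the checkpoint referenced elsewhere on this page.
import numpy as np
from PIL import Image
from transformers import BitImageProcessor
image_processor = BitImageProcessor.from_pretrained("google/bit-50")
# Placeholder RGB image standing in for a real input photo.
image = Image.fromarray(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8))
inputs = image_processor(image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # (1, 3, crop_height, crop_width) after resize and center crop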
BitModel
class transformers.BitModel
<
source
>
(
config
)
Parameters
config (BitConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare BiT model outputting raw features without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
pixel_values: Tensor
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See BitImageProcessor.__call__()
for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BitConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The BitModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, BitModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("google/bit-50")
model = BitModel.from_pretrained("google/bit-50")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 2048, 7, 7]
BitForImageClassification
class transformers.BitForImageClassification
<
source
>
(
config
)
Parameters
config (BitConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BiT Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See BitImageProcessor.__call__()
for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BitConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also
called feature maps) of the model at the output of each stage.
The BitForImageClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, BitForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("google/bit-50")
model = BitForImageClassification.from_pretrained("google/bit-50")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tiger cat
UniSpeech
Overview
The UniSpeech model was proposed in UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael
Zeng, Xuedong Huang.
The abstract from the paper is the following:
In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both
unlabeled and labeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive
self-supervised learning are conducted in a multi-task learning manner. The resultant representations can capture
information more correlated with phonetic structures and improve the generalization across languages and domains. We
evaluate the effectiveness of UniSpeech for cross-lingual representation learning on public CommonVoice corpus. The
results show that UniSpeech outperforms self-supervised pretraining and supervised transfer learning for speech
recognition by a maximum of 13.4% and 17.8% relative phone error rate reductions respectively (averaged over all
testing languages). The transferability of UniSpeech is also demonstrated on a domain-shift speech recognition task,
i.e., a relative word error rate reduction of 6% against the previous approach.
Tips:
UniSpeech is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please
use Wav2Vec2Processor for the feature extraction.
The UniSpeech model can be fine-tuned using connectionist temporal classification (CTC), so the model output has to be
decoded using Wav2Vec2CTCTokenizer (see the sketch below).
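As a minimal sketch of the processing and CTC decoding flow described in these tips, the example below reuses the checkpoint that appears later on this page; treat it as a placeholder for any CTC-fine-tuned UniSpeech checkpoint.
import torch
from datasets import load_dataset
from transformers import AutoProcessor, UniSpeechForCTC
checkpoint = "patrickvonplaten/unispeech-large-1500h-cv-timit"  # placeholder CTC checkpoint
processor = AutoProcessor.from_pretrained(checkpoint)
model = UniSpeechForCTC.from_pretrained(checkpoint)
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
audio = dataset[0]["audio"]
inputs = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))  # greedy CTC decoding into text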
This model was contributed by patrickvonplaten. The Authors’ code can be
found here.
Documentation resources
Audio classification task guide
Automatic speech recognition task guide
UniSpeechConfig
class transformers.UniSpeechConfig
<
source
>
(
vocab_size = 32
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout = 0.1
activation_dropout = 0.1
attention_dropout = 0.1
feat_proj_dropout = 0.0
feat_quantizer_dropout = 0.0
final_dropout = 0.1
layerdrop = 0.1
initializer_range = 0.02
layer_norm_eps = 1e-05
feat_extract_norm = 'group'
feat_extract_activation = 'gelu'
conv_dim = (512, 512, 512, 512, 512, 512, 512)
conv_stride = (5, 2, 2, 2, 2, 2, 2)
conv_kernel = (10, 3, 3, 3, 3, 2, 2)
conv_bias = False
num_conv_pos_embeddings = 128
num_conv_pos_embedding_groups = 16
do_stable_layer_norm = False
apply_spec_augment = True
mask_time_prob = 0.05
mask_time_length = 10
mask_time_min_masks = 2
mask_feature_prob = 0.0
mask_feature_length = 10
mask_feature_min_masks = 0
num_codevectors_per_group = 320
num_codevector_groups = 2
contrastive_logits_temperature = 0.1
num_negatives = 100
codevector_dim = 256
proj_codevector_dim = 256
diversity_loss_weight = 0.1
ctc_loss_reduction = 'mean'
ctc_zero_infinity = False
use_weighted_layer_sum = False
classifier_proj_size = 256
num_ctc_classes = 80
pad_token_id = 0
bos_token_id = 1
eos_token_id = 2
replace_prob = 0.5
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 32) —
Vocabulary size of the UniSpeech model. Defines the number of different tokens that can be represented by
the input_ids passed when calling UniSpeechModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
final_dropout (float, optional, defaults to 0.1) —
The dropout probability for the final projection layer of UniSpeechForCTC.
layerdrop (float, optional, defaults to 0.1) —
The LayerDrop probability. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more
details.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
feat_extract_norm (str, optional, defaults to "group") —
The norm to be applied to 1D convolutional layers in feature encoder. One of "group" for group
normalization of only the first 1D convolutional layer or "layer" for layer normalization of all 1D
convolutional layers.
feat_proj_dropout (float, optional, defaults to 0.0) —
The dropout probability for output of the feature encoder.
feat_extract_activation (str, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the 1D convolutional layers of the feature
extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
feat_quantizer_dropout (float, optional, defaults to 0.0) —
The dropout probability for quantized feature encoder states.
conv_dim (Tuple[int] or List[int], optional, defaults to (512, 512, 512, 512, 512, 512, 512)) —
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
feature encoder. The length of conv_dim defines the number of 1D convolutional layers.
conv_stride (Tuple[int] or List[int], optional, defaults to (5, 2, 2, 2, 2, 2, 2)) —
A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length
of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.
conv_kernel (Tuple[int] or List[int], optional, defaults to (10, 3, 3, 3, 3, 2, 2)) —
A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
length of conv_kernel defines the number of convolutional layers and has to match the length of
conv_dim.
conv_bias (bool, optional, defaults to False) —
Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (int, optional, defaults to 128) —
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
embeddings layer.
num_conv_pos_embedding_groups (int, optional, defaults to 16) —
Number of groups of 1D convolutional positional embeddings layer.
do_stable_layer_norm (bool, optional, defaults to False) —
Whether to apply stable layer norm architecture of the Transformer encoder. do_stable_layer_norm is True corresponds to applying layer norm before the attention layer, whereas do_stable_layer_norm is False corresponds to applying layer norm after the attention layer.
apply_spec_augment (bool, optional, defaults to True) —
Whether to apply SpecAugment data augmentation to the outputs of the feature encoder. For reference see
SpecAugment: A Simple Data Augmentation Method for Automatic Speech
Recognition.
mask_time_prob (float, optional, defaults to 0.05) —
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
procedure generates mask_time_prob*len(time_axis)/mask_time_length independent masks over the axis. If
reasoning from the probability of each feature vector to be chosen as the start of the vector span to be
masked, mask_time_prob should be prob_vector_start*mask_time_length. Note that overlap may decrease the
actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_time_length (int, optional, defaults to 10) —
Length of vector span along the time axis.
mask_time_min_masks (int, optional, defaults to 2) —
The minimum number of masks of length mask_time_length generated along the time axis, each time step,
irrespectively of mask_time_prob. Only relevant if mask_time_prob*len(time_axis)/mask_time_length <
mask_time_min_masks.
mask_feature_prob (float, optional, defaults to 0.0) —
Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
masking procedure generates mask_feature_prob*len(feature_axis)/mask_feature_length independent masks over
the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector
span to be masked, mask_feature_prob should be prob_vector_start*mask_feature_length. Note that overlap
may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_feature_length (int, optional, defaults to 10) —
Length of vector span along the feature axis.
mask_feature_min_masks (int, optional, defaults to 0) —
The minimum number of masks of length mask_feature_length generated along the feature axis, each time
step, irrespectively of mask_feature_prob. Only relevant if
mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks.
num_codevectors_per_group (int, optional, defaults to 320) —
Number of entries in each quantization codebook (group).
num_codevector_groups (int, optional, defaults to 2) —
Number of codevector groups for product codevector quantization.
contrastive_logits_temperature (float, optional, defaults to 0.1) —
The temperature kappa in the contrastive loss.
feat_quantizer_dropout (float, optional, defaults to 0.0) —
The dropout probability for the output of the feature encoder that’s used by the quantizer.
num_negatives (int, optional, defaults to 100) —
Number of negative samples for the contrastive loss.
codevector_dim (int, optional, defaults to 256) —
Dimensionality of the quantized feature vectors.
proj_codevector_dim (int, optional, defaults to 256) —
Dimensionality of the final projection of both the quantized and the transformer features.
diversity_loss_weight (float, optional, defaults to 0.1) —
The weight of the codebook diversity loss component.
ctc_loss_reduction (str, optional, defaults to "mean") —
Specifies the reduction to apply to the output of torch.nn.CTCLoss. Only relevant when training an
instance of UniSpeechForCTC.
ctc_zero_infinity (bool, optional, defaults to False) —
Whether to zero infinite losses and the associated gradients of torch.nn.CTCLoss. Infinite losses mainly
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of UniSpeechForCTC.
use_weighted_layer_sum (bool, optional, defaults to False) —
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
instance of UniSpeechForSequenceClassification.
classifier_proj_size (int, optional, defaults to 256) —
Dimensionality of the projection before token mean-pooling for classification.
replace_prob (float, optional, defaults to 0.5) —
Probability that a transformer feature is replaced by a quantized feature during pretraining.
This is the configuration class to store the configuration of a UniSpeechModel. It is used to instantiate a
UniSpeech model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the UniSpeech
microsoft/unispeech-large-1500h-cv architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import UniSpeechConfig, UniSpeechModel
# Initializing a UniSpeech facebook/unispeech-base-960h style configuration
configuration = UniSpeechConfig()
# Initializing a model (with random weights) from the facebook/unispeech-base-960h style configuration
model = UniSpeechModel(configuration)
# Accessing the model configuration
configuration = model.config
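The SpecAugment-related parameters documented above can likewise be overridden when building a configuration; the values below are illustrative examples, not recommended settings.
from transformers import UniSpeechConfig, UniSpeechModel
# Illustrative masking settings for a randomly initialized model.
config = UniSpeechConfig(mask_time_prob=0.1, mask_time_length=5, mask_feature_prob=0.05)
model = UniSpeechModel(config)
print(config.mask_time_prob, config.mask_feature_prob)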
UniSpeech specific outputs
class transformers.models.unispeech.modeling_unispeech.UniSpeechForPreTrainingOutput
<
source
>
(
loss: typing.Optional[torch.FloatTensor] = None
projected_states: FloatTensor = None
projected_quantized_states: FloatTensor = None
codevector_perplexity: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (optional, returned when model is in train mode, torch.FloatTensor of shape (1,)) —
Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official
paper.
projected_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) —
Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked
projected quantized states.
projected_quantized_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) —
Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive
target vectors for contrastive loss.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Output type of UniSpeechForPreTraining, with potential hidden states and attentions.
UniSpeechModel
class transformers.UniSpeechModel
<
source
>
(
config: UniSpeechConfig
)
Parameters
config (UniSpeechConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare UniSpeech Model transformer outputting raw hidden-states without any specific head on top.
UniSpeech was proposed in UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled
Data by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei,
Michael Zeng, Xuedong Huang.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
mask_time_indices: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Wav2Vec2BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should
not be passed to avoid degraded performance when doing batched inference. For such models
input_values should simply be padded with 0 and passed without attention_mask. Be aware that these
models also yield slightly different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Wav2Vec2BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Wav2Vec2BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (UniSpeechConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
extract_features (torch.FloatTensor of shape (batch_size, sequence_length, conv_dim[-1])) — Sequence of extracted feature vectors of the last convolutional layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The UniSpeechModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, UniSpeechModel
import torch
from datasets import load_dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("patrickvonplaten/unispeech-large-1500h-cv-timit")
model = UniSpeechModel.from_pretrained("patrickvonplaten/unispeech-large-1500h-cv-timit")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 292, 1024]
UniSpeechForCTC
class transformers.UniSpeechForCTC
<
source
>
(
config
target_lang: typing.Optional[str] = None
)
Parameters
config (UniSpeechConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
UniSpeech Model with a language modeling head on top for Connectionist Temporal Classification (CTC).
UniSpeech was proposed in UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled
Data by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei,
Michael Zeng, Xuedong Huang.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should
not be passed to avoid degraded performance when doing batched inference. For such models
input_values should simply be padded with 0 and passed without attention_mask. Be aware that these
models also yield slightly different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, target_length), optional) —
Labels for connectionist temporal classification. Note that target_length has to be smaller or equal to
the sequence length of the output logits. Indices are selected in [-100, 0, ..., config.vocab_size - 1].
All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (UniSpeechConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The UniSpeechForCTC forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, UniSpeechForCTC
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("patrickvonplaten/unispeech-large-1500h-cv-timit")
model = UniSpeechForCTC.from_pretrained("patrickvonplaten/unispeech-large-1500h-cv-timit")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe speech
transcription = processor.batch_decode(predicted_ids)
transcription[0]
'mister quilter is the apposl of the midle classes and weare glad to welcom his gosepl'
inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids
# compute loss
loss = model(**inputs).loss
round(loss.item(), 2)
17.17
UniSpeechForSequenceClassification
class transformers.UniSpeechForSequenceClassification
<
source
>
(
config
)
Parameters
config (UniSpeechConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
UniSpeech Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like
SUPERB Keyword Spotting.
UniSpeech was proposed in UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled
Data by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei,
Michael Zeng, Xuedong Huang.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should
not be passed to avoid degraded performance when doing batched inference. For such models
input_values should simply be padded with 0 and passed without attention_mask. Be aware that these
models also yield slightly different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (UniSpeechConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The UniSpeechForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, UniSpeechForSequenceClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("patrickvonplaten/unispeech-large-1500h-cv-timit")
model = UniSpeechForSequenceClassification.from_pretrained("patrickvonplaten/unispeech-large-1500h-cv-timit")
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.argmax(logits, dim=-1).item()
predicted_label = model.config.id2label[predicted_class_ids]
# compute loss - target_label is e.g. "down"
target_label = model.config.id2label[0]
inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
loss = model(**inputs).loss
UniSpeechForPreTraining
class transformers.UniSpeechForPreTraining
<
source
>
(
config: UniSpeechConfig
)
Parameters
config (UniSpeechConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
UniSpeech Model with a vector-quantization module and CTC loss for pre-training.
UniSpeech was proposed in UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled
Data by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei,
Michael Zeng, Xuedong Huang.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.unispeech.modeling_unispeech.UniSpeechForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should
not be passed to avoid degraded performance when doing batched inference. For such models
input_values should simply be padded with 0 and passed without attention_mask. Be aware that these
models also yield slightly different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
mask_time_indices (torch.BoolTensor of shape (batch_size, sequence_length), optional) —
Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict
masked extracted features in config.proj_codevector_dim space.
sampled_negative_indices (torch.BoolTensor of shape (batch_size, sequence_length, num_negatives), optional) —
Indices indicating which quantized target vectors are used as negative sampled vectors in contrastive loss.
Required input for pre-training.
Returns
transformers.models.unispeech.modeling_unispeech.UniSpeechForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.unispeech.modeling_unispeech.UniSpeechForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (UniSpeechConfig) and inputs.
loss (optional, returned when model is in train mode, torch.FloatTensor of shape (1,)) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official
paper.
projected_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) — Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked
projected quantized states.
projected_quantized_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) — Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive
target vectors for contrastive loss.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The UniSpeechForPreTraining forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoFeatureExtractor, UniSpeechForPreTraining
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/unispeech-large-1500h-cv")
model = UniSpeechForPreTraining.from_pretrained("microsoft/unispeech-large-1500h-cv")
# TODO: Add full pretraining example
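Pending the full pretraining example, here is a minimal sketch that simply continues the snippet above: it feeds a hypothetical random one-second 16 kHz waveform through the model and inspects the pretraining outputs. This is only a forward pass under stated assumptions, not the official pretraining recipe.
import numpy as np

# Hypothetical dummy audio: one second of random samples at 16 kHz (assumption, not real speech)
dummy_waveform = np.random.randn(16000).astype(np.float32)
inputs = feature_extractor(dummy_waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Hidden states used to predict the masked quantized targets
print(outputs.projected_states.shape)
# Quantized feature vectors serving as positive targets for the contrastive objective
print(outputs.projected_quantized_states.shape)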
I-BERT
Overview
The I-BERT model was proposed in I-BERT: Integer-only BERT Quantization by
Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney and Kurt Keutzer. It’s a quantized version of RoBERTa running
inference up to four times faster.
The abstract from the paper is the following:
Transformer based models, like BERT and RoBERTa, have achieved state-of-the-art results in many Natural Language
Processing tasks. However, their memory footprint, inference latency, and power consumption are prohibitive for
efficient inference at the edge, and even at the data center. While quantization can be a viable solution for this,
previous work on quantizing Transformer based models use floating-point arithmetic during inference, which cannot
efficiently utilize integer-only logical units such as the recent Turing Tensor Cores, or traditional integer-only ARM
processors. In this work, we propose I-BERT, a novel quantization scheme for Transformer based models that quantizes
the entire inference with integer-only arithmetic. Based on lightweight integer-only approximation methods for
nonlinear operations, e.g., GELU, Softmax, and Layer Normalization, I-BERT performs an end-to-end integer-only BERT
inference without any floating point calculation. We evaluate our approach on GLUE downstream tasks using
RoBERTa-Base/Large. We show that for both cases, I-BERT achieves similar (and slightly higher) accuracy as compared to
the full-precision baseline. Furthermore, our preliminary implementation of I-BERT shows a speedup of 2.4 - 4.0x for
INT8 inference on a T4 GPU system as compared to FP32 inference. The framework has been developed in PyTorch and has
been open-sourced.
This model was contributed by kssteven. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
IBertConfig
class transformers.IBertConfig
<
source
>
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
position_embedding_type = 'absolute'
quant_mode = False
force_dequant = 'none'
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the I-BERT model. Defines the number of different tokens that can be represented by the
input_ids passed when calling IBertModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling IBertModel
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
quant_mode (bool, optional, defaults to False) —
Whether to quantize the model or not.
force_dequant (str, optional, defaults to "none") —
Force dequantization of specific nonlinear layers. Dequantized layers are then executed with full precision.
"none", "gelu", "softmax", "layernorm" and "nonlinear" are supported. By default, it is set to
"none", which does not dequantize any layers. Please specify "gelu", "softmax", or "layernorm" to
dequantize GELU, Softmax, or LayerNorm, respectively. "nonlinear" will dequantize all nonlinear layers,
i.e., GELU, Softmax, and LayerNorm.
This is the configuration class to store the configuration of an IBertModel. It is used to instantiate an I-BERT
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the IBERT
kssteven/ibert-roberta-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
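A minimal sketch of the usual configuration workflow (the resulting model has random, non-pretrained weights; the quant_mode comment is only an illustration of the documented flag):
from transformers import IBertConfig, IBertModel

# Initializing an I-BERT configuration with the default (kssteven/ibert-roberta-base style) values
configuration = IBertConfig()
# Quantization could be enabled instead via the documented flag, e.g. IBertConfig(quant_mode=True)

# Initializing a model from that configuration: the weights are randomly initialized, not pretrained
model = IBertModel(configuration)

# Accessing the model configuration
configuration = model.config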
IBertModel
class transformers.IBertModel
<
source
>
(
config
add_pooling_layer = True
)
Parameters
config (IBertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare I-BERT Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention is
all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (IBertConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The IBertModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, IBertModel
import torch
tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base")
model = IBertModel.from_pretrained("kssteven/ibert-roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
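As a rough orientation (a hedged note, since the exact sequence length depends on how the tokenizer splits this sentence), the returned tensor has shape (batch_size, sequence_length, hidden_size):
print(last_hidden_states.shape)  # e.g. torch.Size([1, 8, 768]), with hidden_size=768 for ibert-roberta-base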
IBertForMaskedLM
class transformers.IBertForMaskedLM
<
source
>
(
config
)
Parameters
config (IBertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
I-BERT Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
kwargs (Dict[str, any], optional, defaults to {}) —
Used to hide legacy arguments that have been deprecated.
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (IBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The IBertForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, IBertForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base")
model = IBertForMaskedLM.from_pretrained("kssteven/ibert-roberta-base")
inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of <mask>
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-<mask> tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
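As a short follow-up sketch (the exact prediction and loss value depend on the checkpoint), the predicted token can be decoded and the masked language modeling loss read off the output:
# Decode the model's fill-in for the <mask> position and inspect the MLM loss
print(tokenizer.decode(predicted_token_id))
print(round(outputs.loss.item(), 2))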
IBertForSequenceClassification
class transformers.IBertForSequenceClassification
<
source
>
(
config
)
Parameters
config (IBertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
I-BERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (IBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The IBertForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, IBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base")
model = IBertForSequenceClassification.from_pretrained("kssteven/ibert-roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = IBertForSequenceClassification.from_pretrained("kssteven/ibert-roberta-base", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, IBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base")
model = IBertForSequenceClassification.from_pretrained("kssteven/ibert-roberta-base", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = IBertForSequenceClassification.from_pretrained(
... "kssteven/ibert-roberta-base", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
IBertForMultipleChoice
class transformers.IBertForMultipleChoice
<
source
>
(
config
)
Parameters
config (IBertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
I-BERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (IBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The IBertForMultipleChoice forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, IBertForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base")
model = IBertForMultipleChoice.from_pretrained("kssteven/ibert-roberta-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
IBertForTokenClassification
class transformers.IBertForTokenClassification
(
config
)
Parameters
config (IBertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
I-BERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (IBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The IBertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, IBertForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base")
model = IBertForTokenClassification.from_pretrained("kssteven/ibert-roberta-base")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
IBertForQuestionAnswering
class transformers.IBertForQuestionAnswering
(
config
)
Parameters
config (IBertConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
I-BERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (IBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The IBertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, IBertForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base")
model = IBertForQuestionAnswering.from_pretrained("kssteven/ibert-roberta-base")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
DistilBERT
Overview
The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, a
distilled version of BERT, and the paper DistilBERT, a
distilled version of BERT: smaller, faster, cheaper and lighter. DistilBERT is a
small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than
bert-base-uncased and runs 60% faster while preserving over 95% of BERT’s performance as measured on the GLUE language
understanding benchmark.
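As a quick illustration of the size difference, you can compare the parameter counts of the two checkpoints directly. The following is a minimal sketch (not part of the original documentation), assuming the standard bert-base-uncased and distilbert-base-uncased checkpoints:
from transformers import AutoModel

bert = AutoModel.from_pretrained("bert-base-uncased")
distilbert = AutoModel.from_pretrained("distilbert-base-uncased")

# Count the parameters of each encoder
bert_params = sum(p.numel() for p in bert.parameters())
distil_params = sum(p.numel() for p in distilbert.parameters())
print(f"BERT: {bert_params / 1e6:.0f}M parameters")
print(f"DistilBERT: {distil_params / 1e6:.0f}M parameters ({100 * (1 - distil_params / bert_params):.0f}% fewer)")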
The abstract from the paper is the following:
As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP),
operating these large models in on-the-edge and/or under constrained computational training or inference budgets
remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation
model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger
counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage
knowledge distillation during the pretraining phase and show that it is possible to reduce the size of a BERT model by
40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive
biases learned by larger models during pretraining, we introduce a triple loss combining language modeling,
distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we
demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device
study.
Tips:
DistilBERT doesn’t have token_type_ids, so you don’t need to indicate which token belongs to which segment. Just
separate your segments with the separation token tokenizer.sep_token (or [SEP]); see the sketch after these tips.
DistilBERT doesn’t have options to select the input positions (position_ids input). This could be added if
necessary; just let us know if you need this option.
Same as BERT but smaller. Trained by distillation of the pretrained BERT model, meaning it’s been trained to predict the same probabilities as the larger model. The actual objective is a combination of:
finding the same probabilities as the teacher model
predicting the masked tokens correctly (but no next-sentence objective)
a cosine similarity between the hidden states of the student and the teacher model
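The following minimal sketch (assuming the distilbert-base-uncased checkpoint) illustrates the first tip: when a sentence pair is encoded, the tokenizer joins the segments with [SEP] and does not return token_type_ids:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoding = tokenizer("How are you?", "I am fine.", return_tensors="pt")
print(encoding.keys())  # only input_ids and attention_mask, no token_type_ids
print(tokenizer.decode(encoding["input_ids"][0]))  # [CLS] how are you? [SEP] i am fine. [SEP]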
This model was contributed by victorsanh. The JAX version of this model was
contributed by kamalkraj. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DistilBERT. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Text Classification
A blog post on Getting Started with Sentiment Analysis using Python with DistilBERT.
A blog post on how to train DistilBERT with Blurr for sequence classification.
A blog post on how to use Ray to tune DistilBERT hyperparameters.
A blog post on how to train DistilBERT with Hugging Face and Amazon SageMaker.
A notebook on how to finetune DistilBERT for multi-label classification. 🌎
A notebook on how to finetune DistilBERT for multiclass classification with PyTorch. 🌎
A notebook on how to finetune DistilBERT for text classification in TensorFlow. 🌎
DistilBertForSequenceClassification is supported by this example script and notebook.
TFDistilBertForSequenceClassification is supported by this example script and notebook.
FlaxDistilBertForSequenceClassification is supported by this example script and notebook.
Text classification task guide
Token Classification
DistilBertForTokenClassification is supported by this example script and notebook.
TFDistilBertForTokenClassification is supported by this example script and notebook.
FlaxDistilBertForTokenClassification is supported by this example script.
Token classification chapter of the 🤗 Hugging Face Course.
Token classification task guide
Fill-Mask
DistilBertForMaskedLM is supported by this example script and notebook.
TFDistilBertForMaskedLM is supported by this example script and notebook.
FlaxDistilBertForMaskedLM is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked language modeling task guide
Question Answering
DistilBertForQuestionAnswering is supported by this example script and notebook.
TFDistilBertForQuestionAnswering is supported by this example script and notebook.
FlaxDistilBertForQuestionAnswering is supported by this example script.
Question answering chapter of the 🤗 Hugging Face Course.
Question answering task guide
Multiple choice
DistilBertForMultipleChoice is supported by this example script and notebook.
TFDistilBertForMultipleChoice is supported by this example script and notebook.
Multiple choice task guide
⚗️ Optimization
A blog post on how to quantize DistilBERT with 🤗 Optimum and Intel.
A blog post on Optimizing Transformers for GPUs with 🤗 Optimum.
A blog post on Optimizing Transformers with Hugging Face Optimum.
⚡️ Inference
A blog post on how to accelerate BERT inference with Hugging Face Transformers and AWS Inferentia, using DistilBERT.
A blog post on Serverless Inference with Hugging Face’s Transformers, DistilBERT and Amazon SageMaker.
🚀 Deploy
A blog post on how to deploy DistilBERT on Google Cloud.
A blog post on how to deploy DistilBERT with Amazon SageMaker.
A blog post on how to deploy BERT with Hugging Face Transformers, Amazon SageMaker, and a Terraform module.
DistilBertConfig
class transformers.DistilBertConfig
(
vocab_size = 30522
max_position_embeddings = 512
sinusoidal_pos_embds = False
n_layers = 6
n_heads = 12
dim = 768
hidden_dim = 3072
dropout = 0.1
attention_dropout = 0.1
activation = 'gelu'
initializer_range = 0.02
qa_dropout = 0.1
seq_classif_dropout = 0.2
pad_token_id = 0
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the DistilBERT model. Defines the number of different tokens that can be represented by
the inputs_ids passed when calling DistilBertModel or TFDistilBertModel.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
sinusoidal_pos_embds (bool, optional, defaults to False) —
Whether to use sinusoidal positional embeddings.
n_layers (int, optional, defaults to 6) —
Number of hidden layers in the Transformer encoder.
n_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
dim (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
hidden_dim (int, optional, defaults to 3072) —
The size of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
activation (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
qa_dropout (float, optional, defaults to 0.1) —
The dropout probabilities used in the question answering model DistilBertForQuestionAnswering.
seq_classif_dropout (float, optional, defaults to 0.2) —
The dropout probabilities used in the sequence classification and the multiple choice model
DistilBertForSequenceClassification.
This is the configuration class to store the configuration of a DistilBertModel or a TFDistilBertModel. It
is used to instantiate a DistilBERT model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the DistilBERT
distilbert-base-uncased architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import DistilBertConfig, DistilBertModel
# Initializing a DistilBERT configuration
configuration = DistilBertConfig()
# Initializing a model (with random weights) from the configuration
model = DistilBertModel(configuration)
# Accessing the model configuration
configuration = model.config
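The parameters documented above can also be overridden to build smaller or larger variants. A minimal sketch (the hyperparameter values below are arbitrary and chosen only for illustration):
from transformers import DistilBertConfig, DistilBertModel

# A smaller, randomly initialized variant; dim must stay divisible by n_heads
small_config = DistilBertConfig(n_layers=3, n_heads=6, dim=384, hidden_dim=1536)
small_model = DistilBertModel(small_config)
print(small_model.config.n_layers, small_model.config.dim)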
DistilBertTokenizer
class transformers.DistilBertTokenizer
(
vocab_file
do_lower_case = True
do_basic_tokenize = True
never_split = None
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (bool, optional, defaults to True) —
Whether or not to do basic tokenization before WordPiece.
never_split (Iterable, optional) —
Collection of tokens which will never be split during tokenization. Only has an effect when
do_basic_tokenize=True
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original BERT).
Construct a DistilBERT tokenizer. Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
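A minimal usage sketch (assuming the distilbert-base-uncased checkpoint), showing the WordPiece tokenization this class produces:
from transformers import DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
tokens = tokenizer.tokenize("Transformers are amazing")
print(tokens)  # lowercased WordPiece pieces; rare words may be split into "##" subwords
encoding = tokenizer("Transformers are amazing")
print(encoding["input_ids"])  # token IDs with [CLS] and [SEP] added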
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens (see the sketch after the formats below). A BERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
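A minimal sketch of the formats above (assuming the distilbert-base-uncased checkpoint):
from transformers import DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
ids_b = tokenizer.encode("How are you?", add_special_tokens=False)
single = tokenizer.build_inputs_with_special_tokens(ids_a)
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
print(tokenizer.decode(single))  # [CLS] hello world [SEP]
print(tokenizer.decode(pair))  # [CLS] hello world [SEP] how are you? [SEP]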
convert_tokens_to_string
(
tokens
)
Converts a sequence of tokens (strings) into a single string.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence
pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
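A minimal sketch of the mask format above (assuming the distilbert-base-uncased checkpoint). Note that DistilBERT models themselves do not consume token_type_ids; the method simply follows the BERT convention:
from transformers import DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
ids_b = tokenizer.encode("How are you?", add_special_tokens=False)
print(tokenizer.create_token_type_ids_from_sequences(ids_a))  # all 0s, covering [CLS] A [SEP]
print(tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b))  # 0s for [CLS] A [SEP], 1s for B [SEP]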
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
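A minimal sketch (assuming the distilbert-base-uncased checkpoint), showing the mask for a sequence both before and after special tokens are added:
from transformers import DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
ids = tokenizer.encode("Hello world", add_special_tokens=False)
# Mask for the sequence as it would look once special tokens are added
print(tokenizer.get_special_tokens_mask(ids))  # [1, 0, 0, 1]
with_special = tokenizer.build_inputs_with_special_tokens(ids)
print(tokenizer.get_special_tokens_mask(with_special, already_has_special_tokens=True))  # [1, 0, 0, 1]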
DistilBertTokenizerFast
class transformers.DistilBertTokenizerFast
(
vocab_file = None
tokenizer_file = None
do_lower_case = True
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
clean_text (bool, optional, defaults to True) —
Whether or not to clean the text before tokenization by removing any control characters and replacing all
whitespace characters with the classic one.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original BERT).
wordpieces_prefix (str, optional, defaults to "##") —
The prefix for subwords.
Construct a “fast” DistilBERT tokenizer (backed by HuggingFace’s tokenizers library). Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
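A minimal usage sketch (assuming the distilbert-base-uncased checkpoint). One advantage of the fast tokenizer is access to the character offsets of each token:
from transformers import DistilBertTokenizerFast

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
encoding = tokenizer("HuggingFace is based in NYC", return_offsets_mapping=True)
tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"])
for token, (start, end) in zip(tokens, encoding["offset_mapping"]):
    print(token, (start, end))  # special tokens map to (0, 0)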
build_inputs_with_special_tokens
(
token_ids_0
token_ids_1 = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A BERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence
pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
DistilBertModel
class transformers.DistilBertModel
(
config: PretrainedConfig
)
Parameters
config (DistilBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare DistilBERT encoder/transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DistilBertConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DistilBertModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, DistilBertModel
import torch
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
DistilBertForMaskedLM
class transformers.DistilBertForMaskedLM
(
config: PretrainedConfig
)
Parameters
config (DistilBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DistilBert Model with a masked language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DistilBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DistilBertForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, DistilBertForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertForMaskedLM.from_pretrained("distilbert-base-uncased")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
DistilBertForSequenceClassification
class transformers.DistilBertForSequenceClassification
(
config: PretrainedConfig
)
Parameters
config (DistilBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DistilBert Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DistilBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DistilBertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, DistilBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, DistilBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = DistilBertForSequenceClassification.from_pretrained(
... "distilbert-base-uncased", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
DistilBertForMultipleChoice
class transformers.DistilBertForMultipleChoice
(
config: PretrainedConfig
)
Parameters
config (DistilBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DistilBert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and
a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DistilBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DistilBertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, DistilBertForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = DistilBertForMultipleChoice.from_pretrained("distilbert-base-cased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([[prompt, choice0], [prompt, choice1]], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
DistilBertForTokenClassification
class transformers.DistilBertForTokenClassification
(
config: PretrainedConfig
)
Parameters
config (DistilBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DistilBert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DistilBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DistilBertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the
latter silently ignores them.
Example:
from transformers import AutoTokenizer, DistilBertForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertForTokenClassification.from_pretrained("distilbert-base-uncased")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word.
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
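To inspect which class was predicted for each token, the input ids can be converted back to token strings and paired with the predicted labels. A small sketch continuing the example above:
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, label in zip(tokens, predicted_tokens_classes):
    print(token, label)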
DistilBertForQuestionAnswering
class transformers.DistilBertForQuestionAnswering
(
config: PretrainedConfig
)
Parameters
config (DistilBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DistilBert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a
linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→ transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DistilBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DistilBertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the
latter silently ignores them.
Example:
from transformers import AutoTokenizer, DistilBertForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
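The predicted span can be decoded back into text with the tokenizer. Note that with the distilbert-base-uncased checkpoint the span-classification head is newly initialized, so the decoded answer is only meaningful after fine-tuning. Continuing the example above:
predicted_answer = tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)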
TFDistilBertModel
class transformers.TFDistilBertModel
(
*args
**kwargs
)
Parameters
config (DistilBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare DistilBERT encoder/transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry about any of this, as you can
just pass inputs like you would to any other Python function!
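As an illustration (not part of the original example code), the three equivalent ways of passing inputs look like this for DistilBERT, which takes input_ids and attention_mask:
from transformers import AutoTokenizer, TFDistilBertModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFDistilBertModel.from_pretrained("distilbert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1. keyword arguments (PyTorch-style)
outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
# 2. a dictionary as the first positional argument
outputs = model({"input_ids": inputs["input_ids"], "attention_mask": inputs["attention_mask"]})
# 3. a list of tensors, in the order given in the docstring
outputs = model([inputs["input_ids"], inputs["attention_mask"]])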
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→ transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DistilBertConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDistilBertModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the
latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFDistilBertModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFDistilBertModel.from_pretrained("distilbert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
TFDistilBertForMaskedLM
class transformers.TFDistilBertForMaskedLM
(
*args
**kwargs
)
Parameters
config (DistilBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DistilBert Model with a masked language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry about any of this, as you can
just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→ transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DistilBertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDistilBertForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the
latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFDistilBertForMaskedLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFDistilBertForMaskedLM.from_pretrained("distilbert-base-uncased")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf")
logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
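The predicted id can be mapped back to a token string with the tokenizer (continuing the example above):
tokenizer.decode(predicted_token_id)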
labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
# mask labels of non-[MASK] tokens
labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
TFDistilBertForSequenceClassification
class transformers.TFDistilBertForSequenceClassification
(
*args
**kwargs
)
Parameters
config (DistilBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DistilBert Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry about any of this, as you can
just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→ transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DistilBertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDistilBertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the
latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFDistilBertForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
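The integer id can be mapped to a label name through the model configuration (a small addition to the example above; with the base checkpoint the classification head is newly initialized, so the labels are the generic LABEL_0/LABEL_1):
predicted_label = model.config.id2label[predicted_class_id]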
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
TFDistilBertForMultipleChoice
class transformers.TFDistilBertForMultipleChoice
(
*args
**kwargs
)
Parameters
config (DistilBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DistilBert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and
a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry about any of this, as you can
just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→ transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1],
where num_choices is the size of the second dimension of the input tensors (see input_ids above).
Returns
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DistilBertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDistilBertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the
latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFDistilBertForMultipleChoice
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFDistilBertForMultipleChoice.from_pretrained("distilbert-base-uncased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
outputs = model(inputs) # batch size is 1
# the linear classifier still needs to be trained
logits = outputs.logits
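As with the PyTorch version, the predicted choice is the argmax over the choice dimension. A minimal continuation of the example above:
predicted_choice = int(tf.math.argmax(logits, axis=-1)[0])  # 0 -> choice0, 1 -> choice1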
TFDistilBertForTokenClassification
class transformers.TFDistilBertForTokenClassification
(
*args
**kwargs
)
Parameters
config (DistilBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DistilBert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry about any of this, as you can
just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→ transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DistilBertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDistilBertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the
latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFDistilBertForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFDistilBertForTokenClassification.from_pretrained("distilbert-base-uncased")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word.
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
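As in the PyTorch example, the input ids can be converted back to token strings and paired with the predicted classes (continuing the example above):
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].numpy().tolist())
for token, label in zip(tokens, predicted_tokens_classes):
    print(token, label)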
TFDistilBertForQuestionAnswering
class transformers.TFDistilBertForQuestionAnswering
(
*args
**kwargs
)
Parameters
config (DistilBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DistilBert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a
linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry about any of this, as you can
just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
start_positions: np.ndarray | tf.Tensor | None = None
end_positions: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→ transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DistilBertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDistilBertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while the
latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFDistilBertForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
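The predicted span can be decoded back into text with the tokenizer; as in the PyTorch example, the span-classification head of the base checkpoint is newly initialized, so the answer is only meaningful after fine-tuning. Continuing the example above:
predicted_answer = tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)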
# target is "nice puppet"
target_start_index = tf.constant([14])
target_end_index = tf.constant([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
FlaxDistilBertModel
class transformers.FlaxDistilBertModel
(
config: DistilBertConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (DistilBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare DistilBert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
head_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
The FlaxDistilBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxDistilBertModel
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = FlaxDistilBertModel.from_pretrained("distilbert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
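Because Flax modules are stateless, the forward pass can also be wrapped in jax.jit to use the Just-In-Time compilation mentioned above. The following is a minimal sketch, not an official recipe; it reuses the model and inputs from the example, and the function name encode is only illustrative:
import jax

@jax.jit
def encode(input_ids, attention_mask):
    # The model object and its parameters are closed over; only the array arguments are traced.
    return model(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state

jitted_last_hidden_state = encode(inputs["input_ids"], inputs["attention_mask"])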
FlaxDistilBertForMaskedLM
class transformers.FlaxDistilBertForMaskedLM
(
config: DistilBertConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (DistilBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DistilBert Model with a language modeling head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
head_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxDistilBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxDistilBertForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = FlaxDistilBertForMaskedLM.from_pretrained("distilbert-base-uncased")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
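To turn the logits into an actual prediction for the masked position, you can locate the [MASK] token and decode the highest-scoring vocabulary id. A minimal sketch, assuming a single [MASK] token as in the example above:
import jax.numpy as jnp

# Position of the (single) [MASK] token in the input sequence
mask_index = int(jnp.argmax(inputs["input_ids"][0] == tokenizer.mask_token_id))
predicted_token_id = int(jnp.argmax(logits[0, mask_index]))
print(tokenizer.decode([predicted_token_id]))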
FlaxDistilBertForSequenceClassification
class transformers.FlaxDistilBertForSequenceClassification
(
config: DistilBertConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (DistilBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DistilBert Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
head_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxDistilBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxDistilBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = FlaxDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
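To map the logits to a label, take the argmax and look it up in model.config.id2label. Note that the distilbert-base-uncased checkpoint does not ship a trained classification head, so the head is randomly initialized and the predicted label is not meaningful here; this sketch only shows the mechanics:
import jax.numpy as jnp

predicted_class_id = int(jnp.argmax(logits, axis=-1)[0])
print(model.config.id2label[predicted_class_id])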
FlaxDistilBertForMultipleChoice
class transformers.FlaxDistilBertForMultipleChoice
(
config: DistilBertConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (DistilBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DistilBert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and
a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
head_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors (see input_ids above). Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxDistilBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxDistilBertForMultipleChoice
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = FlaxDistilBertForMultipleChoice.from_pretrained("distilbert-base-uncased")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="jax", padding=True)
outputs = model(**{k: v[None, :] for k, v in encoding.items()})
logits = outputs.logits
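The choice with the highest score can then be read off the logits. As above, the multiple-choice head of distilbert-base-uncased is randomly initialized, so this sketch only illustrates the mechanics:
import jax.numpy as jnp

predicted_choice = int(jnp.argmax(outputs.logits, axis=-1)[0])
print([choice0, choice1][predicted_choice])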
FlaxDistilBertForTokenClassification
class transformers.FlaxDistilBertForTokenClassification
(
config: DistilBertConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (DistilBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DistilBert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
head_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxDistilBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxDistilBertForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = FlaxDistilBertForTokenClassification.from_pretrained("distilbert-base-uncased")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
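Per-token predictions can be obtained by taking the argmax over the label dimension and mapping the ids through model.config.id2label. Since the distilbert-base-uncased checkpoint has no trained token-classification head, the tags are not meaningful here; this is only a sketch of the mechanics:
import jax.numpy as jnp

predicted_token_class_ids = jnp.argmax(logits, axis=-1)[0]
predicted_token_classes = [model.config.id2label[int(i)] for i in predicted_token_class_ids]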
FlaxDistilBertForQuestionAnswering
class transformers.FlaxDistilBertForQuestionAnswering
(
config: DistilBertConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (DistilBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DistilBert Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
<
source
>
(
input_ids
attention_mask = None
head_mask = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs.
start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxDistilBertPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxDistilBertForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = FlaxDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="jax")
outputs = model(**inputs)
start_scores = outputs.start_logits
end_scores = outputs.end_logits
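Mirroring the TensorFlow example earlier on this page, the scores can be turned into an answer string by taking the argmax of the start and end logits and decoding the corresponding tokens. A minimal sketch (the QA head of distilbert-base-uncased is randomly initialized, so the extracted span is not meaningful; use a fine-tuned checkpoint for real predictions):
import jax.numpy as jnp

answer_start_index = int(jnp.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(jnp.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
print(tokenizer.decode(predict_answer_tokens.tolist()))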
UPerNet
Overview
The UPerNet model was proposed in Unified Perceptual Parsing for Scene Understanding
by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun. UPerNet is a general framework to effectively segment
a wide range of concepts from images, leveraging any vision backbone like ConvNeXt or Swin.
The abstract from the paper is the following:
Humans recognize the visual world at multiple levels: we effortlessly categorize scenes and detect objects inside, while also identifying the textures and surfaces of the objects along with their different compositional parts. In this paper, we study a new task called Unified Perceptual Parsing, which requires the machine vision systems to recognize as many visual concepts as possible from a given image. A multi-task framework called UPerNet and a training strategy are developed to learn from heterogeneous image annotations. We benchmark our framework on Unified Perceptual Parsing and show that it is able to effectively segment a wide range of concepts from images. The trained networks are further applied to discover visual knowledge in natural scenes.
UPerNet framework. Taken from the original paper.
This model was contributed by nielsr. The original code is based on OpenMMLab’s mmsegmentation here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with UPerNet.
Demo notebooks for UPerNet can be found here.
UperNetForSemanticSegmentation is supported by this example script and notebook.
See also: Semantic segmentation task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Usage
UPerNet is a general framework for semantic segmentation. It can be used with any vision backbone, like so:
from transformers import SwinConfig, UperNetConfig, UperNetForSemanticSegmentation
backbone_config = SwinConfig(out_features=["stage1", "stage2", "stage3", "stage4"])
config = UperNetConfig(backbone_config=backbone_config)
model = UperNetForSemanticSegmentation(config)
To use another vision backbone, like ConvNeXt, simply instantiate the model with the appropriate backbone:
from transformers import ConvNextConfig, UperNetConfig, UperNetForSemanticSegmentation
backbone_config = ConvNextConfig(out_features=["stage1", "stage2", "stage3", "stage4"])
config = UperNetConfig(backbone_config=backbone_config)
model = UperNetForSemanticSegmentation(config)
Note that this will randomly initialize all the weights of the model.
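If you want pretrained weights instead, load one of the checkpoints released on the Hub with from_pretrained, for example the openmmlab/upernet-convnext-tiny checkpoint used in the inference example further below:
from transformers import UperNetForSemanticSegmentation

model = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-convnext-tiny")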
UperNetConfig
class transformers.UperNetConfig
(
backbone_config = None
hidden_size = 512
initializer_range = 0.02
pool_scales = [1, 2, 3, 6]
use_auxiliary_head = True
auxiliary_loss_weight = 0.4
auxiliary_in_channels = 384
auxiliary_channels = 256
auxiliary_num_convs = 1
auxiliary_concat_input = False
loss_ignore_index = 255
**kwargs
)
Parameters
backbone_config (PretrainedConfig or dict, optional, defaults to ResNetConfig()) —
The configuration of the backbone model.
hidden_size (int, optional, defaults to 512) —
The number of hidden units in the convolutional layers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
pool_scales (Tuple[int], optional, defaults to [1, 2, 3, 6]) —
Pooling scales used in Pooling Pyramid Module applied on the last feature map.
use_auxiliary_head (bool, optional, defaults to True) —
Whether to use an auxiliary head during training.
auxiliary_loss_weight (float, optional, defaults to 0.4) —
Weight of the cross-entropy loss of the auxiliary head.
auxiliary_channels (int, optional, defaults to 256) —
Number of channels to use in the auxiliary head.
auxiliary_num_convs (int, optional, defaults to 1) —
Number of convolutional layers to use in the auxiliary head.
auxiliary_concat_input (bool, optional, defaults to False) —
Whether to concatenate the output of the auxiliary head with the input before the classification layer.
loss_ignore_index (int, optional, defaults to 255) —
The index that is ignored by the loss function.
This is the configuration class to store the configuration of an UperNetForSemanticSegmentation. It is used to
instantiate an UperNet model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the UperNet
openmmlab/upernet-convnext-tiny architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import UperNetConfig, UperNetForSemanticSegmentation
# Initializing a configuration
configuration = UperNetConfig()
# Initializing a model (with random weights) from the configuration
model = UperNetForSemanticSegmentation(configuration)
# Accessing the model configuration
configuration = model.config
to_dict
(
)
Serializes this instance to a Python dictionary. Overrides the default to_dict(). Returns: Dict[str, any]: a dictionary of all the attributes that make up this configuration instance.
UperNetForSemanticSegmentation
class transformers.UperNetForSemanticSegmentation
(
config
)
Parameters
config (UperNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
UperNet framework leveraging any vision backbone e.g. for ADE20k, CityScapes.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See SegformerImageProcessor.call() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers in case the backbone has them. See
attentions under returned tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers of the backbone. See hidden_states under
returned tensors for more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, height, width), optional) —
Ground truth semantic segmentation maps for computing the loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SemanticSegmenterOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (UperNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel.
The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is to avoid doing two interpolations and losing some quality when a user needs to resize the logits to the original image size as post-processing. You should always check your logits shape and resize as needed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, patch_size, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The UperNetForSemanticSegmentation forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation
from PIL import Image
from huggingface_hub import hf_hub_download
image_processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-convnext-tiny")
model = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-convnext-tiny")
filepath = hf_hub_download(
    repo_id="hf-internal-testing/fixtures_ade20k", filename="ADE_val_00000001.jpg", repo_type="dataset"
)
image = Image.open(filepath).convert("RGB")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height, width)
list(logits.shape)
[1, 150, 512, 512]
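As noted above, the logits may need to be resized to the original image size before taking the per-pixel argmax. A minimal sketch using plain PyTorch ops (image is the PIL image loaded above; PIL's size attribute is (width, height)):
import torch

upsampled_logits = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation_map = upsampled_logits.argmax(dim=1)[0]  # (height, width), one class id per pixel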
MMS
Overview
The MMS model was proposed in Scaling Speech Technology to 1,000+ Languages
by Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, and Michael Auli.
The abstract from the paper is the following:
Expanding the language coverage of speech technology has the potential to improve access to information for many more people.
However, current speech technology is restricted to about one hundred languages which is a small fraction of the over 7,000
languages spoken around the world.
The Massively Multilingual Speech (MMS) project increases the number of supported languages by 10-40x, depending on the task.
The main ingredients are a new dataset based on readings of publicly available religious texts and effectively leveraging
self-supervised learning. We built pre-trained wav2vec 2.0 models covering 1,406 languages,
a single multilingual automatic speech recognition model for 1,107 languages, speech synthesis models
for the same number of languages, as well as a language identification model for 4,017 languages.
Experiments show that our multilingual speech recognition model more than halves the word error rate of
Whisper on 54 languages of the FLEURS benchmark while being trained on a small fraction of the labeled data.
Here are the different models open-sourced in the MMS project. The models and code are originally released here. We have added them to the transformers framework, making them easier to use.
Automatic Speech Recognition (ASR)
The ASR model checkpoints can be found here: mms-1b-fl102, mms-1b-l1107, mms-1b-all. For best accuracy, use the mms-1b-all model.
Tips:
All ASR models accept a float array corresponding to the raw waveform of the speech signal. The raw waveform should be pre-processed with Wav2Vec2FeatureExtractor.
The models were trained using connectionist temporal classification (CTC) so the model output has to be decoded using
Wav2Vec2CTCTokenizer.
You can load different language adapter weights for different languages via load_adapter(). Language adapters consist of only roughly 2 million parameters and can therefore be efficiently loaded on the fly when needed.
Loading
By default, MMS loads adapter weights for English. If you want to load the adapter weights of another language, make sure to specify target_lang=<your-chosen-target-lang> as well as ignore_mismatched_sizes=True.
The ignore_mismatched_sizes=True keyword has to be passed to allow the language model head to be resized according to the vocabulary of the specified language.
Similarly, the processor should be loaded with the same target language:
from transformers import Wav2Vec2ForCTC, AutoProcessor
model_id = "facebook/mms-1b-all"
target_lang = "fra"
processor = AutoProcessor.from_pretrained(model_id, target_lang=target_lang)
model = Wav2Vec2ForCTC.from_pretrained(model_id, target_lang=target_lang, ignore_mismatched_sizes=True)
You can safely ignore a warning such as:
Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/mms-1b-all and are newly initialized because the shapes did not match:
- lm_head.bias: found shape torch.Size([154]) in the checkpoint and torch.Size([314]) in the model instantiated
- lm_head.weight: found shape torch.Size([154, 1280]) in the checkpoint and torch.Size([314, 1280]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
If you want to use the ASR pipeline, you can load your chosen target language as such:
from transformers import pipeline
model_id = "facebook/mms-1b-all"
target_lang = "fra"
pipe = pipeline(model=model_id, model_kwargs={"target_lang": "fra", "ignore_mismatched_sizes": True})
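The pipeline can then be called directly on raw audio. A minimal sketch, where audio is a placeholder for a 1-D float array containing a 16 kHz waveform (for example one of the samples loaded with datasets in the next section):
result = pipe(audio)
print(result["text"])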
Inference
Next, let's look at how we can run MMS in inference and change adapter layers after having called from_pretrained().
First, we load audio data in different languages using the Datasets library.
from datasets import load_dataset, Audio
# English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]
# French
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "fr", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
fr_sample = next(iter(stream_data))["audio"]["array"]
Next, we load the model and processor
from transformers import Wav2Vec2ForCTC, AutoProcessor
import torch
model_id = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
Now we process the audio data, pass the processed audio data to the model and transcribe the model output,
just like we usually do for Wav2Vec2ForCTC.
inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs).logits
ids = torch.argmax(outputs, dim=-1)[0]
transcription = processor.decode(ids)
# 'joe keton disapproved of films and buster also had reservations about the media'
We can now keep the same model in memory and simply switch out the language adapters by
calling the convenient load_adapter() function for the model and set_target_lang() for the tokenizer.
We pass the target language as an input - "fra" for French.
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")
inputs = processor(fr_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs).logits
ids = torch.argmax(outputs, dim=-1)[0]
transcription = processor.decode(ids)
# "ce dernier est volé tout au long de l'histoire romaine"
In the same way the language can be switched out for all other supported languages. Please have a look at:
processor.tokenizer.vocab.keys()
to see all supported languages.
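For example, a small sketch to check whether a particular adapter vocabulary is available before switching to it:
available_langs = processor.tokenizer.vocab.keys()
print("fra" in available_langs)  # True if a French adapter vocabulary is available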
To further improve performance from ASR models, language model decoding can be used. See the documentation here for further details.
Speech Synthesis (TTS)
Individual TTS models are available for each of the 1100+ languages. The models and inference documentation can be found here.
Language Identification (LID)
Different LID models are available based on the number of languages they can recognize - 126, 256, 512, 1024, 2048, 4017.
Inference
First, we install transformers and some other libraries:
pip install torch accelerate torchaudio datasets[audio]
pip install --upgrade transformers
Next, we load a couple of audio samples via datasets. Make sure that the audio data is sampled to 16,000 Hz.
from datasets import load_dataset, Audio
# English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]
# Arabic
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "ar", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
ar_sample = next(iter(stream_data))["audio"]["array"]
Next, we load the model and processor
from transformers import Wav2Vec2ForSequenceClassification, AutoFeatureExtractor
import torch
model_id = "facebook/mms-lid-126"
processor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)
Now we process the audio data and pass it to the model to classify it into a language, just like we usually do for Wav2Vec2 audio classification models such as ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition.
# English
inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs).logits
lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
# 'eng'
# Arabic
inputs = processor(ar_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs).logits
lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
# 'ara'
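Beyond the top-1 prediction, the logits can be turned into probabilities to inspect the most likely languages. A minimal sketch using plain PyTorch ops on the Arabic sample above:
probs = torch.softmax(outputs, dim=-1)[0]
top_probs, top_ids = torch.topk(probs, k=5)
for p, i in zip(top_probs.tolist(), top_ids.tolist()):
    print(model.config.id2label[i], round(p, 3))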
To see all the supported languages of a checkpoint, you can print out the language ids as follows:
model.config.id2label.values()
Audio Pretrained Models
Pretrained models are available in two different sizes: 300M and 1B. The architecture is based on the Wav2Vec2 model, so one can refer to Wav2Vec2's documentation page for further details on how to fine-tune these models for various downstream tasks.
Open-Llama
Overview
The Open-Llama model was proposed in the Open-Llama project by community developer s-JoL.
The model is mainly based on LLaMA with some modifications, incorporating memory-efficient attention from Xformers, stable embedding from Bloom, and shared input-output embedding from PaLM.
The model is pre-trained on both Chinese and English, which gives it better performance on Chinese-language tasks.
This model was contributed by s-JoL. The original code can be found at Open-Llama, and a checkpoint and usage instructions can be found at s-JoL/Open-Llama-V1.
OpenLlamaConfig
class transformers.OpenLlamaConfig
(
vocab_size = 100000
hidden_size = 4096
intermediate_size = 11008
num_hidden_layers = 32
num_attention_heads = 32
hidden_act = 'silu'
max_position_embeddings = 2048
initializer_range = 0.02
rms_norm_eps = 1e-06
use_cache = True
pad_token_id = 0
bos_token_id = 1
eos_token_id = 2
tie_word_embeddings = False
use_memory_efficient_attention = True
hidden_dropout_prob = 0.1
attention_dropout_prob = 0.1
use_stable_embedding = True
shared_input_output_embedding = True
rope_scaling = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 100000) —
Vocabulary size of the Open-Llama model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling OpenLlamaModel.
hidden_size (int, optional, defaults to 4096) —
Dimension of the hidden representations.
intermediate_size (int, optional, defaults to 11008) —
Dimension of the MLP representations.
num_hidden_layers (int, optional, defaults to 32) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 32) —
Number of attention heads for each attention layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "silu") —
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (int, optional, defaults to 2048) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (float, optional, defaults to 1e-06) —
The epsilon used by the RMS normalization layers.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
tie_word_embeddings(bool, optional, defaults to False) —
Whether to tie weight embeddings
rope_scaling (Dict, optional) —
Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is {"type": strategy name, "factor": scaling factor}. When using this flag, don't update max_position_embeddings to the expected new maximum. See the following thread for more information on how these scaling strategies behave: https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an experimental feature, subject to breaking API changes in future versions.
This is the configuration class to store the configuration of a OpenLlamaModel. It is used to instantiate an
Open-Llama model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the
s-JoL/Open-Llama-V1.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import OpenLlamaModel, OpenLlamaConfig
# Initializing an Open-Llama open_llama-7b style configuration
configuration = OpenLlamaConfig()
# Initializing a model from the open_llama-7b style configuration
model = OpenLlamaModel(configuration)
# Accessing the model configuration
configuration = model.config
OpenLlamaModel
class transformers.OpenLlamaModel
(
config: OpenLlamaConfig
)
Parameters
config (OpenLlamaConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare Open-Llama Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Transformer decoder consisting of config.num_hidden_layers layers. Each layer is an OpenLlamaDecoderLayer.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
If past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
If you want to change padding behavior, you should read modeling_opt._prepare_decoder_attention_mask
and modify to your needs. See diagram 1 in the paper for more
information on the default strategy.
1 indicates the head is not masked,
0 indicates the head is masked.
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
The OpenLlamaModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
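A minimal usage sketch for the bare model (there is no canonical Hub checkpoint name here, so PATH_TO_CONVERTED_WEIGHTS and PATH_TO_CONVERTED_TOKENIZER are placeholders, as in the OpenLlamaForCausalLM example below):
from transformers import AutoTokenizer, OpenLlamaModel

tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)  # placeholder path
model = OpenLlamaModel.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)  # placeholder path

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)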
OpenLlamaForCausalLM
class transformers.OpenLlamaForCausalLM
(
config
)
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
If past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
If you want to change padding behavior, you should read modeling_opt._prepare_decoder_attention_mask
and modify to your needs. See diagram 1 in the paper for more
information on the default strategy.
1 indicates the head is not masked,
0 indicates the head is masked.
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (OpenLlamaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The OpenLlamaForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, OpenLlamaForCausalLM
model = OpenLlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt")
# Generate
generate_ids = model.generate(inputs.input_ids, max_length=30)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
"Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
OpenLlamaForSequenceClassification
class transformers.OpenLlamaForSequenceClassification(config)
Parameters
config (OpenLlamaConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The LLaMa Model transformer with a sequence classification head on top (linear layer).
OpenLlamaForSequenceClassification uses the last token in order to do the classification, as other causal
models (e.g. GPT-2) do.
Since it does classification on the last token, it needs to know the position of that token. If a
pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If
no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when inputs_embeds are passed instead of input_ids, it does the same (takes the last value in
each row of the batch).
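To make the last-token selection concrete, here is a minimal sketch of the behavior described above (an illustration, not the model's internal code); the toy ids and pad_token_id below are assumptions:
import torch

# toy batch; assume pad_token_id = 0 for this illustration
input_ids = torch.tensor([[101, 2023, 2003, 0, 0],
                          [101, 7592, 2088, 999, 102]])
pad_token_id = 0

if pad_token_id is not None:
    # index of the last token that is not padding, per row
    sequence_lengths = torch.eq(input_ids, pad_token_id).int().argmax(-1) - 1
else:
    # no pad token configured: fall back to the last position of each row
    sequence_lengths = -1

print(sequence_lengths)  # tensor([ 2, -1]); -1 picks the final position via negative indexing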
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward(
    input_ids: LongTensor = None,
    attention_mask: typing.Optional[torch.Tensor] = None,
    position_ids: typing.Optional[torch.LongTensor] = None,
    past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None,
    inputs_embeds: typing.Optional[torch.FloatTensor] = None,
    labels: typing.Optional[torch.LongTensor] = None,
    use_cache: typing.Optional[bool] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
If past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
If you want to change padding behavior, you should read modeling_opt._prepare_decoder_attention_mask
and modify to your needs. See diagram 1 in the paper for more
information on the default strategy.
1 indicates the head is not masked,
0 indicates the head is masked.
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
The OpenLlamaForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
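Example (a minimal sketch; as in the causal-LM example above, PATH_TO_CONVERTED_WEIGHTS and PATH_TO_CONVERTED_TOKENIZER are placeholders for converted Open-Llama checkpoints, and num_labels=2 is an assumption made only for this illustration):
import torch
from transformers import AutoTokenizer, OpenLlamaForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
model = OpenLlamaForSequenceClassification.from_pretrained(PATH_TO_CONVERTED_WEIGHTS, num_labels=2)

inputs = tokenizer("The movie was great!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax(dim=-1).item()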
ConvBERT
Overview
The ConvBERT model was proposed in ConvBERT: Improving BERT with Span-based Dynamic Convolution by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng
Yan.
The abstract from the paper is the following:
Pre-trained language models like BERT and its variants have recently achieved impressive performance in various
natural language understanding tasks. However, BERT heavily relies on the global self-attention block and thus suffers
large memory footprint and computation cost. Although all its attention heads query on the whole input sequence for
generating the attention map from a global perspective, we observe some heads only need to learn local dependencies,
which means the existence of computation redundancy. We therefore propose a novel span-based dynamic convolution to
replace these self-attention heads to directly model local dependencies. The novel convolution heads, together with the
rest self-attention heads, form a new mixed attention block that is more efficient at both global and local context
learning. We equip BERT with this mixed attention design and build a ConvBERT model. Experiments have shown that
ConvBERT significantly outperforms BERT and its variants in various downstream tasks, with lower training cost and
fewer model parameters. Remarkably, ConvBERTbase model achieves 86.4 GLUE score, 0.7 higher than ELECTRAbase, while
using less than 1/4 training cost. Code and pre-trained models will be released.
ConvBERT training tips are similar to those of BERT.
This model was contributed by abhishek. The original implementation can be found
here: https://github.com/yitu-opensource/ConvBert
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
ConvBertConfig
class transformers.ConvBertConfig(
    vocab_size = 30522,
    hidden_size = 768,
    num_hidden_layers = 12,
    num_attention_heads = 12,
    intermediate_size = 3072,
    hidden_act = 'gelu',
    hidden_dropout_prob = 0.1,
    attention_probs_dropout_prob = 0.1,
    max_position_embeddings = 512,
    type_vocab_size = 2,
    initializer_range = 0.02,
    layer_norm_eps = 1e-12,
    pad_token_id = 1,
    bos_token_id = 0,
    eos_token_id = 2,
    embedding_size = 768,
    head_ratio = 2,
    conv_kernel_size = 9,
    num_groups = 1,
    classifier_dropout = None,
    **kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the ConvBERT model. Defines the number of different tokens that can be represented by
the inputs_ids passed when calling ConvBertModel or TFConvBertModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling ConvBertModel or TFConvBertModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
head_ratio (int, optional, defaults to 2) —
Ratio gamma to reduce the number of attention heads.
num_groups (int, optional, defaults to 1) —
The number of groups for grouped linear layers in the ConvBERT model.
conv_kernel_size (int, optional, defaults to 9) —
The size of the convolutional kernel.
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
This is the configuration class to store the configuration of a ConvBertModel. It is used to instantiate a
ConvBERT model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the ConvBERT
YituTech/conv-bert-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import ConvBertConfig, ConvBertModel
# Initializing a ConvBERT convbert-base-uncased style configuration
configuration = ConvBertConfig()
# Initializing a model (with random weights) from the convbert-base-uncased style configuration
model = ConvBertModel(configuration)
# Accessing the model configuration
configuration = model.config
ConvBertTokenizer
class transformers.ConvBertTokenizer(
    vocab_file,
    do_lower_case = True,
    do_basic_tokenize = True,
    never_split = None,
    unk_token = '[UNK]',
    sep_token = '[SEP]',
    pad_token = '[PAD]',
    cls_token = '[CLS]',
    mask_token = '[MASK]',
    tokenize_chinese_chars = True,
    strip_accents = None,
    **kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (bool, optional, defaults to True) —
Whether or not to do basic tokenization before WordPiece.
never_split (Iterable, optional) —
Collection of tokens which will never be split during tokenization. Only has an effect when
do_basic_tokenize=True.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original ConvBERT).
Construct a ConvBERT tokenizer. Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
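A minimal usage sketch (assuming the YituTech/conv-bert-base vocabulary used in the examples further down this page):
from transformers import ConvBertTokenizer

tokenizer = ConvBertTokenizer.from_pretrained("YituTech/conv-bert-base")
encoding = tokenizer("Hello, my dog is cute")
print(encoding["input_ids"])  # WordPiece ids, wrapped in [CLS] ... [SEP]
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))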
build_inputs_with_special_tokens(
    token_ids_0: typing.List[int],
    token_ids_1: typing.Optional[typing.List[int]] = None,
) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A ConvBERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
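A short sketch of this method (continuing the tokenizer instantiated above; ids_a and ids_b are assumed to be plain lists of token ids without special tokens):
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("How are you?"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("I am fine."))

single = tokenizer.build_inputs_with_special_tokens(ids_a)
# -> [cls_id] + ids_a + [sep_id]
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
# -> [cls_id] + ids_a + [sep_id] + ids_b + [sep_id]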
get_special_tokens_mask(
    token_ids_0: typing.List[int],
    token_ids_1: typing.Optional[typing.List[int]] = None,
    already_has_special_tokens: bool = False,
) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences(
    token_ids_0: typing.List[int],
    token_ids_1: typing.Optional[typing.List[int]] = None,
) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A ConvBERT
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
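Continuing the sketch above, the token type ids for the same pair could be obtained as follows:
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
# zeros over [CLS] A [SEP], ones over B [SEP]
assert len(token_type_ids) == len(tokenizer.build_inputs_with_special_tokens(ids_a, ids_b))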
save_vocabulary(
    save_directory: str,
    filename_prefix: typing.Optional[str] = None,
)
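A minimal sketch of writing the vocabulary file to disk, continuing the tokenizer above (the directory name and prefix are arbitrary):
import os

os.makedirs("convbert-vocab", exist_ok=True)
vocab_files = tokenizer.save_vocabulary("convbert-vocab", filename_prefix="convbert")
print(vocab_files)  # tuple containing the path of the written vocabulary file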
ConvBertTokenizerFast
class transformers.ConvBertTokenizerFast(
    vocab_file = None,
    tokenizer_file = None,
    do_lower_case = True,
    unk_token = '[UNK]',
    sep_token = '[SEP]',
    pad_token = '[PAD]',
    cls_token = '[CLS]',
    mask_token = '[MASK]',
    tokenize_chinese_chars = True,
    strip_accents = None,
    **kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
clean_text (bool, optional, defaults to True) —
Whether or not to clean the text before tokenization by removing any control characters and replacing all
whitespace characters with the classic space character.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original ConvBERT).
wordpieces_prefix (str, optional, defaults to "##") —
The prefix for subwords.
Construct a “fast” ConvBERT tokenizer (backed by HuggingFace’s tokenizers library). Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
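One practical difference from the slow tokenizer is direct access to offset mappings; a minimal sketch (again assuming the YituTech/conv-bert-base checkpoint):
from transformers import ConvBertTokenizerFast

fast_tokenizer = ConvBertTokenizerFast.from_pretrained("YituTech/conv-bert-base")
encoding = fast_tokenizer("Hello, my dog is cute", return_offsets_mapping=True)
print(encoding["offset_mapping"])  # character spans for each token, (0, 0) for special tokens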
build_inputs_with_special_tokens(
    token_ids_0,
    token_ids_1 = None,
) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A ConvBERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
create_token_type_ids_from_sequences(
    token_ids_0: typing.List[int],
    token_ids_1: typing.Optional[typing.List[int]] = None,
) → List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A ConvBERT
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
ConvBertModel
class transformers.ConvBertModel(config)
Parameters
config (ConvBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ConvBERT Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward(
    input_ids: typing.Optional[torch.LongTensor] = None,
    attention_mask: typing.Optional[torch.FloatTensor] = None,
    token_type_ids: typing.Optional[torch.LongTensor] = None,
    position_ids: typing.Optional[torch.LongTensor] = None,
    head_mask: typing.Optional[torch.FloatTensor] = None,
    inputs_embeds: typing.Optional[torch.FloatTensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_outputs.BaseModelOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
Returns
transformers.modeling_outputs.BaseModelOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ConvBertConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The ConvBertModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ConvBertModel
import torch
tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = ConvBertModel.from_pretrained("YituTech/conv-bert-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
ConvBertForMaskedLM
class transformers.ConvBertForMaskedLM(config)
Parameters
config (ConvBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ConvBERT Model with a language modeling head on top.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward(
    input_ids: typing.Optional[torch.LongTensor] = None,
    attention_mask: typing.Optional[torch.FloatTensor] = None,
    token_type_ids: typing.Optional[torch.LongTensor] = None,
    position_ids: typing.Optional[torch.LongTensor] = None,
    head_mask: typing.Optional[torch.FloatTensor] = None,
    inputs_embeds: typing.Optional[torch.FloatTensor] = None,
    labels: typing.Optional[torch.LongTensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ConvBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ConvBertForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ConvBertForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = ConvBertForMaskedLM.from_pretrained("YituTech/conv-bert-base")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
ConvBertForSequenceClassification
class transformers.ConvBertForSequenceClassification(config)
Parameters
config (ConvBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ConvBERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward(
    input_ids: typing.Optional[torch.LongTensor] = None,
    attention_mask: typing.Optional[torch.FloatTensor] = None,
    token_type_ids: typing.Optional[torch.LongTensor] = None,
    position_ids: typing.Optional[torch.LongTensor] = None,
    head_mask: typing.Optional[torch.FloatTensor] = None,
    inputs_embeds: typing.Optional[torch.FloatTensor] = None,
    labels: typing.Optional[torch.LongTensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ConvBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ConvBertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, ConvBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = ConvBertForSequenceClassification.from_pretrained("YituTech/conv-bert-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = ConvBertForSequenceClassification.from_pretrained("YituTech/conv-bert-base", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, ConvBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = ConvBertForSequenceClassification.from_pretrained("YituTech/conv-bert-base", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = ConvBertForSequenceClassification.from_pretrained(
... "YituTech/conv-bert-base", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
ConvBertForMultipleChoice
class transformers.ConvBertForMultipleChoice(config)
Parameters
config (ConvBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ConvBERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward(
    input_ids: typing.Optional[torch.LongTensor] = None,
    attention_mask: typing.Optional[torch.FloatTensor] = None,
    token_type_ids: typing.Optional[torch.LongTensor] = None,
    position_ids: typing.Optional[torch.LongTensor] = None,
    head_mask: typing.Optional[torch.FloatTensor] = None,
    inputs_embeds: typing.Optional[torch.FloatTensor] = None,
    labels: typing.Optional[torch.LongTensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ConvBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ConvBertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ConvBertForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = ConvBertForMultipleChoice.from_pretrained("YituTech/conv-bert-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
ConvBertForTokenClassification
class transformers.ConvBertForTokenClassification(config)
Parameters
config (ConvBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ConvBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward(
    input_ids: typing.Optional[torch.LongTensor] = None,
    attention_mask: typing.Optional[torch.FloatTensor] = None,
    token_type_ids: typing.Optional[torch.LongTensor] = None,
    position_ids: typing.Optional[torch.LongTensor] = None,
    head_mask: typing.Optional[torch.FloatTensor] = None,
    inputs_embeds: typing.Optional[torch.FloatTensor] = None,
    labels: typing.Optional[torch.LongTensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ConvBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ConvBertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ConvBertForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = ConvBertForTokenClassification.from_pretrained("YituTech/conv-bert-base")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word.
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
ConvBertForQuestionAnswering
class transformers.ConvBertForQuestionAnswering(config)
Parameters
config (ConvBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ConvBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward(
    input_ids: typing.Optional[torch.LongTensor] = None,
    attention_mask: typing.Optional[torch.FloatTensor] = None,
    token_type_ids: typing.Optional[torch.LongTensor] = None,
    position_ids: typing.Optional[torch.LongTensor] = None,
    head_mask: typing.Optional[torch.FloatTensor] = None,
    inputs_embeds: typing.Optional[torch.FloatTensor] = None,
    start_positions: typing.Optional[torch.LongTensor] = None,
    end_positions: typing.Optional[torch.LongTensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ConvBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ConvBertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ConvBertForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = ConvBertForQuestionAnswering.from_pretrained("YituTech/conv-bert-base")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
TFConvBertModel
class transformers.TFConvBertModel(*args, **kwargs)
Parameters
config (ConvBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ConvBERT Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
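A minimal sketch of the three calling conventions described above (reusing the YituTech/conv-bert-base checkpoint from the PyTorch examples):
from transformers import AutoTokenizer, TFConvBertModel

tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = TFConvBertModel.from_pretrained("YituTech/conv-bert-base")
batch = tokenizer(["Hello, my dog is cute"], return_tensors="tf")

# 1) keyword arguments
out1 = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"])
# 2) a list in the first positional argument, in the order given in the docstring
out2 = model([batch["input_ids"], batch["attention_mask"]])
# 3) a dictionary keyed by the input names
out3 = model({"input_ids": batch["input_ids"], "attention_mask": batch["attention_mask"]})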
call(
    input_ids: TFModelInputType | None = None,
    attention_mask: Optional[Union[np.array, tf.Tensor]] = None,
    token_type_ids: Optional[Union[np.array, tf.Tensor]] = None,
    position_ids: Optional[Union[np.array, tf.Tensor]] = None,
    head_mask: Optional[Union[np.array, tf.Tensor]] = None,
    inputs_embeds: tf.Tensor | None = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    training: bool = False,
) → transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ConvBertConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFConvBertModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, TFConvBertModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = TFConvBertModel.from_pretrained("YituTech/conv-bert-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
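The returned last_hidden_state has shape (batch_size, sequence_length, hidden_size). As a quick sanity check (a minimal follow-up sketch; the hidden size of 768 is an assumption about this particular base checkpoint's configuration):
print(last_hidden_states.shape)  # e.g. (1, sequence_length, 768) for this checkpoint (assumed hidden size)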
TFConvBertForMaskedLM
class transformers.TFConvBertForMaskedLM
<
source
>
(
*args
**kwargs
)
Parameters
config (ConvBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ConvBERT Model with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
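As a concrete illustration of these input formats, the following minimal sketch (reusing the YituTech/conv-bert-base checkpoint from the examples below; any ConvBERT checkpoint would work the same way) calls the same model with keyword arguments, with a list, and with a dictionary:
from transformers import AutoTokenizer, TFConvBertForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = TFConvBertForMaskedLM.from_pretrained("YituTech/conv-bert-base")
enc = tokenizer("Hello, my dog is cute", return_tensors="tf")
# 1. all inputs as keyword arguments
out_kwargs = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])
# 2. a list of tensors in the order given in the docstring
out_list = model([enc["input_ids"], enc["attention_mask"]])
# 3. a dictionary mapping input names to tensors
out_dict = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})
All three calls produce equivalent outputs.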
call
<
source
>
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ConvBertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFConvBertForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, TFConvBertForMaskedLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = TFConvBertForMaskedLM.from_pretrained("YituTech/conv-bert-base")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf")
logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
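To map the predicted id back to a token string, the tokenizer can decode it (a small follow-up sketch; since the checkpoint ships a pretrained MLM head, the prediction is typically a plausible word, but this is not guaranteed):
predicted_word = tokenizer.decode(predicted_token_id)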
Copied
labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
# mask labels of non-[MASK] tokens
labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
TFConvBertForSequenceClassification
class transformers.TFConvBertForSequenceClassification
<
source
>
(
*args
**kwargs
)
Parameters
config (ConvBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ConvBERT Model transformer with a sequence classification/regression head on top e.g., for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
<
source
>
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ConvBertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFConvBertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, TFConvBertForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = TFConvBertForSequenceClassification.from_pretrained("YituTech/conv-bert-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
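The integer id can be mapped to a label string through the model config (a small follow-up sketch; note that the classification head of this base checkpoint is randomly initialized, so without fine-tuning the labels are just generic placeholders such as LABEL_0):
predicted_label = model.config.id2label[predicted_class_id]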
Copied
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFConvBertForSequenceClassification.from_pretrained("YituTech/conv-bert-base", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
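Because the model is a regular tf.keras.Model, it can also be fine-tuned with the usual compile()/fit() workflow. The following is a hypothetical sketch only: train_texts (a list of strings) and train_labels (one integer per text) are assumed to exist, and compile() is called without an explicit loss so that the model's built-in loss computation is used (supported in recent Transformers releases):
import tensorflow as tf
from transformers import AutoTokenizer, TFConvBertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = TFConvBertForSequenceClassification.from_pretrained("YituTech/conv-bert-base", num_labels=2)

# hypothetical data: train_texts is a list of strings, train_labels a list of ints
train_encodings = tokenizer(train_texts, padding=True, truncation=True, return_tensors="tf")
dataset = tf.data.Dataset.from_tensor_slices((dict(train_encodings), train_labels)).batch(8)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5))  # no loss passed: the model's internal loss is used
model.fit(dataset, epochs=1)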
TFConvBertForMultipleChoice
class transformers.TFConvBertForMultipleChoice
<
source
>
(
*args
**kwargs
)
Parameters
config (ConvBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ConvBERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
<
source
>
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1],
where num_choices is the size of the second dimension of the input tensors (see input_ids above).
Returns
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ConvBertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFConvBertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, TFConvBertForMultipleChoice
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = TFConvBertForMultipleChoice.from_pretrained("YituTech/conv-bert-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
outputs = model(inputs) # batch size is 1
# the linear classifier still needs to be trained
logits = outputs.logits
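The best choice is the argmax over the choice logits (a small follow-up sketch; as noted in the comment above, the classifier is untrained for this base checkpoint, so the result is only meaningful after fine-tuning):
predicted_choice = int(tf.math.argmax(logits, axis=-1)[0])  # 0 -> choice0, 1 -> choice1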
TFConvBertForTokenClassification
class transformers.TFConvBertForTokenClassification
<
source
>
(
*args
**kwargs
)
Parameters
config (ConvBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ConvBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
<
source
>
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ConvBertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFConvBertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, TFConvBertForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = TFConvBertForTokenClassification.from_pretrained("YituTech/conv-bert-base")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word.
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
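To see which class was assigned to which token, the predictions can be paired with the decoded tokens (a minimal follow-up sketch; with the untrained token classification head of this base checkpoint the labels are placeholders until fine-tuning):
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].numpy().tolist())
token_label_pairs = list(zip(tokens, predicted_tokens_classes))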
Copied
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
TFConvBertForQuestionAnswering
class transformers.TFConvBertForQuestionAnswering
<
source
>
(
*args
**kwargs
)
Parameters
config (ConvBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ConvBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
<
source
>
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
start_positions: tf.Tensor | None = None
end_positions: tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ConvBertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFConvBertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, TFConvBertForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = TFConvBertForQuestionAnswering.from_pretrained("YituTech/conv-bert-base")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
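The selected span can then be decoded back into text (a small follow-up sketch; the QA head of this base checkpoint is randomly initialized, so a fine-tuned checkpoint is required for meaningful answers):
predicted_answer = tokenizer.decode(predict_answer_tokens)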
Copied
# target is "nice puppet"
target_start_index = tf.constant([14])
target_end_index = tf.constant([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
Data2Vec
Overview
The Data2Vec model was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli.
Data2Vec proposes a unified framework for self-supervised learning across different data modalities - text, audio and images.
Importantly, predicted targets for pre-training are contextualized latent representations of the inputs, rather than modality-specific, context-independent targets.
The abstract from the paper is the following:
While the general idea of self-supervised learning is identical across modalities, the actual algorithms and
objectives differ widely because they were developed with a single modality in mind. To get us closer to general
self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech,
NLP or computer vision. The core idea is to predict latent representations of the full input data based on a
masked view of the input in a self-distillation setup using a standard Transformer architecture.
Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which
are local in nature, data2vec predicts contextualized latent representations that contain information from
the entire input. Experiments on the major benchmarks of speech recognition, image classification, and
natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
Models and code are available at www.github.com/pytorch/fairseq/tree/master/examples/data2vec.
Tips:
Data2VecAudio, Data2VecText, and Data2VecVision have all been trained using the same self-supervised learning method.
For Data2VecAudio, preprocessing is identical to Wav2Vec2Model, including feature extraction.
For Data2VecText, preprocessing is identical to RobertaModel, including tokenization.
For Data2VecVision, preprocessing is identical to BeitModel, including feature extraction.
This model was contributed by edugp and patrickvonplaten.
sayakpaul and Rocketknight1 contributed Data2Vec for vision in TensorFlow.
The original code (for NLP and Speech) can be found here.
The original code for vision can be found here.
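Since Data2VecText preprocessing is identical to RoBERTa, the model is used exactly like a RoBERTa encoder. A minimal sketch, assuming the facebook/data2vec-text-base checkpoint referenced in the configuration section below:
from transformers import AutoTokenizer, Data2VecTextModel

tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
model = Data2VecTextModel.from_pretrained("facebook/data2vec-text-base")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state  # contextualized representations, shape (1, sequence_length, hidden_size)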
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Data2Vec.
Image Classification
Data2VecVisionForImageClassification is supported by this example script and notebook.
To fine-tune TFData2VecVisionForImageClassification on a custom dataset, see this notebook.
Data2VecText documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
Data2VecAudio documentation resources
Audio classification task guide
Automatic speech recognition task guide
Data2VecVision documentation resources
Image classification
Semantic segmentation
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Data2VecTextConfig
class transformers.Data2VecTextConfig
<
source
>
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
position_embedding_type = 'absolute'
use_cache = True
classifier_dropout = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the Data2VecText model. Defines the number of different tokens that can be represented by
the input_ids passed when calling Data2VecTextModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling Data2VecTextModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
This is the configuration class to store the configuration of a Data2VecTextModel. It
is used to instantiate a Data2VecText model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the Data2VecText
facebook/data2vec-text-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
Copied
from transformers import Data2VecTextConfig, Data2VecTextModel
# Initializing a Data2VecText facebook/data2vec-text-base style configuration
configuration = Data2VecTextConfig()
# Initializing a model (with random weights) from the facebook/data2vec-text-base style configuration
model = Data2VecTextModel(configuration)
# Accessing the model configuration
configuration = model.config
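The defaults can also be overridden when instantiating the configuration; a small sketch with purely illustrative values (not a released checkpoint):
from transformers import Data2VecTextConfig, Data2VecTextModel

# a smaller, hypothetical configuration (hidden_size must stay divisible by num_attention_heads)
small_config = Data2VecTextConfig(hidden_size=512, num_hidden_layers=6, num_attention_heads=8, intermediate_size=2048)
model = Data2VecTextModel(small_config)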
Data2VecAudioConfig
class transformers.Data2VecAudioConfig
<
source
>
(
vocab_size = 32
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout = 0.1
activation_dropout = 0.1
attention_dropout = 0.1
feat_proj_dropout = 0.0
final_dropout = 0.1
layerdrop = 0.1
initializer_range = 0.02
layer_norm_eps = 1e-05
feat_extract_activation = 'gelu'
conv_dim = (512, 512, 512, 512, 512, 512, 512)
conv_stride = (5, 2, 2, 2, 2, 2, 2)
conv_kernel = (10, 3, 3, 3, 3, 2, 2)
conv_bias = False
num_conv_pos_embedding_groups = 16
conv_pos_kernel_size = 19
num_conv_pos_embeddings = 5
mask_time_prob = 0.05
mask_time_length = 10
mask_time_min_masks = 2
mask_feature_prob = 0.0
mask_feature_length = 10
mask_feature_min_masks = 0
ctc_loss_reduction = 'sum'
ctc_zero_infinity = False
use_weighted_layer_sum = False
classifier_proj_size = 256
tdnn_dim = (512, 512, 512, 512, 1500)
tdnn_kernel = (5, 3, 3, 1, 1)
tdnn_dilation = (1, 2, 3, 1, 1)
xvector_output_dim = 512
pad_token_id = 0
bos_token_id = 1
eos_token_id = 2
add_adapter = False
adapter_kernel_size = 3
adapter_stride = 2
num_adapter_layers = 3
output_hidden_size = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 32) —
Vocabulary size of the Data2VecAudio model. Defines the number of different tokens that can be represented
by the input_ids passed when calling Data2VecAudioModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
final_dropout (float, optional, defaults to 0.1) —
The dropout probability for the final projection layer of Data2VecAudioForCTC.
layerdrop (float, optional, defaults to 0.1) —
The LayerDrop probability. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more
details.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-05) —
The epsilon used by the layer normalization layers.
feat_proj_dropout (float, optional, defaults to 0.0) —
The dropout probability for output of the feature encoder.
feat_extract_activation (str, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
conv_dim (Tuple[int] or List[int], optional, defaults to (512, 512, 512, 512, 512, 512, 512)) —
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
feature encoder. The length of conv_dim defines the number of 1D convolutional layers.
conv_stride (Tuple[int] or List[int], optional, defaults to (5, 2, 2, 2, 2, 2, 2)) —
A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length
of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.
conv_kernel (Tuple[int] or List[int], optional, defaults to (10, 3, 3, 3, 3, 2, 2)) —
A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
length of conv_kernel defines the number of convolutional layers and has to match the length of
conv_dim.
conv_bias (bool, optional, defaults to False) —
Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (int, optional, defaults to 5) —
Number of 1D convolutional positional embedding layers (each layer uses a kernel of size
conv_pos_kernel_size).
num_conv_pos_embedding_groups (int, optional, defaults to 16) —
Number of groups of 1D convolutional positional embeddings layer.
mask_time_prob (float, optional, defaults to 0.05) —
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
procedure generates mask_time_prob * len(time_axis) / mask_time_length independent masks over the axis. If
reasoning from the probability of each feature vector to be chosen as the start of the vector span to be
masked, mask_time_prob should be prob_vector_start * mask_time_length. Note that overlap may decrease the
actual percentage of masked vectors.
mask_time_length (int, optional, defaults to 10) —
Length of vector span along the time axis.
mask_time_min_masks (int, optional, defaults to 2) —
The minimum number of masks of length mask_time_length generated along the time axis at each time step,
irrespective of mask_time_prob. Only relevant if mask_time_prob * len(time_axis) / mask_time_length <
mask_time_min_masks.
mask_feature_prob (float, optional, defaults to 0.0) —
Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
masking procedure generates mask_feature_prob * len(feature_axis) / mask_feature_length independent masks over
the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector
span to be masked, mask_feature_prob should be prob_vector_start * mask_feature_length. Note that overlap may
decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_feature_length (int, optional, defaults to 10) —
Length of vector span along the feature axis.
mask_feature_min_masks (int, optional, defaults to 0) —
The minimum number of masks of length mask_feature_length generated along the feature axis, each time
step, irrespective of mask_feature_prob. Only relevant if
mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks.
ctc_loss_reduction (str, optional, defaults to "sum") —
Specifies the reduction to apply to the output of torch.nn.CTCLoss. Only relevant when training an
instance of Data2VecAudioForCTC.
ctc_zero_infinity (bool, optional, defaults to False) —
Whether to zero infinite losses and the associated gradients of torch.nn.CTCLoss. Infinite losses mainly
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of Data2VecAudioForCTC.
use_weighted_layer_sum (bool, optional, defaults to False) —
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
instance of Data2VecAudioForSequenceClassification.
classifier_proj_size (int, optional, defaults to 256) —
Dimensionality of the projection before token mean-pooling for classification.
tdnn_dim (Tuple[int] or List[int], optional, defaults to (512, 512, 512, 512, 1500)) —
A tuple of integers defining the number of output channels of each 1D convolutional layer in the TDNN
module of the XVector model. The length of tdnn_dim defines the number of TDNN layers.
tdnn_kernel (Tuple[int] or List[int], optional, defaults to (5, 3, 3, 1, 1)) —
A tuple of integers defining the kernel size of each 1D convolutional layer in the TDNN module of the
XVector model. The length of tdnn_kernel has to match the length of tdnn_dim.
tdnn_dilation (Tuple[int] or List[int], optional, defaults to (1, 2, 3, 1, 1)) —
A tuple of integers defining the dilation factor of each 1D convolutional layer in the TDNN module of the
XVector model. The length of tdnn_dilation has to match the length of tdnn_dim.
xvector_output_dim (int, optional, defaults to 512) —
Dimensionality of the XVector embedding vectors.
add_adapter (bool, optional, defaults to False) —
Whether a convolutional network should be stacked on top of the Data2VecAudio Encoder. Can be very useful
for warm-starting Data2VecAudio for SpeechEncoderDecoder models.
adapter_kernel_size (int, optional, defaults to 3) —
Kernel size of the convolutional layers in the adapter network. Only relevant if add_adapter is True.
adapter_stride (int, optional, defaults to 2) —
Stride of the convolutional layers in the adapter network. Only relevant if add_adapter is True.
num_adapter_layers (int, optional, defaults to 3) —
Number of convolutional layers that should be used in the adapter network. Only relevant if add_adapter is True.
output_hidden_size (int, optional) —
Dimensionality of the encoder output layer. If not defined, this defaults to hidden_size. Only relevant
if add_adapter is True.
This is the configuration class to store the configuration of a Data2VecAudioModel. It is used to instantiate
a Data2VecAudio model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Data2VecAudio
facebook/data2vec-audio-base-960h architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import Data2VecAudioConfig, Data2VecAudioModel
# Initializing a Data2VecAudio facebook/data2vec-audio-base-960h style configuration
configuration = Data2VecAudioConfig()
# Initializing a model (with random weights) from the facebook/data2vec-audio-base-960h style configuration
model = Data2VecAudioModel(configuration)
# Accessing the model configuration
configuration = model.config
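Any of the documented arguments can also be overridden when the configuration is created. The following is a minimal sketch, not part of the original example; the chosen values are illustrative assumptions rather than recommended settings:
from transformers import Data2VecAudioConfig, Data2VecAudioModel
# Hypothetical overrides: enable the convolutional adapter and change the CTC loss reduction
custom_configuration = Data2VecAudioConfig(add_adapter=True, num_adapter_layers=3, ctc_loss_reduction="mean")
# Initializing a model (with random weights) from the customized configuration
model = Data2VecAudioModel(custom_configuration)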
Data2VecVisionConfig
class transformers.Data2VecVisionConfig
(
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
initializer_range = 0.02
layer_norm_eps = 1e-12
image_size = 224
patch_size = 16
num_channels = 3
use_mask_token = False
use_absolute_position_embeddings = False
use_relative_position_bias = False
use_shared_relative_position_bias = False
layer_scale_init_value = 0.1
drop_path_rate = 0.1
use_mean_pooling = True
out_indices = [3, 5, 7, 11]
pool_scales = [1, 2, 3, 6]
use_auxiliary_head = True
auxiliary_loss_weight = 0.4
auxiliary_channels = 256
auxiliary_num_convs = 1
auxiliary_concat_input = False
semantic_loss_ignore_index = 255
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 16) —
The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) —
The number of input channels.
use_mask_token (bool, optional, defaults to False) —
Whether to use a mask token for masked image modeling.
use_absolute_position_embeddings (bool, optional, defaults to False) —
Whether to use BERT-style absolute position embeddings.
use_relative_position_bias (bool, optional, defaults to False) —
Whether to use T5-style relative position embeddings in the self-attention layers.
use_shared_relative_position_bias (bool, optional, defaults to False) —
Whether to use the same relative position embeddings across all self-attention layers of the Transformer.
layer_scale_init_value (float, optional, defaults to 0.1) —
Scale to use in the self-attention layers. 0.1 for base, 1e-5 for large. Set 0 to disable layer scale.
drop_path_rate (float, optional, defaults to 0.1) —
Stochastic depth rate per sample (when applied in the main path of residual layers).
use_mean_pooling (bool, optional, defaults to True) —
Whether to mean pool the final hidden states of the patches instead of using the final hidden state of the
CLS token, before applying the classification head.
out_indices (List[int], optional, defaults to [3, 5, 7, 11]) —
Indices of the feature maps to use for semantic segmentation.
pool_scales (Tuple[int], optional, defaults to [1, 2, 3, 6]) —
Pooling scales used in Pooling Pyramid Module applied on the last feature map.
use_auxiliary_head (bool, optional, defaults to True) —
Whether to use an auxiliary head during training.
auxiliary_loss_weight (float, optional, defaults to 0.4) —
Weight of the cross-entropy loss of the auxiliary head.
auxiliary_channels (int, optional, defaults to 256) —
Number of channels to use in the auxiliary head.
auxiliary_num_convs (int, optional, defaults to 1) —
Number of convolutional layers to use in the auxiliary head.
auxiliary_concat_input (bool, optional, defaults to False) —
Whether to concatenate the output of the auxiliary head with the input before the classification layer.
semantic_loss_ignore_index (int, optional, defaults to 255) —
The index that is ignored by the loss function of the semantic segmentation model.
This is the configuration class to store the configuration of a Data2VecVisionModel. It is used to instantiate
a Data2VecVision model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Data2VecVision
facebook/data2vec-vision-base architecture.
Example:
from transformers import Data2VecVisionConfig, Data2VecVisionModel
# Initializing a Data2VecVision data2vec_vision-base-patch16-224-in22k style configuration
configuration = Data2VecVisionConfig()
# Initializing a model (with random weights) from the data2vec_vision-base-patch16-224-in22k style configuration
model = Data2VecVisionModel(configuration)
# Accessing the model configuration
configuration = model.config
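The same pattern applies here: the documented arguments can be overridden at construction time. A minimal sketch, assuming a masked-image-modeling setup at a larger input resolution (values chosen only for illustration):
from transformers import Data2VecVisionConfig, Data2VecVisionModel
# Hypothetical overrides: larger input resolution and a mask token for masked image modeling
custom_configuration = Data2VecVisionConfig(image_size=384, use_mask_token=True)
# Initializing a model (with random weights) from the customized configuration
model = Data2VecVisionModel(custom_configuration)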
Data2VecAudioModel
class transformers.Data2VecAudioModel
(
config: Data2VecAudioConfig
)
Parameters
config (Data2VecAudioConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Data2VecAudio Model transformer outputting raw hidden-states without any specific head on top.
Data2VecAudio was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
mask_time_indices: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Wav2Vec2BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install
soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
data2vec-audio-base, attention_mask should
not be passed to avoid degraded performance when doing batched inference. For such models
input_values should simply be padded with 0 and passed without attention_mask. Be aware that these
models also yield slightly different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Wav2Vec2BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Wav2Vec2BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Data2VecAudioConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
extract_features (torch.FloatTensor of shape (batch_size, sequence_length, conv_dim[-1])) — Sequence of extracted feature vectors of the last convolutional layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Data2VecAudioModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, Data2VecAudioModel
import torch
from datasets import load_dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("facebook/data2vec-audio-base-960h")
model = Data2VecAudioModel.from_pretrained("facebook/data2vec-audio-base-960h")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 292, 768]
Data2VecAudioForAudioFrameClassification
class transformers.Data2VecAudioForAudioFrameClassification
(
config
)
Parameters
config (Data2VecAudioConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Data2VecAudio Model with a frame classification head on top for tasks like Speaker Diarization.
Data2VecAudio was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install
soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
data2vec-audio-base, attention_mask should
not be passed to avoid degraded performance when doing batched inference. For such models
input_values should simply be padded with 0 and passed without attention_mask. Be aware that these
models also yield slightly different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Data2VecAudioConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Data2VecAudioForAudioFrameClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, Data2VecAudioForAudioFrameClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-audio-base-960h")
model = Data2VecAudioForAudioFrameClassification.from_pretrained("facebook/data2vec-audio-base-960h")
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt", sampling_rate=sampling_rate)
with torch.no_grad():
... logits = model(**inputs).logits
probabilities = torch.sigmoid(logits[0])
# labels is a one-hot array of shape (num_frames, num_speakers)
labels = (probabilities > 0.5).long()
Data2VecAudioForCTC
class transformers.Data2VecAudioForCTC
(
config
)
Parameters
config (Data2VecAudioConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Data2VecAudio Model with a language modeling head on top for Connectionist Temporal Classification (CTC).
Data2VecAudio was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install
soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
data2vec-audio-base, attention_mask should
not be passed to avoid degraded performance when doing batched inference. For such models
input_values should simply be padded with 0 and passed without attention_mask. Be aware that these
models also yield slightly different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, target_length), optional) —
Labels for connectionist temporal classification. Note that target_length has to be smaller or equal to
the sequence length of the output logits. Indices are selected in [-100, 0, ..., config.vocab_size - 1].
All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Data2VecAudioConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Data2VecAudioForCTC forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, Data2VecAudioForCTC
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("facebook/data2vec-audio-base-960h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-960h")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe speech
transcription = processor.batch_decode(predicted_ids)
transcription[0]
'MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'
inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids
# compute loss
loss = model(**inputs).loss
round(loss.item(), 2)
66.95
Data2VecAudioForSequenceClassification
class transformers.Data2VecAudioForSequenceClassification
(
config
)
Parameters
config (Data2VecAudioConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Data2VecAudio Model with a sequence classification head on top (a linear layer over the pooled output) for tasks
like SUPERB Keyword Spotting.
Data2VecAudio was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install
soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
data2vec-audio-base, attention_mask should
not be passed to avoid degraded performance when doing batched inference. For such models
input_values should simply be padded with 0 and passed without attention_mask. Be aware that these
models also yield slightly different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Data2VecAudioConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Data2VecAudioForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, Data2VecAudioForSequenceClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-audio-base-960h")
model = Data2VecAudioForSequenceClassification.from_pretrained("facebook/data2vec-audio-base-960h")
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.argmax(logits, dim=-1).item()
predicted_label = model.config.id2label[predicted_class_ids]
# compute loss - target_label is e.g. "down"
target_label = model.config.id2label[0]
inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
loss = model(**inputs).loss
Data2VecAudioForXVector
class transformers.Data2VecAudioForXVector
(
config
)
Parameters
config (Data2VecAudioConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Data2VecAudio Model with an XVector feature extraction head on top for tasks like Speaker Verification.
Data2VecAudio was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.XVectorOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install
soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
data2vec-audio-base, attention_mask should
not be passed to avoid degraded performance when doing batched inference. For such models
input_values should simply be padded with 0 and passed without attention_mask. Be aware that these
models also yield slightly different results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.XVectorOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.XVectorOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Data2VecAudioConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, config.xvector_output_dim)) — Classification hidden states before AMSoftmax.
embeddings (torch.FloatTensor of shape (batch_size, config.xvector_output_dim)) — Utterance embeddings used for vector similarity-based retrieval.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Data2VecAudioForXVector forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, Data2VecAudioForXVector
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-audio-base-960h")
model = Data2VecAudioForXVector.from_pretrained("facebook/data2vec-audio-base-960h")
# audio file is decoded on the fly
inputs = feature_extractor(
... [d["array"] for d in dataset[:2]["audio"]], sampling_rate=sampling_rate, return_tensors="pt", padding=True
... )
with torch.no_grad():
... embeddings = model(**inputs).embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()
# the resulting embeddings can be used for cosine similarity-based retrieval
cosine_sim = torch.nn.CosineSimilarity(dim=-1)
similarity = cosine_sim(embeddings[0], embeddings[1])
threshold = 0.7 # the optimal threshold is dataset-dependent
if similarity < threshold:
... print("Speakers are not the same!")
Data2VecTextModel
class transformers.Data2VecTextModel
(
config
add_pooling_layer = True
)
Parameters
config (Data2VecTextConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Data2VecText Model transformer outputting raw hidden-states without any specific head on top.
Data2VecText was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention Is
All You Need (https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder, the model needs to be initialized with the is_decoder argument of the configuration set
to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and
add_cross_attention set to True; an encoder_hidden_states is then expected as an input to the forward pass, as
illustrated in the sketch below.
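A minimal sketch of the decoder setup described above. The randomly generated encoder_hidden_states tensor (here with an arbitrary length of 7) is only an assumption used to illustrate the expected input shape, and the newly added cross-attention weights are randomly initialized rather than pretrained:
import torch
from transformers import AutoTokenizer, Data2VecTextConfig, Data2VecTextModel
config = Data2VecTextConfig.from_pretrained("facebook/data2vec-text-base")
config.is_decoder = True  # behave as a decoder
config.add_cross_attention = True  # add cross-attention layers for a Seq2Seq setup
tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
model = Data2VecTextModel.from_pretrained("facebook/data2vec-text-base", config=config)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
# Stand-in encoder states, only to show that encoder_hidden_states is expected as an input
encoder_hidden_states = torch.randn(1, 7, config.hidden_size)
outputs = model(**inputs, encoder_hidden_states=encoder_hidden_states)
last_hidden_states = outputs.last_hidden_state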
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Data2VecTextConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The Data2VecTextModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, Data2VecTextModel
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
model = Data2VecTextModel.from_pretrained("facebook/data2vec-text-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
Data2VecTextForCausalLM
class transformers.Data2VecTextForCausalLM
(
config
)
Parameters
config (Data2VecTextConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Data2VecText Model with a language modeling head on top for CLM fine-tuning.
Data2VecText was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are
ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Data2VecTextConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross-attention weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if the model is used in an encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (keys and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The Data2VecTextForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, Data2VecTextForCausalLM, Data2VecTextConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
config = Data2VecTextConfig.from_pretrained("facebook/data2vec-text-base")
config.is_decoder = True
model = Data2VecTextForCausalLM.from_pretrained("facebook/data2vec-text-base", config=config)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
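The past_key_values and use_cache arguments described above can be exercised directly. The following is a minimal sketch (not part of the original example) of one step of cached decoding: a first forward pass returns the cache, and the next pass only needs the newly generated token.
# Sketch: one step of cached decoding with the model created above.
outputs = model(**inputs, use_cache=True)
past_key_values = outputs.past_key_values
next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy choice, shape (batch_size, 1)
next_outputs = model(input_ids=next_token, past_key_values=past_key_values, use_cache=True)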
Data2VecTextForMaskedLM
class transformers.Data2VecTextForMaskedLM
( config )
Parameters
config (Data2VecTextConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Data2VecText Model with a language modeling head on top.
Data2VecText was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for tokens with labels in [0, ..., config.vocab_size].
kwargs (Dict[str, any], optional, defaults to {}) —
Used to hide legacy arguments that have been deprecated.
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Data2VecTextConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Data2VecTextForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, Data2VecTextForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
model = Data2VecTextForMaskedLM.from_pretrained("facebook/data2vec-text-base")
inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of <mask>
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-<mask> tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
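As a small follow-up (a sketch, not part of the original docstring), the predicted id can be decoded back to text and the MLM loss read from the output:
# Decode the highest-scoring token for the masked position and read the MLM loss.
tokenizer.decode(predicted_token_id)
round(outputs.loss.item(), 2)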
Data2VecTextForSequenceClassification
class transformers.Data2VecTextForSequenceClassification
( config )
Parameters
config (Data2VecTextConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Data2VecText Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
Data2VecText was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Data2VecTextConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Data2VecTextForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, Data2VecTextForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
model = Data2VecTextForSequenceClassification.from_pretrained("facebook/data2vec-text-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = Data2VecTextForSequenceClassification.from_pretrained("facebook/data2vec-text-base", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
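If needed, the predicted class index can be mapped back to a label name through the model's id2label mapping (a sketch; the base checkpoint only carries generic default labels such as "LABEL_0", since it was not fine-tuned for classification):
# Map the predicted index to its label name.
model.config.id2label[predicted_class_id]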
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, Data2VecTextForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
model = Data2VecTextForSequenceClassification.from_pretrained("facebook/data2vec-text-base", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = Data2VecTextForSequenceClassification.from_pretrained(
... "facebook/data2vec-text-base", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
Data2VecTextForMultipleChoice
class transformers.Data2VecTextForMultipleChoice
( config )
Parameters
config (Data2VecTextConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Data2VecText Model with a multiple choice classification head on top (a linear layer on top of the pooled output
and a softmax) e.g. for RocStories/SWAG tasks.
Data2VecText was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Data2VecTextConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Data2VecTextForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, Data2VecTextForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
model = Data2VecTextForMultipleChoice.from_pretrained("facebook/data2vec-text-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
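As a short sketch on top of the example, the index of the highest-scoring choice selects between choice0 and choice1:
predicted_choice = logits.argmax(dim=-1).item()  # 0 -> choice0, 1 -> choice1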
Data2VecTextForTokenClassification
class transformers.Data2VecTextForTokenClassification
( config )
Parameters
config (Data2VecTextConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Data2VecText Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
for Named-Entity-Recognition (NER) tasks.
Data2VecText was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Data2VecTextConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Data2VecTextForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, Data2VecTextForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
model = Data2VecTextForTokenClassification.from_pretrained("facebook/data2vec-text-base")
inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
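To make the token-versus-word distinction from the comments above concrete, here is a small sketch (not part of the original example) that pairs each (sub)token with its predicted class:
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
list(zip(tokens, predicted_tokens_classes))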
Data2VecTextForQuestionAnswering
class transformers.Data2VecTextForQuestionAnswering
( config )
Parameters
config (Data2VecTextConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Data2VecText Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
Data2VecText was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Data2VecTextConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Data2VecTextForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, Data2VecTextForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
model = Data2VecTextForQuestionAnswering.from_pretrained("facebook/data2vec-text-base")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
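As a brief follow-up sketch, the predicted span can be decoded back into text:
tokenizer.decode(predict_answer_tokens)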
Data2VecVisionModel
class transformers.Data2VecVisionModel
( config: Data2VecVisionConfig, add_pooling_layer: bool = False )
Parameters
config (Data2VecVisionConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Data2VecVision Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
bool_masked_pos: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.data2vec.modeling_data2vec_vision.Data2VecVisionModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
BeitImageProcessor.__call__() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches), optional) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.models.data2vec.modeling_data2vec_vision.Data2VecVisionModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.models.data2vec.modeling_data2vec_vision.Data2VecVisionModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Data2VecVisionConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Average of the last layer hidden states of the patch tokens (excluding the [CLS] token) if
config.use_mean_pooling is set to True. If set to False, then the final hidden state of the [CLS] token
will be returned.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Data2VecVisionModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, Data2VecVisionModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base")
model = Data2VecVisionModel.from_pretrained("facebook/data2vec-vision-base")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 197, 768]
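The 197 positions correspond to one [CLS] token followed by the patch embeddings (assuming the default 224x224 resolution with 16x16 patches, i.e. 196 patches). A small sketch separating the two:
cls_state = last_hidden_states[:, 0]      # (1, 768) [CLS] representation
patch_states = last_hidden_states[:, 1:]  # (1, 196, 768) per-patch representations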
Data2VecVisionForImageClassification
class transformers.Data2VecVisionForImageClassification
( config: Data2VecVisionConfig )
Parameters
config (Data2VecVisionConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Data2VecVision Model transformer with an image classification head on top (a linear layer on top of the average of
the final hidden states of the patch tokens) e.g. for ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
BeitImageProcessor.__call__() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Data2VecVisionConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states
(also called feature maps) of the model at the output of each stage.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Data2VecVisionForImageClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, Data2VecVisionForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base-ft1k")
model = Data2VecVisionForImageClassification.from_pretrained("facebook/data2vec-vision-base-ft1k")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
remote control, remote
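A hedged extension of the example: instead of only the argmax, the five highest-scoring ImageNet classes can be inspected with torch.topk:
top5 = torch.topk(logits, k=5, dim=-1)
[model.config.id2label[i.item()] for i in top5.indices[0]]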
Data2VecVisionForSemanticSegmentation
class transformers.Data2VecVisionForSemanticSegmentation
( config: Data2VecVisionConfig )
Parameters
config (Data2VecVisionConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Data2VecVision Model transformer with a semantic segmentation head on top e.g. for ADE20k, CityScapes.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
BeitImageProcessor.__call__() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, height, width), optional) —
Ground truth semantic segmentation maps for computing the loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SemanticSegmenterOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Data2VecVisionConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel.
The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is
to avoid doing two interpolations and losing some quality when a user needs to resize the logits to the
original image size as post-processing. You should always check your logits shape and resize as needed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, patch_size, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Data2VecVisionForSemanticSegmentation forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, Data2VecVisionForSemanticSegmentation
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base")
model = Data2VecVisionForSemanticSegmentation.from_pretrained("facebook/data2vec-vision-base")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# logits are of shape (batch_size, num_labels, height, width)
logits = outputs.logits
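As the return description notes, the logits are smaller than the input image and should be resized before taking a per-pixel argmax. A minimal sketch of that post-processing step (assuming the PIL image loaded above):
import torch

# Upsample the logits to the original (height, width), then take the per-pixel argmax.
upsampled_logits = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation_map = upsampled_logits.argmax(dim=1)[0]  # (height, width) class index per pixel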
TFData2VecVisionModel
class transformers.TFData2VecVisionModel
( *args, **kwargs )
Parameters
config (Data2VecVisionConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Data2VecVision Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with pixel_values only and nothing else: model(pixel_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([pixel_values, attention_mask]) or model([pixel_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
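A minimal sketch of the call styles described above, assuming model is a TFData2VecVisionModel and pixel_values is a tf.Tensor of preprocessed pixel values:
outputs = model(pixel_values=pixel_values)       # keyword arguments
outputs = model(pixel_values)                    # a single tensor in the first positional argument
outputs = model({"pixel_values": pixel_values})  # a dict in the first positional argument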
call
(
pixel_values: TFModelInputType | None = None
bool_masked_pos: tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.models.data2vec.modeling_tf_data2vec_vision.TFData2VecVisionModelOutputWithPooling or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
BeitImageProcessor.__call__() for details.
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used
in eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
bool_masked_pos (tf.Tensor of shape (batch_size, num_patches), optional) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.models.data2vec.modeling_tf_data2vec_vision.TFData2VecVisionModelOutputWithPooling or tuple(tf.Tensor)
A transformers.models.data2vec.modeling_tf_data2vec_vision.TFData2VecVisionModelOutputWithPooling or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (Data2VecVisionConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Average of the last layer hidden states of the patch tokens (excluding the [CLS] token) if
config.use_mean_pooling is set to True. If set to False, then the final hidden state of the [CLS] token
will be returned.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFData2VecVisionModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, TFData2VecVisionModel
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base")
model = TFData2VecVisionModel.from_pretrained("facebook/data2vec-vision-base")
inputs = image_processor(image, return_tensors="tf")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 197, 768]
TFData2VecVisionForImageClassification
class transformers.TFData2VecVisionForImageClassification
<
source
>
(
*args
**kwargs
)
Parameters
config (Data2VecVisionConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Data2VecVision Model transformer with an image classification head on top (a linear layer on top of the average of
the final hidden states of the patch tokens) e.g. for ImageNet.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with pixel_values only and nothing else: model(pixel_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([pixel_values, attention_mask]) or model([pixel_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don't need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
<
source
>
(
pixel_values: TFModelInputType | None = None
head_mask: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
BeitImageProcessor.call() for details.
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used
in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (Data2VecVisionConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFData2VecVisionForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, TFData2VecVisionForImageClassification
import tensorflow as tf
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base-ft1k")
model = TFData2VecVisionForImageClassification.from_pretrained("facebook/data2vec-vision-base-ft1k")
inputs = image_processor(image, return_tensors="tf")
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = int(tf.math.argmax(logits, axis=-1))
print(model.config.id2label[predicted_label])
remote control, remote
TFData2VecVisionForSemanticSegmentation
class transformers.TFData2VecVisionForSemanticSegmentation
<
source
>
(
*args
**kwargs
)
Parameters
config (Data2VecVisionConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Data2VecVision Model transformer with a semantic segmentation head on top e.g. for ADE20k, CityScapes.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with pixel_values only and nothing else: model(pixel_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([pixel_values, attention_mask]) or model([pixel_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don't need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
<
source
>
(
pixel_values: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
labels: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
)
→
transformers.modeling_tf_outputs.TFSemanticSegmenterOutput or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
BeitImageProcessor.call() for details.
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used
in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, height, width), optional) —
Ground truth semantic segmentation maps for computing the loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSemanticSegmenterOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSemanticSegmenterOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (Data2VecVisionConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel.
The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is
to avoid doing two interpolations and losing quality when a user needs to resize the logits to the
original image size as post-processing. You should always check your logits shape and resize as needed (a post-processing sketch follows the example below).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for
the output of each layer) of shape (batch_size, patch_size, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFData2VecVisionForSemanticSegmentation forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, TFData2VecVisionForSemanticSegmentation
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base")
model = TFData2VecVisionForSemanticSegmentation.from_pretrained("facebook/data2vec-vision-base")
inputs = image_processor(images=image, return_tensors="tf")
outputs = model(**inputs)
# logits are of shape (batch_size, num_labels, height, width)
logits = outputs.logits
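To obtain a segmentation map at the original image resolution, one possible post-processing sketch (not part of the model API; note that tf.image.resize expects channels-last inputs and a (height, width) size) is:
import tensorflow as tf
logits_hwc = tf.transpose(logits, [0, 2, 3, 1])  # (batch_size, height, width, num_labels)
upsampled = tf.image.resize(logits_hwc, size=image.size[::-1], method="bilinear")  # PIL size is (width, height)
segmentation_map = tf.argmax(upsampled, axis=-1)[0]  # per-pixel class indices at the original resolution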
DeBERTa-v2
Overview
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. It is based on Google's
BERT model released in 2018 and Facebook's RoBERTa model released in 2019.
It builds on RoBERTa with disentangled attention and an enhanced mask decoder, trained with half of the data used in
RoBERTa.
The abstract from the paper is the following:
Recent progress in pre-trained neural language models has significantly improved the performance of many natural
language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with
disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the
disentangled attention mechanism, where each word is represented using two vectors that encode its content and
position, respectively, and the attention weights among words are computed using disentangled matrices on their
contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to
predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency
of model pretraining and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of
the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9%
(90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and
pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa.
The following information is visible directly on the original implementation
repository. DeBERTa v2 is the second version of the DeBERTa model. It includes
the 1.5B model used for the SuperGLUE single-model submission, which achieved a score of 89.9 versus the human baseline of 89.8. You can
find more details about this submission in the authors'
blog.
New in v2:
Vocabulary In v2 the tokenizer is changed to use a new vocabulary of size 128K built from the training data.
Instead of a GPT2-based tokenizer, the tokenizer is now a
SentencePiece-based tokenizer.
nGiE (nGram Induced Input Encoding) The DeBERTa-v2 model uses an additional convolution layer alongside the first
transformer layer to better learn the local dependency of input tokens.
Sharing position projection matrix with content projection matrix in attention layer Based on previous
experiments, this can save parameters without affecting the performance.
Apply bucket to encode relative positions The DeBERTa-v2 model uses a log bucket to encode relative positions,
similar to T5 (see the configuration sketch after this list).
900M model & 1.5B model Two additional model sizes are available: 900M and 1.5B, which significantly improves the
performance of downstream tasks.
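As a rough illustration of how these ideas surface in the configuration documented below, the following sketch enables relative-position attention; the specific values are examples, not those of a released checkpoint:
from transformers import DebertaV2Config, DebertaV2Model
# Illustrative settings only; released checkpoints ship their own config.json
config = DebertaV2Config(relative_attention=True, max_relative_positions=512, pos_att_type=["p2c", "c2p"])
model = DebertaV2Model(config)  # randomly initialized model with this architecture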
This model was contributed by DeBERTa. The TF 2.0 implementation of this model was
contributed by kamalkraj. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
DebertaV2Config
class transformers.DebertaV2Config
<
source
>
(
vocab_size = 128100
hidden_size = 1536
num_hidden_layers = 24
num_attention_heads = 24
intermediate_size = 6144
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 0
initializer_range = 0.02
layer_norm_eps = 1e-07
relative_attention = False
max_relative_positions = -1
pad_token_id = 0
position_biased_input = True
pos_att_type = None
pooler_dropout = 0
pooler_hidden_act = 'gelu'
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 128100) —
Vocabulary size of the DeBERTa-v2 model. Defines the number of different tokens that can be represented by
the inputs_ids passed when calling DebertaV2Model.
hidden_size (int, optional, defaults to 1536) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 24) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 24) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 6144) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu", "tanh", "gelu_fast", "mish", "linear", "sigmoid" and "gelu_new"
are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 0) —
The vocabulary size of the token_type_ids passed when calling DebertaModel or TFDebertaModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-7) —
The epsilon used by the layer normalization layers.
relative_attention (bool, optional, defaults to False) —
Whether to use relative position encoding.
max_relative_positions (int, optional, defaults to -1) —
The range of relative positions [-max_position_embeddings, max_position_embeddings]. Use the same value
as max_position_embeddings.
pad_token_id (int, optional, defaults to 0) —
The value used to pad input_ids.
position_biased_input (bool, optional, defaults to True) —
Whether to add absolute position embeddings to the content embeddings.
pos_att_type (List[str], optional) —
The type of relative position attention. It can be a combination of "p2c" and "c2p", e.g. ["p2c"] or
["p2c", "c2p"].
This is the configuration class to store the configuration of a DebertaV2Model. It is used to instantiate a
DeBERTa-v2 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the DeBERTa
microsoft/deberta-v2-xlarge architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import DebertaV2Config, DebertaV2Model
# Initializing a DeBERTa-v2 microsoft/deberta-v2-xlarge style configuration
configuration = DebertaV2Config()
# Initializing a model (with random weights) from the microsoft/deberta-v2-xlarge style configuration
model = DebertaV2Model(configuration)
# Accessing the model configuration
configuration = model.config
DebertaV2Tokenizer
class transformers.DebertaV2Tokenizer
<
source
>
(
vocab_file
do_lower_case = False
split_by_punct = False
bos_token = '[CLS]'
eos_token = '[SEP]'
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
do_lower_case (bool, optional, defaults to False) —
Whether or not to lowercase the input when tokenizing.
bos_token (string, optional, defaults to "[CLS]") —
The beginning of sequence token that was used during pre-training. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (string, optional, defaults to "[SEP]") —
The end of sequence token. When building a sequence using special tokens, this is not the token that is
used for the end of sequence. The token used is the sep_token.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method (see the usage sketch after the class description below). The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from the all hypothesis (lattice)
using forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Constructs a DeBERTa-v2 tokenizer. Based on SentencePiece.
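For example, subword regularization can be switched on through sp_model_kwargs; this is only a sketch and the sampling values shown are illustrative:
from transformers import DebertaV2Tokenizer
# enable_sampling makes tokenization stochastic, which is typically only useful during training
tokenizer = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v2-xlarge", sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1})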
build_inputs_with_special_tokens
<
source
>
(
token_ids_0
token_ids_1 = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A DeBERTa sequence has the following format (a sketch follows the list below):
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
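A minimal sketch of both formats, assuming the microsoft/deberta-v2-xlarge tokenizer files are available:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
ids_a = tokenizer.encode("Hello", add_special_tokens=False)
ids_b = tokenizer.encode("World", add_special_tokens=False)
single = tokenizer.build_inputs_with_special_tokens(ids_a)       # [CLS] Hello [SEP]
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)  # [CLS] Hello [SEP] World [SEP]
print(tokenizer.decode(single))
print(tokenizer.decode(pair))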
get_special_tokens_mask
<
source
>
(
token_ids_0
token_ids_1 = None
already_has_special_tokens = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model or encode_plus methods.
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0
token_ids_1 = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A DeBERTa
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s). A short usage sketch follows.
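For instance, the mask above could be produced as follows (a sketch reusing the same checkpoint as the other examples):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
ids_a = tokenizer.encode("first sequence", add_special_tokens=False)
ids_b = tokenizer.encode("second sequence", add_special_tokens=False)
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
# 0s cover [CLS] + the first sequence + [SEP]; 1s cover the second sequence + the final [SEP]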
save_vocabulary
<
source
>
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
DebertaV2TokenizerFast
class transformers.DebertaV2TokenizerFast
<
source
>
(
vocab_file = None
tokenizer_file = None
do_lower_case = False
split_by_punct = False
bos_token = '[CLS]'
eos_token = '[SEP]'
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
do_lower_case (bool, optional, defaults to False) —
Whether or not to lowercase the input when tokenizing.
bos_token (string, optional, defaults to "[CLS]") —
The beginning of sequence token that was used during pre-training. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (string, optional, defaults to "[SEP]") —
The end of sequence token. When building a sequence using special tokens, this is not the token that is
used for the end of sequence. The token used is the sep_token.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from the all hypothesis (lattice)
using forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Constructs a DeBERTa-v2 fast tokenizer. Based on SentencePiece.
build_inputs_with_special_tokens
<
source
>
(
token_ids_0
token_ids_1 = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A DeBERTa sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0
token_ids_1 = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A DeBERTa
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
DebertaV2Model
class transformers.DebertaV2Model
<
source
>
(
config
)
Parameters
config (DebertaV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare DeBERTa Model transformer outputting raw hidden-states without any specific head on top.
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, disentangled attention and an enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DebertaV2Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DebertaV2Model forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, DebertaV2Model
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
model = DebertaV2Model.from_pretrained("microsoft/deberta-v2-xlarge")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
DebertaV2PreTrainedModel
class transformers.DebertaV2PreTrainedModel
<
source
>
(
config: PretrainedConfig
*inputs
**kwargs
)
An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
models.
_forward_unimplemented
<
source
>
(
*input: typing.Any
)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Although the recipe for forward pass needs to be defined within
this function, one should call the Module instance afterwards
instead of this since the former takes care of running the
registered hooks while the latter silently ignores them.
DebertaV2ForMaskedLM
class transformers.DebertaV2ForMaskedLM
<
source
>
(
config
)
Parameters
config (DebertaV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeBERTa Model with a language modeling head on top.
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, disentangled attention and an enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DebertaV2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DebertaV2ForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, DebertaV2ForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
model = DebertaV2ForMaskedLM.from_pretrained("microsoft/deberta-v2-xlarge")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
DebertaV2ForSequenceClassification
class transformers.DebertaV2ForSequenceClassification
<
source
>
(
config
)
Parameters
config (DebertaV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, disentangled attention and an enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DebertaV2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DebertaV2ForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, DebertaV2ForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
model = DebertaV2ForSequenceClassification.from_pretrained("microsoft/deberta-v2-xlarge")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = DebertaV2ForSequenceClassification.from_pretrained("microsoft/deberta-v2-xlarge", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, DebertaV2ForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
model = DebertaV2ForSequenceClassification.from_pretrained("microsoft/deberta-v2-xlarge", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = DebertaV2ForSequenceClassification.from_pretrained(
... "microsoft/deberta-v2-xlarge", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
DebertaV2ForTokenClassification
class transformers.DebertaV2ForTokenClassification
<
source
>
(
config
)
Parameters
config (DebertaV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, disentangled attention and an enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DebertaV2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DebertaV2ForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, DebertaV2ForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
model = DebertaV2ForTokenClassification.from_pretrained("microsoft/deberta-v2-xlarge")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
DebertaV2ForQuestionAnswering
class transformers.DebertaV2ForQuestionAnswering
<
source
>
(
config
)
Parameters
config (DebertaV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DebertaV2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DebertaV2ForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, DebertaV2ForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
model = DebertaV2ForQuestionAnswering.from_pretrained("microsoft/deberta-v2-xlarge")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
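# Optionally decode the predicted span back to text with the tokenizer (standard tokenizer API; added here for illustration)
tokenizer.decode(predict_answer_tokens)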
# target is "nice puppet"
target_start_index = torch.tensor([2])
target_end_index = torch.tensor([9])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
DebertaV2ForMultipleChoice
class transformers.DebertaV2ForMultipleChoice
<
source
>
(
config
)
Parameters
config (DebertaV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeBERTa Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DebertaV2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DebertaV2ForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, DebertaV2ForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
model = DebertaV2ForMultipleChoice.from_pretrained("microsoft/deberta-v2-xlarge")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
TFDebertaV2Model
class transformers.TFDebertaV2Model
<
source
>
(
*args
**kwargs
)
Parameters
config (DebertaV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare DeBERTa Model transformer outputting raw hidden-states without any specific head on top.
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
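As a minimal sketch of the three formats above (assuming the same kamalkraj/deberta-v2-xlarge checkpoint used in the example further down; any TF DeBERTa-v2 checkpoint works the same way):
from transformers import AutoTokenizer, TFDebertaV2Model
tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-v2-xlarge")
model = TFDebertaV2Model.from_pretrained("kamalkraj/deberta-v2-xlarge")
enc = tokenizer("Hello, my dog is cute", return_tensors="tf")
# 1. a single tensor with input_ids only
outputs = model(enc["input_ids"])
# 2. a list of tensors, in the order given in the docstring
outputs = model([enc["input_ids"], enc["attention_mask"]])
# 3. a dictionary mapping input names to tensors
outputs = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})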
call
<
source
>
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DebertaV2Config) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDebertaV2Model forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFDebertaV2Model
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-v2-xlarge")
model = TFDebertaV2Model.from_pretrained("kamalkraj/deberta-v2-xlarge")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
TFDebertaV2PreTrainedModel
class transformers.TFDebertaV2PreTrainedModel
<
source
>
(
*args
**kwargs
)
An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
models.
call
<
source
>
(
inputs
training = None
mask = None
)
Calls the model on new inputs and returns the outputs as tensors.
In this case call() just reapplies
all ops in the graph to the new inputs
(e.g. build a new computational graph from the provided inputs).
Note: This method should not be called directly. It is only meant to be
overridden when subclassing tf.keras.Model.
To call a model on an input, always use the __call__() method,
i.e. model(inputs), which relies on the underlying call() method.
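In practice this simply means invoking the model object directly, for example (a short sketch reusing the checkpoint from the example above):
from transformers import AutoTokenizer, TFDebertaV2Model
tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-v2-xlarge")
model = TFDebertaV2Model.from_pretrained("kamalkraj/deberta-v2-xlarge")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
# Invoke the model via __call__ (i.e. model(...)) so Keras runs its pre/post-processing hooks;
# do not call model.call(inputs) directly.
outputs = model(inputs)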
TFDebertaV2ForMaskedLM
class transformers.TFDebertaV2ForMaskedLM
<
source
>
(
*args
**kwargs
)
Parameters
config (DebertaV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeBERTa Model with a language modeling head on top.
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
<
source
>
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DebertaV2Config) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDebertaV2ForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFDebertaV2ForMaskedLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-v2-xlarge")
model = TFDebertaV2ForMaskedLM.from_pretrained("kamalkraj/deberta-v2-xlarge")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf")
logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
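# Optionally map the predicted id back to a token string (standard tokenizer API; added here for illustration)
tokenizer.decode(predicted_token_id)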
labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
# mask labels of non-[MASK] tokens
labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
TFDebertaV2ForSequenceClassification
class transformers.TFDebertaV2ForSequenceClassification
<
source
>
(
*args
**kwargs
)
Parameters
config (DebertaV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
<
source
>
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DebertaV2Config) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDebertaV2ForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFDebertaV2ForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-v2-xlarge")
model = TFDebertaV2ForSequenceClassification.from_pretrained("kamalkraj/deberta-v2-xlarge")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFDebertaV2ForSequenceClassification.from_pretrained("kamalkraj/deberta-v2-xlarge", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
TFDebertaV2ForTokenClassification
class transformers.TFDebertaV2ForTokenClassification
<
source
>
(
*args
**kwargs
)
Parameters
config (DebertaV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
<
source
>
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DebertaV2Config) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDebertaV2ForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFDebertaV2ForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-v2-xlarge")
model = TFDebertaV2ForTokenClassification.from_pretrained("kamalkraj/deberta-v2-xlarge")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
TFDebertaV2ForQuestionAnswering
class transformers.TFDebertaV2ForQuestionAnswering
<
source
>
(
*args
**kwargs
)
Parameters
config (DebertaV2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
<
source
>
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
start_positions: np.ndarray | tf.Tensor | None = None
end_positions: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DebertaV2Config) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDebertaV2ForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFDebertaV2ForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-v2-xlarge")
model = TFDebertaV2ForQuestionAnswering.from_pretrained("kamalkraj/deberta-v2-xlarge")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = tf.constant([14])
target_end_index = tf.constant([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
Blenderbot Small
Note that BlenderbotSmallModel and
BlenderbotSmallForConditionalGeneration are only used in combination with the checkpoint
facebook/blenderbot-90M. Larger Blenderbot checkpoints should
instead be used with BlenderbotModel and
BlenderbotForConditionalGeneration.
Overview
The Blender chatbot model was proposed in Recipes for building an open-domain chatbot by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu,
Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston on 30 Apr 2020.
The abstract of the paper is the following:
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that
scaling neural models in the number of parameters and the size of the data they are trained on gives improved results,
we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of
skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to
their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent
persona. We show that large scale models can learn these skills when given appropriate training data and choice of
generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models
and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn
dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing
failure cases of our models.
Tips:
Blenderbot Small is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather than
the left (see the short generation sketch below).
This model was contributed by patrickvonplaten. The authors’ code can be
found here.
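A minimal generation sketch, assuming the facebook/blenderbot_small-90M checkpoint referenced on this page (exact replies depend on the generation settings):
from transformers import BlenderbotSmallForConditionalGeneration, BlenderbotSmallTokenizer
tokenizer = BlenderbotSmallTokenizer.from_pretrained("facebook/blenderbot_small-90M")
model = BlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M")
# Pad on the right (see the tip above), since the model uses absolute position embeddings
inputs = tokenizer(["My friends are cool but they eat too many carbs."], return_tensors="pt", padding=True)
reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))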
Documentation resources
Causal language modeling task guide
Translation task guide
Summarization task guide
BlenderbotSmallConfig
class transformers.BlenderbotSmallConfig
<
source
>
(
vocab_size = 50265
max_position_embeddings = 512
encoder_layers = 8
encoder_ffn_dim = 2048
encoder_attention_heads = 16
decoder_layers = 8
decoder_ffn_dim = 2048
decoder_attention_heads = 16
encoder_layerdrop = 0.0
decoder_layerdrop = 0.0
use_cache = True
is_encoder_decoder = True
activation_function = 'gelu'
d_model = 512
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
init_std = 0.02
decoder_start_token_id = 1
scale_embedding = False
pad_token_id = 0
bos_token_id = 1
eos_token_id = 2
forced_eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50265) —
Vocabulary size of the BlenderbotSmall model. Defines the number of different tokens that can be
represented by the inputs_ids passed when calling BlenderbotSmallModel or TFBlenderbotSmallModel.
d_model (int, optional, defaults to 512) —
Dimensionality of the layers and the pooler layer.
encoder_layers (int, optional, defaults to 8) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 8) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 2048) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the decoder.
encoder_ffn_dim (int, optional, defaults to 2048) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the encoder.
activation_function (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
scale_embedding (bool, optional, defaults to False) —
Scale embeddings by dividing by sqrt(d_model).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
forced_eos_token_id (int, optional, defaults to 2) —
The id of the token to force as the last generated token when max_length is reached. Usually set to
eos_token_id.
This is the configuration class to store the configuration of a BlenderbotSmallModel. It is used to instantiate
a BlenderbotSmall model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the BlenderbotSmall
facebook/blenderbot_small-90M architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import BlenderbotSmallConfig, BlenderbotSmallModel
# Initializing a BlenderbotSmall facebook/blenderbot_small-90M style configuration
configuration = BlenderbotSmallConfig()
# Initializing a model (with random weights) from the facebook/blenderbot_small-90M style configuration
model = BlenderbotSmallModel(configuration)
# Accessing the model configuration
configuration = model.config
BlenderbotSmallTokenizer
class transformers.BlenderbotSmallTokenizer
(
vocab_file
merges_file
bos_token = '__start__'
eos_token = '__end__'
unk_token = '__unk__'
pad_token = '__null__'
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
merges_file (str) —
Path to the merges file.
bos_token (str, optional, defaults to "__start__") —
The beginning of sentence token.
eos_token (str, optional, defaults to "__end__") —
The end of sentence token.
unk_token (str, optional, defaults to "__unk__") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "__null__") —
The token used for padding, for example when batching sequences of different lengths.
**kwargs —
Additional keyword arguments passed along to PreTrainedTokenizer
Constructs a Blenderbot-90M tokenizer based on BPE (Byte-Pair-Encoding).
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
the superclass for more information regarding methods.
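A minimal usage sketch (assuming the facebook/blenderbot_small-90M checkpoint, which ships this tokenizer's vocabulary and merges files):
from transformers import BlenderbotSmallTokenizer
tokenizer = BlenderbotSmallTokenizer.from_pretrained("facebook/blenderbot_small-90M")
# Encode a single utterance; this tokenizer does not add special tokens around it
encoding = tokenizer("my friends are cool but they eat too many carbs.")
print(encoding["input_ids"])
print(tokenizer.decode(encoding["input_ids"]))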
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) — The first tokenized sequence.
token_ids_1 (List[int], optional) — The second tokenized sequence.
Returns
List[int]
The model input with special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens.
This implementation does not add special tokens and this method should be overridden in a subclass.
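For illustration only (a sketch, not part of the official docstring), the base behaviour simply returns the concatenated ids:
from transformers import BlenderbotSmallTokenizer
tokenizer = BlenderbotSmallTokenizer.from_pretrained("facebook/blenderbot_small-90M")
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))
# No special tokens are inserted, so the result equals the input ids
assert tokenizer.build_inputs_with_special_tokens(ids) == ids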
get_special_tokens_mask
(
token_ids_0: typing.List
token_ids_1: typing.Optional[typing.List] = None
already_has_special_tokens: bool = False
)
→
A list of integers in the range [0, 1]
Parameters
token_ids_0 (List[int]) —
List of ids of the first sequence.
token_ids_1 (List[int], optional) —
List of ids of the second sequence.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
A list of integers in the range [0, 1]
1 for a special token, 0 for a sequence token.
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model or encode_plus methods.
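A small illustrative sketch (same checkpoint assumption as above): with already_has_special_tokens=True the mask flags positions holding special-token ids.
from transformers import BlenderbotSmallTokenizer
tokenizer = BlenderbotSmallTokenizer.from_pretrained("facebook/blenderbot_small-90M")
ids = tokenizer("hello world")["input_ids"]
# Expected to be all zeros here, since encoding adds no special tokens
print(tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True))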
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) — The first tokenized sequence.
token_ids_1 (List[int], optional) — The second tokenized sequence.
Returns
List[int]
The token type ids.
Create the token type IDs corresponding to the sequences passed. What are token type
IDs?
Should be overridden in a subclass if the model has a special way of building those.
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
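save_vocabulary has no example in the original docstring; as an illustrative sketch (the directory name is arbitrary), it writes the vocabulary and merges files to an existing directory and returns their paths:
import os
from transformers import BlenderbotSmallTokenizer
tokenizer = BlenderbotSmallTokenizer.from_pretrained("facebook/blenderbot_small-90M")
os.makedirs("./blenderbot_small_tokenizer", exist_ok=True)  # the target directory must already exist
vocab_path, merges_path = tokenizer.save_vocabulary("./blenderbot_small_tokenizer")
print(vocab_path, merges_path)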
BlenderbotSmallTokenizerFast
class transformers.BlenderbotSmallTokenizerFast
(
vocab_file = None
merges_file = None
unk_token = '<|endoftext|>'
bos_token = '<|endoftext|>'
eos_token = '<|endoftext|>'
add_prefix_space = False
trim_offsets = True
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
Construct a “fast” BlenderbotSmall tokenizer (backed by HuggingFace’s tokenizers library).
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. BlenderbotSmall
does not make use of token type ids, therefore a list of zeros is returned.
BlenderbotSmallModel
class transformers.BlenderbotSmallModel
(
config: BlenderbotSmallConfig
)
Parameters
config (BlenderbotSmallConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare BlenderbotSmall Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Union[typing.Tuple, transformers.modeling_outputs.BaseModelOutput, NoneType] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
BlenderbotSmall uses the bos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BlenderbotSmallConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The BlenderbotSmallModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BlenderbotSmallModel
model = BlenderbotSmallModel.from_pretrained("facebook/blenderbot_small-90M")
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")
inputs = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="pt")
decoder_inputs = tokenizer("Studies show that", return_tensors="pt") # Batch size 1
outputs = model(input_ids=inputs.input_ids, decoder_input_ids=decoder_inputs.input_ids)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 3, 512]
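As a further sketch (continuing the example above; not part of the original snippet), the optional outputs described in the return section can be requested explicitly:
outputs = model(
    input_ids=inputs.input_ids,
    decoder_input_ids=decoder_inputs.input_ids,
    output_hidden_states=True,
    output_attentions=True,
)
# One tensor for the embeddings plus one per encoder layer (8 layers -> 9 tensors)
print(len(outputs.encoder_hidden_states))
# Attention weights of the first encoder layer: (batch_size, num_heads, seq_len, seq_len)
print(outputs.encoder_attentions[0].shape)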
BlenderbotSmallForConditionalGeneration
class transformers.BlenderbotSmallForConditionalGeneration
(
config: BlenderbotSmallConfig
)
Parameters
config (BlenderbotSmallConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The BlenderbotSmall Model with a language modeling head. Can be used for summarization.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Union[typing.Tuple, transformers.modeling_outputs.BaseModelOutput, NoneType] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
BlenderbotSmall uses the bos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BlenderbotSmallConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The BlenderbotSmallForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Conversation example:
from transformers import AutoTokenizer, BlenderbotSmallForConditionalGeneration
mname = "facebook/blenderbot_small-90M"
model = BlenderbotSmallForConditionalGeneration.from_pretrained(mname)
tokenizer = AutoTokenizer.from_pretrained(mname)
UTTERANCE = "My friends are cool but they eat too many carbs."
print("Human: ", UTTERANCE)
Human: My friends are cool but they eat too many carbs.
inputs = tokenizer([UTTERANCE], return_tensors="pt")
reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
Bot: what kind of carbs do they eat? i don't know much about carbs.
REPLY = "I'm not sure"
print("Human: ", REPLY)
Human: I'm not sure
NEXT_UTTERANCE = (
... "My friends are cool but they eat too many carbs.__end__ __start__what kind of carbs do they eat? "
... "i don't know much about carbs__end__ "
... "__start__ I'm not sure."
... )
inputs = tokenizer([NEXT_UTTERANCE], return_tensors="pt")
next_reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0])
Bot: they eat a lot of carbs. carbs are high in fat, protein, and fats.
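Training-style usage is not covered by the conversation example above, so here is a minimal, hypothetical sketch of computing the language-modeling loss by passing labels (the target reply is made up; model and tokenizer are reused from the example):
inputs = tokenizer(["My friends are cool but they eat too many carbs."], return_tensors="pt")
labels = tokenizer(["what kind of carbs do they eat?"], return_tensors="pt").input_ids
# decoder_input_ids are created automatically by shifting the labels to the right
outputs = model(**inputs, labels=labels)
print(outputs.loss, outputs.logits.shape)  # scalar loss, (batch_size, target_len, vocab_size)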
BlenderbotSmallForCausalLM
class transformers.BlenderbotSmallForCausalLM
(
config
)
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
if the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used
in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of
shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of
shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional
tensors are only required when the model is used as a decoder in a Sequence to Sequence model.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the
cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those
that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of
all decoder_input_ids of shape (batch_size, sequence_length).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding
(see past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BlenderbotSmallConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
Example:
from transformers import AutoTokenizer, BlenderbotSmallForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")
model = BlenderbotSmallForCausalLM.from_pretrained(
... "facebook/blenderbot_small-90M", add_cross_attention=False
... )
assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
expected_shape = [1, inputs.input_ids.shape[-1], model.config.vocab_size]
list(logits.shape) == expected_shape
True
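The past_key_values mechanism described above can also be exercised by hand; the following is an illustrative sketch (continuing the example; picking the next token with argmax is arbitrary):
# First pass populates the cache
outputs = model(**inputs, use_cache=True)
past = outputs.past_key_values
next_token = outputs.logits[:, -1:].argmax(-1)
# Subsequent passes only need the newly generated token plus the cache
outputs = model(input_ids=next_token, past_key_values=past, use_cache=True)
print(outputs.logits.shape)  # (batch_size, 1, vocab_size)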
TFBlenderbotSmallModel
class transformers.TFBlenderbotSmallModel
(
*args
**kwargs
)
Parameters
config (BlenderbotSmallConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare BlenderbotSmall Model outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
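For illustration only (a sketch; the tensors are exactly what the tokenizer produces), the three equivalent calling conventions look like this:
from transformers import AutoTokenizer, TFBlenderbotSmallModel
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")
model = TFBlenderbotSmallModel.from_pretrained("facebook/blenderbot_small-90M")
enc = tokenizer("Hello, my dog is cute", return_tensors="tf")
# 1) keyword arguments, like PyTorch
out1 = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])
# 2) a list in the first positional argument, in the order given in the docstring
out2 = model([enc["input_ids"], enc["attention_mask"]])
# 3) a dictionary mapping input names to tensors
out3 = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})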
call
(
input_ids: tf.Tensor | None = None
attention_mask: tf.Tensor | None = None
decoder_input_ids: tf.Tensor | None = None
decoder_attention_mask: tf.Tensor | None = None
decoder_position_ids: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
decoder_head_mask: tf.Tensor | None = None
cross_attn_head_mask: tf.Tensor | None = None
encoder_outputs: Optional[Union[Tuple, TFBaseModelOutput]] = None
past_key_values: List[tf.Tensor] | None = None
inputs_embeds: tf.Tensor | None = None
decoder_inputs_embeds: tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
**kwargs
)
→
transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
BlenderbotSmall uses the bos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
A causal mask that ignores pad tokens will be made by default. It is not recommended to set this for most use cases.
decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor, optional) —
A sequence of hidden-states at the output of the last layer of the encoder, of shape
(batch_size, sequence_length, hidden_size). Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training, and to True during generation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (BlenderbotSmallConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFBlenderbotSmallModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFBlenderbotSmallModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")
model = TFBlenderbotSmallModel.from_pretrained("facebook/blenderbot_small-90M")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
TFBlenderbotSmallForConditionalGeneration
class transformers.TFBlenderbotSmallForConditionalGeneration
(
*args
**kwargs
)
Parameters
config (BlenderbotSmallConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The BlenderbotSmall Model with a language modeling head. Can be used for summarization.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: tf.Tensor | None = None
attention_mask: tf.Tensor | None = None
decoder_input_ids: tf.Tensor | None = None
decoder_attention_mask: tf.Tensor | None = None
decoder_position_ids: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
decoder_head_mask: tf.Tensor | None = None
cross_attn_head_mask: tf.Tensor | None = None
encoder_outputs: Optional[TFBaseModelOutput] = None
past_key_values: List[tf.Tensor] | None = None
inputs_embeds: tf.Tensor | None = None
decoder_inputs_embeds: tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
BlenderbotSmall uses the bos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
A causal mask that ignores pad tokens will be made by default. It is not recommended to set this for most use cases.
decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor, optional) —
A sequence of hidden-states at the output of the last layer of the encoder, of shape
(batch_size, sequence_length, hidden_size). Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training, and to True during generation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
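For illustration, here is a minimal, hedged sketch of building such labels from tokenized target text, replacing pad positions with -100 so the loss skips them (the target sentence and padding length are purely hypothetical):
import tensorflow as tf
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")
# Hypothetical target sentence, padded to a fixed length for batching.
targets = tokenizer(["my friends eat too many carbs."], return_tensors="tf", padding="max_length", max_length=16)
labels = targets["input_ids"]
# Replace pad-token positions with -100 so they are ignored by the loss.
labels = tf.where(labels == tokenizer.pad_token_id, tf.fill(tf.shape(labels), -100), labels)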
Returns
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (BlenderbotSmallConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFBlenderbotSmallForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Conversation example:
from transformers import AutoTokenizer, TFBlenderbotSmallForConditionalGeneration
mname = "facebook/blenderbot_small-90M"
model = TFBlenderbotSmallForConditionalGeneration.from_pretrained(mname)
tokenizer = AutoTokenizer.from_pretrained(mname)
UTTERANCE = "My friends are cool but they eat too many carbs."
print("Human: ", UTTERANCE)
inputs = tokenizer([UTTERANCE], return_tensors="tf")
reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
what kind of carbs do they eat? i don't know much about carbs.
REPLY = "I'm not sure"
print("Human: ", REPLY)
NEXT_UTTERANCE = (
... "My friends are cool but they eat too many carbs.</s> "
... "<s>what kind of carbs do they eat? i don't know much about carbs.</s> "
... "<s>I'm not sure."
... )
inputs = tokenizer([NEXT_UTTERANCE], return_tensors="tf")
inputs.pop("token_type_ids")
next_reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0])
FlaxBlenderbotSmallModel
class transformers.FlaxBlenderbotSmallModel
(
config: BlenderbotSmallConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (BlenderbotSmallConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified, all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
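For example, a minimal sketch of half-precision inference (assuming a TPU or a GPU with bfloat16 support; the checkpoint is the one used throughout this page):
import jax.numpy as jnp
from transformers import FlaxBlenderbotSmallModel

# Run the computation in bfloat16; the parameters stay in float32 unless converted.
model = FlaxBlenderbotSmallModel.from_pretrained("facebook/blenderbot_small-90M", dtype=jnp.bfloat16)
# Optionally also cast the parameters themselves (see to_bf16() above).
model.params = model.to_bf16(model.params)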
The bare BlenderbotSmall Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
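As an illustration, a small hedged sketch of JIT-compiling the forward pass; the parameters are passed explicitly so they are traced as an input rather than baked into the compiled function:
import jax
from transformers import AutoTokenizer, FlaxBlenderbotSmallModel

tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")
model = FlaxBlenderbotSmallModel.from_pretrained("facebook/blenderbot_small-90M")
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")

@jax.jit
def forward(input_ids, attention_mask, params):
    # Subsequent calls with the same input shapes reuse the compiled computation.
    return model(input_ids=input_ids, attention_mask=attention_mask, params=params).last_hidden_state

last_hidden_states = forward(inputs["input_ids"], inputs["attention_mask"], model.params)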
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or tuple(torch.FloatTensor)
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BlenderbotSmallConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
Example:
from transformers import AutoTokenizer, FlaxBlenderbotSmallModel
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")
model = FlaxBlenderbotSmallModel.from_pretrained("facebook/blenderbot_small-90M")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
encode
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BlenderbotSmallConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example:
from transformers import AutoTokenizer, FlaxBlenderbotSmallForConditionalGeneration
model = FlaxBlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M")
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="np")
encoder_outputs = model.encode(**inputs)
decode
(
decoder_input_ids
encoder_outputs
encoder_attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
past_key_values: dict = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
encoder_outputs (tuple(tuple(jnp.ndarray))) —
Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify to your needs. See diagram 1 in the
paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BlenderbotSmallConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
Example:
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxBlenderbotSmallForConditionalGeneration
model = FlaxBlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M")
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="np")
encoder_outputs = model.encode(**inputs)
decoder_start_token_id = model.config.decoder_start_token_id
decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
outputs = model.decode(decoder_input_ids, encoder_outputs)
last_decoder_hidden_states = outputs.last_hidden_state
FlaxBlenderbotSmallForConditionalGeneration
class transformers.FlaxBlenderbotSmallForConditionalGeneration
(
config: BlenderbotSmallConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (BlenderbotSmallConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified, all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The BlenderbotSmall Model with a language modeling head. Can be used for summarization.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify to your needs. See diagram 1 in the
paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BlenderbotSmallConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxBlenderbotSmallPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Summarization example:
from transformers import AutoTokenizer, FlaxBlenderbotSmallForConditionalGeneration
model = FlaxBlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M")
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")
ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="np")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"]).sequences
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False))
Mask filling example:
import jax
from transformers import AutoTokenizer, FlaxBlenderbotSmallForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")
TXT = "My friends are <mask> but they eat too many carbs."
model = FlaxBlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M")
input_ids = tokenizer([TXT], return_tensors="np")["input_ids"]
logits = model(input_ids).logits
masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero()[0].item()
probs = jax.nn.softmax(logits[0, masked_index], axis=0)
values, predictions = jax.lax.top_k(probs, k=1)
tokenizer.decode(predictions).split()
encode
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BlenderbotSmallConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example:
from transformers import AutoTokenizer, FlaxBlenderbotSmallForConditionalGeneration
model = FlaxBlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M")
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="np")
encoder_outputs = model.encode(**inputs)
decode
(
decoder_input_ids
encoder_outputs
encoder_attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
past_key_values: dict = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
deterministic: bool = True
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
encoder_outputs (tuple(tuple(jnp.ndarray))) —
Tuple consisting of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify to your needs. See diagram 1 in the
paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BlenderbotSmallConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key and value
states of the self-attention and the cross-attention layers if the model is used in an encoder-decoder setting.
Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
Example:
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxBlenderbotSmallForConditionalGeneration
model = FlaxBlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M")
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="np")
encoder_outputs = model.encode(**inputs)
decoder_start_token_id = model.config.decoder_start_token_id
decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
outputs = model.decode(decoder_input_ids, encoder_outputs)
logits = outputs.logits
BEiT
Overview
The BEiT model was proposed in BEiT: BERT Pre-Training of Image Transformers by
Hangbo Bao, Li Dong and Furu Wei. Inspired by BERT, BEiT is the first paper that makes self-supervised pre-training of
Vision Transformers (ViTs) outperform supervised pre-training. Rather than pre-training the model to predict the class
of an image (as done in the original ViT paper), BEiT models are pre-trained to
predict visual tokens from the codebook of OpenAI’s DALL-E model given masked
patches.
The abstract from the paper is the following:
We introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation
from Image Transformers. Following BERT developed in the natural language processing area, we propose a masked image
modeling task to pretrain vision Transformers. Specifically, each image has two views in our pre-training, i.e, image
patches (such as 16x16 pixels), and visual tokens (i.e., discrete tokens). We first “tokenize” the original image into
visual tokens. Then we randomly mask some image patches and fed them into the backbone Transformer. The pre-training
objective is to recover the original visual tokens based on the corrupted image patches. After pre-training BEiT, we
directly fine-tune the model parameters on downstream tasks by appending task layers upon the pretrained encoder.
Experimental results on image classification and semantic segmentation show that our model achieves competitive results
with previous pre-training methods. For example, base-size BEiT achieves 83.2% top-1 accuracy on ImageNet-1K,
significantly outperforming from-scratch DeiT training (81.8%) with the same setup. Moreover, large-size BEiT obtains
86.3% only using ImageNet-1K, even outperforming ViT-L with supervised pre-training on ImageNet-22K (85.2%).
Tips:
BEiT models are regular Vision Transformers, but pre-trained in a self-supervised way rather than supervised. They
outperform both the original model (ViT) as well as Data-efficient Image Transformers (DeiT) when fine-tuned on ImageNet-1K and CIFAR-100. You can check out demo notebooks regarding inference as well as
fine-tuning on custom data here (you can just replace
ViTFeatureExtractor by BeitImageProcessor and
ViTForImageClassification by BeitForImageClassification).
There’s also a demo notebook available which showcases how to combine DALL-E’s image tokenizer with BEiT for
performing masked image modeling. You can find it here.
As the BEiT models expect each image to be of the same size (resolution), one can use
BeitImageProcessor to resize (or rescale) and normalize images for the model (see the sketch after these tips).
Both the patch resolution and image resolution used during pre-training or fine-tuning are reflected in the name of
each checkpoint. For example, microsoft/beit-base-patch16-224 refers to a base-sized architecture with patch
resolution of 16x16 and fine-tuning resolution of 224x224. All checkpoints can be found on the hub.
The available checkpoints are either (1) pre-trained on ImageNet-22k (a collection of
14 million images and 22k classes) only, (2) also fine-tuned on ImageNet-22k or (3) also fine-tuned on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million
images and 1,000 classes).
BEiT uses relative position embeddings, inspired by the T5 model. During pre-training, the authors shared the
relative position bias among the several self-attention layers. During fine-tuning, each layer’s relative position
bias is initialized with the shared relative position bias obtained after pre-training. Note that, if one wants to
pre-train a model from scratch, one needs to either set the use_relative_position_bias or the
use_shared_relative_position_bias attribute of BeitConfig to True in order to add relative position
embeddings.
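As a concrete, hedged sketch of the image-classification workflow mentioned in the tips above (the image URL is only illustrative; the checkpoint is the ImageNet-1k fine-tuned one named above):
import torch
import requests
from PIL import Image
from transformers import BeitImageProcessor, BeitForImageClassification

# Illustrative example image.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = BeitImageProcessor.from_pretrained("microsoft/beit-base-patch16-224")
model = BeitForImageClassification.from_pretrained("microsoft/beit-base-patch16-224")

# Resize, rescale and normalize the image, then classify it.
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])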
BEiT pre-training. Taken from the original paper.
This model was contributed by nielsr. The JAX/FLAX version of this model was
contributed by kamalkraj. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BEiT.
Image Classification
BeitForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
Semantic segmentation
Semantic segmentation task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
BEiT specific outputs
class transformers.models.beit.modeling_beit.BeitModelOutputWithPooling
(
last_hidden_state: FloatTensor = None
pooler_output: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) —
Average of the last layer hidden states of the patch tokens (excluding the [CLS] token) if
config.use_mean_pooling is set to True. If set to False, then the final hidden state of the [CLS] token
will be returned.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Class for outputs of BeitModel.
class transformers.models.beit.modeling_flax_beit.FlaxBeitModelOutputWithPooling
(
last_hidden_state: Array = None
pooler_output: Array = None
hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None
attentions: typing.Optional[typing.Tuple[jax.Array]] = None
)
Parameters
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) —
Average of the last layer hidden states of the patch tokens (excluding the [CLS] token) if
config.use_mean_pooling is set to True. If set to False, then the final hidden state of the [CLS] token
will be returned.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus
the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
Class for outputs of FlaxBeitModel.
BeitConfig
class transformers.BeitConfig
(
vocab_size = 8192
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
initializer_range = 0.02
layer_norm_eps = 1e-12
image_size = 224
patch_size = 16
num_channels = 3
use_mask_token = False
use_absolute_position_embeddings = False
use_relative_position_bias = False
use_shared_relative_position_bias = False
layer_scale_init_value = 0.1
drop_path_rate = 0.1
use_mean_pooling = True
out_indices = [3, 5, 7, 11]
pool_scales = [1, 2, 3, 6]
use_auxiliary_head = True
auxiliary_loss_weight = 0.4
auxiliary_channels = 256
auxiliary_num_convs = 1
auxiliary_concat_input = False
semantic_loss_ignore_index = 255
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 8192) —
Vocabulary size of the BEiT model. Defines the number of different image tokens that can be used during
pre-training.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 16) —
The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) —
The number of input channels.
use_mask_token (bool, optional, defaults to False) —
Whether to use a mask token for masked image modeling.
use_absolute_position_embeddings (bool, optional, defaults to False) —
Whether to use BERT-style absolute position embeddings.
use_relative_position_bias (bool, optional, defaults to False) —
Whether to use T5-style relative position embeddings in the self-attention layers.
use_shared_relative_position_bias (bool, optional, defaults to False) —
Whether to use the same relative position embeddings across all self-attention layers of the Transformer.
layer_scale_init_value (float, optional, defaults to 0.1) —
Scale to use in the self-attention layers. 0.1 for base, 1e-5 for large. Set 0 to disable layer scale.
drop_path_rate (float, optional, defaults to 0.1) —
Stochastic depth rate per sample (when applied in the main path of residual layers).
use_mean_pooling (bool, optional, defaults to True) —
Whether to mean pool the final hidden states of the patches instead of using the final hidden state of the
CLS token, before applying the classification head.
out_indices (List[int], optional, defaults to [3, 5, 7, 11]) —
Indices of the feature maps to use for semantic segmentation.
pool_scales (Tuple[int], optional, defaults to [1, 2, 3, 6]) —
Pooling scales used in Pooling Pyramid Module applied on the last feature map.
use_auxiliary_head (bool, optional, defaults to True) —
Whether to use an auxiliary head during training.
auxiliary_loss_weight (float, optional, defaults to 0.4) —
Weight of the cross-entropy loss of the auxiliary head.
auxiliary_channels (int, optional, defaults to 256) —
Number of channels to use in the auxiliary head.
auxiliary_num_convs (int, optional, defaults to 1) —
Number of convolutional layers to use in the auxiliary head.
auxiliary_concat_input (bool, optional, defaults to False) —
Whether to concatenate the output of the auxiliary head with the input before the classification layer.
semantic_loss_ignore_index (int, optional, defaults to 255) —
The index that is ignored by the loss function of the semantic segmentation model.
This is the configuration class to store the configuration of a BeitModel. It is used to instantiate a BEiT
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the BEiT
microsoft/beit-base-patch16-224-pt22k architecture.
Example:
from transformers import BeitConfig, BeitModel
# Initializing a BEiT beit-base-patch16-224-pt22k style configuration
configuration = BeitConfig()
# Initializing a model (with random weights) from the beit-base-patch16-224-pt22k style configuration
model = BeitModel(configuration)
# Accessing the model configuration
configuration = model.config
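As a further hedged sketch, the segmentation-related fields documented above can be overridden when building the configuration; the values below are purely illustrative:
from transformers import BeitConfig, BeitForSemanticSegmentation

# Illustrative overrides of the segmentation-specific fields.
configuration = BeitConfig(
    use_relative_position_bias=True,
    out_indices=[3, 5, 7, 11],
    pool_scales=[1, 2, 3, 6],
    use_auxiliary_head=True,
    auxiliary_loss_weight=0.4,
)
model = BeitForSemanticSegmentation(configuration)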
BeitFeatureExtractor
class transformers.BeitFeatureExtractor
(
*args
**kwargs
)
__call__
(
images
segmentation_maps = None
**kwargs
)
post_process_semantic_segmentation
(
outputs
target_sizes: typing.List[typing.Tuple] = None
)
→
semantic_segmentation
Parameters
outputs (BeitForSemanticSegmentation) —
Raw outputs of the model.
target_sizes (List[Tuple] of length batch_size, optional) —
List of tuples corresponding to the requested final size (height, width) of each prediction. If left to
None, predictions will not be resized.
Returns
semantic_segmentation
List[torch.Tensor] of length batch_size, where each item is a semantic
segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is
specified). Each entry of each torch.Tensor corresponds to a semantic class id.
Converts the output of BeitForSemanticSegmentation into semantic segmentation maps. Only supports PyTorch.
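For illustration, a hedged sketch of end-to-end semantic segmentation with this post-processing step (BeitImageProcessor exposes the same post_process_semantic_segmentation method; the image URL and the ADE20k-fine-tuned checkpoint name are assumptions used only for the example):
import torch
import requests
from PIL import Image
from transformers import BeitImageProcessor, BeitForSemanticSegmentation

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative image
image = Image.open(requests.get(url, stream=True).raw)

image_processor = BeitImageProcessor.from_pretrained("microsoft/beit-base-finetuned-ade-640-640")
model = BeitForSemanticSegmentation.from_pretrained("microsoft/beit-base-finetuned-ade-640-640")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Rescale the logits to the original (height, width) and take the per-pixel argmax.
segmentation_map = image_processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]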
BeitImageProcessor
class transformers.BeitImageProcessor
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BICUBIC: 3>
do_center_crop: bool = True
crop_size: typing.Dict[str, int] = None
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_rescale: bool = True
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_reduce_labels: bool = False
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the
do_resize parameter in the preprocess method.
size (Dict[str, int], optional, defaults to {"height": 256, "width": 256}) —
Size of the output image after resizing. Can be overridden by the size parameter in the preprocess
method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) —
Resampling filter to use if resizing the image. Can be overridden by the resample parameter in the
preprocess method.
do_center_crop (bool, optional, defaults to True) —
Whether to center crop the image. If the input size is smaller than crop_size along any edge, the image
is padded with 0’s and then center cropped. Can be overridden by the do_center_crop parameter in the
preprocess method.
crop_size (Dict[str, int], optional, defaults to {"height" -- 224, "width": 224}):
Desired output size when applying center-cropping. Only has an effect if do_center_crop is set to True.
Can be overridden by the crop_size parameter in the preprocess method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale
parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the
preprocess method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess
method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
The mean to use if normalizing the image. This is a float or list of floats of length of the number of
channels of the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
The standard deviation to use if normalizing the image. This is a float or list of floats of length of the
number of channels of the image. Can be overridden by the image_std parameter in the preprocess method.
do_reduce_labels (bool, optional, defaults to False) —
Whether or not to reduce all label values of segmentation maps by 1. Usually used for datasets where 0 is
used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k). The
background label will be replaced by 255. Can be overridden by the do_reduce_labels parameter in the
preprocess method.
Constructs a BEiT image processor.
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
segmentation_maps: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')], NoneType] = None
do_resize: bool = None
size: typing.Dict[str, int] = None
resample: Resampling = None
do_center_crop: bool = None
crop_size: typing.Dict[str, int] = None
do_rescale: bool = None
rescale_factor: float = None
do_normalize: bool = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_reduce_labels: typing.Optional[bool] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resizing.
resample (int, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only
has an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the image after center cropping. If one edge of the image is smaller than crop_size, it will be
padded with zeros and then center cropped.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values to the range [0, 1].
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation.
do_reduce_labels (bool, optional, defaults to self.do_reduce_labels) —
Whether or not to reduce all label values of segmentation maps by 1. Usually used for datasets where 0
is used for background, and background itself is not included in all classes of a dataset (e.g.
ADE20k). The background label will be replaced by 255.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Preprocess an image or batch of images.
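For example, the sketch below preprocesses a dummy image together with a segmentation map by calling the processor directly (which forwards to preprocess). The NumPy arrays, the custom size, and the expected "pixel_values"/"labels" output keys are illustrative assumptions, not guarantees of a specific checkpoint:
import numpy as np
from transformers import BeitImageProcessor

# Illustrative sketch (assumed shapes and output keys; no pretrained checkpoint needed)
image_processor = BeitImageProcessor(size={"height": 256, "width": 256}, do_reduce_labels=True)

image = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)          # HWC image
segmentation_map = np.random.randint(0, 150, size=(480, 640), dtype=np.uint8)  # per-pixel class ids

encoded = image_processor(images=image, segmentation_maps=segmentation_map, return_tensors="pt")
print(encoded["pixel_values"].shape)  # expected (1, 3, 224, 224) after resize + center crop
print(encoded["labels"].shape)        # expected (1, 224, 224), label values reduced by 1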
post_process_semantic_segmentation
(
outputs
target_sizes: typing.List[typing.Tuple] = None
)
→
semantic_segmentation
Parameters
outputs (BeitForSemanticSegmentation) —
Raw outputs of the model.
target_sizes (List[Tuple] of length batch_size, optional) —
List of tuples corresponding to the requested final size (height, width) of each prediction. If left to
None, predictions will not be resized.
Returns
semantic_segmentation
List[torch.Tensor] of length batch_size, where each item is a semantic
segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is
specified). Each entry of each torch.Tensor corresponds to a semantic class id.
Converts the output of BeitForSemanticSegmentation into semantic segmentation maps. Only supports PyTorch.
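A short sketch of the full pipeline is given below; the checkpoint and image URL mirror the BeitForSemanticSegmentation example later on this page and are illustrative choices only:
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, BeitForSemanticSegmentation

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-finetuned-ade-640-640")
model = BeitForSemanticSegmentation.from_pretrained("microsoft/beit-base-finetuned-ade-640-640")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Resize each prediction back to the original (height, width) of its image
segmentation = image_processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(segmentation.shape)  # (height, width), one semantic class id per pixel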
BeitModel
class transformers.BeitModel
(
config: BeitConfig
add_pooling_layer: bool = True
)
Parameters
config (BeitConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Beit Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
bool_masked_pos: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.beit.modeling_beit.BeitModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
BeitImageProcessor.__call__() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches), optional) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.models.beit.modeling_beit.BeitModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.models.beit.modeling_beit.BeitModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BeitConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Average of the last layer hidden states of the patch tokens (excluding the [CLS] token) if
config.use_mean_pooling is set to True. If set to False, then the final hidden state of the [CLS] token
will be returned.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BeitModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, BeitModel
import torch
from datasets import load_dataset

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224-pt22k")
model = BeitModel.from_pretrained("microsoft/beit-base-patch16-224-pt22k")

inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
# [1, 197, 768]
BeitForMaskedImageModeling
class transformers.BeitForMaskedImageModeling
(
config: BeitConfig
)
Parameters
config (BeitConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Beit Model transformer with a ‘language’ modeling head on top. BEiT does masked image modeling by predicting
visual tokens of a Vector-Quantized Variational Autoencoder (VQ-VAE), whereas other vision models like ViT and DeiT
predict RGB pixel values. As a result, this class is incompatible with AutoModelForMaskedImageModeling, so you
will need to use BeitForMaskedImageModeling directly if you wish to do masked image modeling with BEiT.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
bool_masked_pos: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
BeitImageProcessor.__call__() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches)) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BeitConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BeitForMaskedImageModeling forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, BeitForMaskedImageModeling
import torch
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224-pt22k")
model = BeitForMaskedImageModeling.from_pretrained("microsoft/beit-base-patch16-224-pt22k")
num_patches = (model.config.image_size // model.config.patch_size) ** 2
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
# create random boolean mask of shape (batch_size, num_patches)
bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss, logits = outputs.loss, outputs.logits
list(logits.shape)
# [1, 196, 8192]
BeitForImageClassification
class transformers.BeitForImageClassification
(
config: BeitConfig
)
Parameters
config (BeitConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Beit Model transformer with an image classification head on top (a linear layer on top of the average of the final
hidden states of the patch tokens) e.g. for ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
BeitImageProcessor.__call__() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BeitConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states
(also called feature maps) of the model at the output of each stage.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BeitForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, BeitForImageClassification
import torch
from datasets import load_dataset

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224")
model = BeitForImageClassification.from_pretrained("microsoft/beit-base-patch16-224")

inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
# tabby, tabby cat
BeitForSemanticSegmentation
class transformers.BeitForSemanticSegmentation
(
config: BeitConfig
)
Parameters
config (BeitConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Beit Model transformer with a semantic segmentation head on top e.g. for ADE20k, CityScapes.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
BeitImageProcessor.__call__() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, height, width), optional) —
Ground truth semantic segmentation maps for computing the loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SemanticSegmenterOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BeitConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel.
The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is
to avoid doing two interpolations and losing some quality when a user needs to resize the logits to the
original image size as post-processing. You should always check your logits shape and resize as needed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BeitForSemanticSegmentation forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, BeitForSemanticSegmentation
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-finetuned-ade-640-640")
model = BeitForSemanticSegmentation.from_pretrained("microsoft/beit-base-finetuned-ade-640-640")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# logits are of shape (batch_size, num_labels, height, width)
logits = outputs.logits
FlaxBeitModel
class transformers.FlaxBeitModel
(
config: BeitConfig
input_shape = None
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (BeitConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The bare Beit Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
pixel_values
bool_masked_pos = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.beit.modeling_flax_beit.FlaxBeitModelOutputWithPooling or tuple(jnp.ndarray)
Returns
transformers.models.beit.modeling_flax_beit.FlaxBeitModelOutputWithPooling or tuple(jnp.ndarray)
A transformers.models.beit.modeling_flax_beit.FlaxBeitModelOutputWithPooling or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BeitConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Average of the last layer hidden states of the patch tokens (excluding the [CLS] token) if
config.use_mean_pooling is set to True. If set to False, then the final hidden state of the [CLS] token
will be returned.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus
the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The FlaxBeitPreTrainedModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, FlaxBeitModel
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224-pt22k-ft22k")
model = FlaxBeitModel.from_pretrained("microsoft/beit-base-patch16-224-pt22k-ft22k")
inputs = image_processor(images=image, return_tensors="np")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
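The dtype argument described above can also be used for half-precision inference. A minimal sketch, assuming the same checkpoint as the example above and that bfloat16 is appropriate for your hardware:
import jax.numpy as jnp
from transformers import FlaxBeitModel

# Run the computation in bfloat16; the parameters keep their own dtype
# (use to_fp16()/to_bf16() to convert the weights as well).
model = FlaxBeitModel.from_pretrained(
    "microsoft/beit-base-patch16-224-pt22k-ft22k", dtype=jnp.bfloat16
)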
FlaxBeitForMaskedImageModeling
class transformers.FlaxBeitForMaskedImageModeling
(
config: BeitConfig
input_shape = None
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (BeitConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Beit Model transformer with a ‘language’ modeling head on top (to predict visual tokens).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
pixel_values
bool_masked_pos = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
Returns
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BeitConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxBeitPreTrainedModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
bool_masked_pos (numpy.ndarray of shape (batch_size, num_patches)):
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Examples:
from transformers import AutoImageProcessor, FlaxBeitForMaskedImageModeling
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224-pt22k")
model = FlaxBeitForMaskedImageModeling.from_pretrained("microsoft/beit-base-patch16-224-pt22k")

inputs = image_processor(images=image, return_tensors="np")
outputs = model(**inputs)
logits = outputs.logits
FlaxBeitForImageClassification
class transformers.FlaxBeitForImageClassification
(
config: BeitConfig
input_shape = None
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (BeitConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
Beit Model transformer with an image classification head on top (a linear layer on top of the average of the final
hidden states of the patch tokens) e.g. for ImageNet.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
pixel_values
bool_masked_pos = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
Returns
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BeitConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxBeitPreTrainedModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, FlaxBeitForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224")
model = FlaxBeitForImageClassification.from_pretrained("microsoft/beit-base-patch16-224")
inputs = image_processor(images=image, return_tensors="np")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
ELECTRA
Overview
The ELECTRA model was proposed in the paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than
Generators. ELECTRA is a new pretraining approach which trains two
transformer models: the generator and the discriminator. The generator’s role is to replace tokens in a sequence, and
is therefore trained as a masked language model. The discriminator, which is the model we’re interested in, tries to
identify which tokens were replaced by the generator in the sequence.
The abstract from the paper is the following:
Masked language modeling (MLM) pretraining methods such as BERT corrupt the input by replacing some tokens with [MASK]
and then train a model to reconstruct the original tokens. While they produce good results when transferred to
downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a
more sample-efficient pretraining task called replaced token detection. Instead of masking the input, our approach
corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead
of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that
predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments
demonstrate this new pretraining task is more efficient than MLM because the task is defined over all input tokens
rather than just the small subset that was masked out. As a result, the contextual representations learned by our
approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are
particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained
using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale,
where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when
using the same amount of compute.
Tips:
ELECTRA is the pretraining approach, so there are nearly no changes to the underlying model: BERT. The
only change is the separation of the embedding size and the hidden size: the embedding size is generally smaller,
while the hidden size is larger. An additional projection layer (linear) is used to project the embeddings from their
embedding size to the hidden size. In the case where the embedding size is the same as the hidden size, no projection
layer is used.
ELECTRA is a transformer model pretrained with the help of another (small) masked language model. That language model
corrupts the inputs: it takes text that has been randomly masked and outputs a text in which ELECTRA has to predict
which tokens are original and which have been replaced. As in GAN training, the small language model is trained for a
few steps (but with the original texts as objective, not to fool the ELECTRA model as in a traditional GAN setting),
then the ELECTRA model is trained for a few steps.
The ELECTRA checkpoints saved using Google Research’s implementation
contain both the generator and the discriminator. The conversion script requires the user to name which model to
export into the correct architecture. Once converted to the HuggingFace format, these checkpoints may nevertheless be
loaded into all available ELECTRA models. This means that the discriminator may be loaded in the
ElectraForMaskedLM model, and the generator may be loaded in the
ElectraForPreTraining model (the classification head will be randomly initialized as it
doesn’t exist in the generator). A short usage sketch for loading the discriminator follows these tips.
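The sketch below loads the small discriminator checkpoint with ElectraForPreTraining and inspects its per-token logits on a manually corrupted sentence; the checkpoint name, the example sentence, and the thresholding at 0 are illustrative assumptions:
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")

# "beach" stands in for a token swapped in by the generator
inputs = tokenizer("the chef cooked the beach", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (batch_size, sequence_length)

# Positive logits mean the discriminator considers the token replaced
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, score in zip(tokens, logits[0]):
    print(f"{token:>10s}  {'replaced' if score > 0 else 'original'}")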
This model was contributed by lysandre. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
ElectraConfig
class transformers.ElectraConfig
(
vocab_size = 30522
embedding_size = 128
hidden_size = 256
num_hidden_layers = 12
num_attention_heads = 4
intermediate_size = 1024
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
summary_type = 'first'
summary_use_proj = True
summary_activation = 'gelu'
summary_last_dropout = 0.1
pad_token_id = 0
position_embedding_type = 'absolute'
use_cache = True
classifier_dropout = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the ELECTRA model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling ElectraModel or TFElectraModel.
embedding_size (int, optional, defaults to 128) —
Dimensionality of the embedding layer.
hidden_size (int, optional, defaults to 256) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 4) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 1024) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling ElectraModel or TFElectraModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
summary_type (str, optional, defaults to "first") —
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
Has to be one of the following options:
"last": Take the last token hidden state (like XLNet).
"first": Take the first token hidden state (like BERT).
"mean": Take the mean of all tokens hidden states.
"cls_index": Supply a Tensor of classification token position (like GPT/GPT-2).
"attn": Not implemented now, use multi-head attention.
summary_use_proj (bool, optional, defaults to True) —
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
Whether or not to add a projection after the vector extraction.
summary_activation (str, optional) —
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
Pass "gelu" for a gelu activation to the output, any other value will result in no activation.
summary_last_dropout (float, optional, defaults to 0.1) —
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
The dropout ratio to be used after the projection and activation.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
This is the configuration class to store the configuration of an ElectraModel or a TFElectraModel. It is
used to instantiate an ELECTRA model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the ELECTRA
google/electra-small-discriminator architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import ElectraConfig, ElectraModel

# Initializing an ELECTRA electra-base-uncased style configuration
configuration = ElectraConfig()
# Initializing a model (with random weights) from the electra-base-uncased style configuration
model = ElectraModel(configuration)
# Accessing the model configuration
configuration = model.config
ElectraTokenizer
class transformers.ElectraTokenizer
(
vocab_file
do_lower_case = True
do_basic_tokenize = True
never_split = None
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (bool, optional, defaults to True) —
Whether or not to do basic tokenization before WordPiece.
never_split (Iterable, optional) —
Collection of tokens which will never be split during tokenization. Only has an effect when
do_basic_tokenize=True
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original Electra).
Construct an ELECTRA tokenizer, based on WordPiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. An ELECTRA sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
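A small sketch of that layout, using a public ELECTRA vocabulary as an illustrative choice:
from transformers import ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-discriminator")

single = tokenizer.build_inputs_with_special_tokens(
    tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))
)
pair = tokenizer.build_inputs_with_special_tokens(
    tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello")),
    tokenizer.convert_tokens_to_ids(tokenizer.tokenize("world")),
)
print(tokenizer.convert_ids_to_tokens(single))  # expected ['[CLS]', 'hello', 'world', '[SEP]']
print(tokenizer.convert_ids_to_tokens(pair))    # expected ['[CLS]', 'hello', '[SEP]', 'world', '[SEP]']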
convert_tokens_to_string
(
tokens
)
Converts a sequence of tokens (string) in a single string.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. An ELECTRA
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
ElectraTokenizerFast
class transformers.ElectraTokenizerFast
(
vocab_file = None
tokenizer_file = None
do_lower_case = True
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
clean_text (bool, optional, defaults to True) —
Whether or not to clean the text before tokenization by removing any control characters and replacing all
whitespaces by the classic one.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original ELECTRA).
wordpieces_prefix (str, optional, defaults to "##") —
The prefix for subwords.
Construct a “fast” ELECTRA tokenizer (backed by HuggingFace’s tokenizers library). Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
<
source
>
(
token_ids_0
token_ids_1 = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. An ELECTRA sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. An ELECTRA
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
Electra specific outputs
class transformers.models.electra.modeling_electra.ElectraForPreTrainingOutput
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) —
Total loss of the ELECTRA objective.
logits (torch.FloatTensor of shape (batch_size, sequence_length)) —
Prediction scores of the head (scores for each token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Output type of ElectraForPreTraining.
class transformers.models.electra.modeling_tf_electra.TFElectraForPreTrainingOutput
(
logits: tf.Tensor = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
)
Parameters
loss (optional, returned when labels is provided, tf.Tensor of shape (1,)) —
Total loss of the ELECTRA objective.
logits (tf.Tensor of shape (batch_size, sequence_length)) —
Prediction scores of the head (scores for each token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Output type of TFElectraForPreTraining.
ElectraModel
class transformers.ElectraModel
( config )
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Electra Model transformer outputting raw hidden-states without any specific head on top. Identical to the BERT model except that it uses an additional linear layer between the embedding layer and the encoder if the hidden size and embedding size are different. Both the generator and discriminator checkpoints may be loaded into this model.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, encoder_hidden_states: typing.Optional[torch.Tensor] = None, encoder_attention_mask: typing.Optional[torch.Tensor] = None, past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ElectraConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The ElectraModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ElectraModel
import torch
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = ElectraModel.from_pretrained("google/electra-small-discriminator")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
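Since the generator and discriminator share the same backbone, the generator checkpoint loads the same way; a minimal sketch (using the google/electra-small-generator checkpoint that also appears in the masked language modeling example below):
from transformers import AutoTokenizer, ElectraModel

tokenizer = AutoTokenizer.from_pretrained("google/electra-small-generator")
model = ElectraModel.from_pretrained("google/electra-small-generator")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)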
ElectraForPreTraining
class transformers.ElectraForPreTraining
( config )
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Electra model with a binary classification head on top as used during pretraining for identifying generated tokens.
It is recommended to load the discriminator checkpoint into this model.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.models.electra.modeling_electra.ElectraForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the ELECTRA loss. Input should be a sequence of tokens (see input_ids docstring).
Indices should be in [0, 1]:
0 indicates the token is an original token,
1 indicates the token was replaced.
Returns
transformers.models.electra.modeling_electra.ElectraForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.electra.modeling_electra.ElectraForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ElectraConfig) and inputs.
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss of the ELECTRA objective.
logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Prediction scores of the head (scores for each token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ElectraForPreTraining forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import ElectraForPreTraining, AutoTokenizer
import torch
discriminator = ElectraForPreTraining.from_pretrained("google/electra-base-discriminator")
tokenizer = AutoTokenizer.from_pretrained("google/electra-base-discriminator")
sentence = "The quick brown fox jumps over the lazy dog"
fake_sentence = "The quick brown fox fake over the lazy dog"
fake_tokens = tokenizer.tokenize(fake_sentence, add_special_tokens=True)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
fake_tokens
['[CLS]', 'the', 'quick', 'brown', 'fox', 'fake', 'over', 'the', 'lazy', 'dog', '[SEP]']
predictions.squeeze().tolist()
[0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
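As a hedged extension of the example above (reusing discriminator, fake_inputs, and torch from it), the labels argument documented in the forward parameters below (0 = original token, 1 = replaced token) can be passed to obtain the pretraining loss; the label row here is written out by hand for the fake sentence:
labels = torch.tensor([[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]])  # 1 marks the replaced token "fake"
outputs = discriminator(fake_inputs, labels=labels)
loss, logits = outputs.loss, outputs.logits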
ElectraForCausalLM
class transformers.ElectraForCausalLM
( config )
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ELECTRA Model with a language modeling head on top for CLM fine-tuning.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, encoder_hidden_states: typing.Optional[torch.Tensor] = None, encoder_attention_mask: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, past_key_values: typing.Optional[typing.List[torch.Tensor]] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are
ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ElectraConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The ElectraForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ElectraForCausalLM, ElectraConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("google/electra-base-generator")
config = ElectraConfig.from_pretrained("google/electra-base-generator")
config.is_decoder = True
model = ElectraForCausalLM.from_pretrained("google/electra-base-generator", config=config)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
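The past_key_values and use_cache arguments described above can be used for incremental decoding. A minimal, hedged sketch of a manual greedy loop (the prompt is arbitrary, and the generator checkpoint is not a trained causal LM, so the continuation is only illustrative):
import torch
from transformers import AutoTokenizer, ElectraConfig, ElectraForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/electra-base-generator")
config = ElectraConfig.from_pretrained("google/electra-base-generator")
config.is_decoder = True
model = ElectraForCausalLM.from_pretrained("google/electra-base-generator", config=config)

input_ids = tokenizer("Hello, my dog is", return_tensors="pt").input_ids
past_key_values = None
with torch.no_grad():
    for _ in range(5):
        # After the first step, only the newest token is fed; the cache carries the rest
        outputs = model(
            input_ids if past_key_values is None else input_ids[:, -1:],
            past_key_values=past_key_values,
            use_cache=True,
        )
        past_key_values = outputs.past_key_values
        next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_token], dim=-1)

print(tokenizer.decode(input_ids[0]))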
ElectraForMaskedLM
class transformers.ElectraForMaskedLM
( config )
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Electra model with a language modeling head on top.
Even though both the discriminator and generator may be loaded into this model, the generator is the only model of
the two to have been trained for the masked language modeling task.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ElectraConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ElectraForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ElectraForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-generator")
model = ElectraForMaskedLM.from_pretrained("google/electra-small-generator")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
tokenizer.decode(predicted_token_id)
'paris'
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(outputs.loss.item(), 2)
1.22
ElectraForSequenceClassification
class transformers.ElectraForSequenceClassification
( config )
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ELECTRA Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ElectraConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ElectraForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, ElectraForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("bhadresh-savani/electra-base-emotion")
model = ElectraForSequenceClassification.from_pretrained("bhadresh-savani/electra-base-emotion")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
'joy'
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = ElectraForSequenceClassification.from_pretrained("bhadresh-savani/electra-base-emotion", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
0.06
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, ElectraForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("bhadresh-savani/electra-base-emotion")
model = ElectraForSequenceClassification.from_pretrained("bhadresh-savani/electra-base-emotion", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = ElectraForSequenceClassification.from_pretrained(
... "bhadresh-savani/electra-base-emotion", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
ElectraForMultipleChoice
class transformers.ElectraForMultipleChoice
( config )
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ELECTRA Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ElectraConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ElectraForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ElectraForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = ElectraForMultipleChoice.from_pretrained("google/electra-small-discriminator")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
ElectraForTokenClassification
class transformers.ElectraForTokenClassification
( config )
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Electra model with a token classification head on top.
Both the discriminator and generator may be loaded into this model.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ElectraConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ElectraForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ElectraForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("bhadresh-savani/electra-base-discriminator-finetuned-conll03-english")
model = ElectraForTokenClassification.from_pretrained("bhadresh-savani/electra-base-discriminator-finetuned-conll03-english")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
predicted_tokens_classes
['B-LOC', 'B-ORG', 'O', 'O', 'O', 'O', 'O', 'B-LOC', 'O', 'B-LOC', 'I-LOC']
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
0.11
ElectraForQuestionAnswering
class transformers.ElectraForQuestionAnswering
( config )
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ELECTRA Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward( input_ids: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, token_type_ids: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.Tensor] = None, head_mask: typing.Optional[torch.Tensor] = None, inputs_embeds: typing.Optional[torch.Tensor] = None, start_positions: typing.Optional[torch.Tensor] = None, end_positions: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ElectraConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ElectraForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ElectraForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("bhadresh-savani/electra-base-squad2")
model = ElectraForQuestionAnswering.from_pretrained("bhadresh-savani/electra-base-squad2")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
'a nice puppet'
# target is "nice puppet"
target_start_index = torch.tensor([11])
target_end_index = torch.tensor([12])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
round(loss.item(), 2)
2.64
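If you need a confidence estimate for the extracted span rather than just the indices, one option (a short sketch, not part of the original example) is to normalize the logits with a softmax and multiply the start and end probabilities of the selected positions:
start_probs = outputs.start_logits.softmax(-1)
end_probs = outputs.end_logits.softmax(-1)
# probability-style score for the span picked by argmax above
span_score = float(start_probs[0, answer_start_index] * end_probs[0, answer_end_index])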
TFElectraModel
class transformers.TFElectraModel
(
*args
**kwargs
)
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Electra Model transformer outputting raw hidden-states without any specific head on top. Identical to the BERT model except that it uses an additional linear layer between the embedding layer and the encoder if the hidden size and embedding size are different. Both the generator and discriminator checkpoints may be loaded into this model.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
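As an illustration, the three formats above are equivalent ways of calling the model outside of fit() and predict(); this is only a sketch, reusing the small discriminator checkpoint from the example below:
import tensorflow as tf
from transformers import AutoTokenizer, TFElectraModel

tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = TFElectraModel.from_pretrained("google/electra-small-discriminator")
enc = tokenizer("Hello, my dog is cute", return_tensors="tf")

out_kwargs = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])  # keyword arguments
out_list = model([enc["input_ids"], enc["attention_mask"]])  # list, in the order given in the docstring
out_dict = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})  # dict keyed by input names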
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
encoder_hidden_states: np.ndarray | tf.Tensor | None = None
encoder_attention_mask: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and True during generation.
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ElectraConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The TFElectraModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFElectraModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = TFElectraModel.from_pretrained("google/electra-small-discriminator")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
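If you also need the intermediate representations, you can request them with output_hidden_states=True; a short sketch continuing the example above:
outputs = model(inputs, output_hidden_states=True)
len(outputs.hidden_states)  # embedding output + one entry per layer
outputs.last_hidden_state.shape  # (batch_size, sequence_length, hidden_size)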
TFElectraForPreTraining
class transformers.TFElectraForPreTraining
(
*args
**kwargs
)
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Electra model with a binary classification head on top as used during pretraining for identifying generated tokens.
Even though both the discriminator and generator may be loaded into this model, the discriminator is the only one
of the two with the correct classification head for this task.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.models.electra.modeling_tf_electra.TFElectraForPreTrainingOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.models.electra.modeling_tf_electra.TFElectraForPreTrainingOutput or tuple(tf.Tensor)
A transformers.models.electra.modeling_tf_electra.TFElectraForPreTrainingOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ElectraConfig) and inputs.
loss (optional, returned when labels is provided, tf.Tensor of shape (1,)) — Total loss of the ELECTRA objective.
logits (tf.Tensor of shape (batch_size, sequence_length)) — Prediction scores of the head (scores for each token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFElectraForPreTraining forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Examples:
import tensorflow as tf
from transformers import AutoTokenizer, TFElectraForPreTraining
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = TFElectraForPreTraining.from_pretrained("google/electra-small-discriminator")
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
outputs = model(input_ids)
scores = outputs[0]
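The scores are one logit per token; a positive logit means the discriminator predicts that the token was replaced by the generator. A hedged follow-up sketch:
# 1 = predicted "replaced", 0 = predicted "original" (threshold at logit 0, i.e. sigmoid 0.5)
predictions = tf.cast(scores > 0, tf.int32)
predictions.numpy()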
TFElectraForMaskedLM
class transformers.TFElectraForMaskedLM
(
*args
**kwargs
)
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Electra model with a language modeling head on top.
Even though both the discriminator and generator may be loaded into this model, the generator is the only model of
the two to have been trained for the masked language modeling task.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ElectraConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFElectraForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFElectraForMaskedLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-generator")
model = TFElectraForMaskedLM.from_pretrained("google/electra-small-generator")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf")
logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
tokenizer.decode(predicted_token_id)
'paris'
labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
# mask labels of non-[MASK] tokens
labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(float(outputs.loss), 2)
1.22
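Instead of a single argmax you can inspect several candidates for the masked position; a small sketch continuing the example above:
top5 = tf.math.top_k(selected_logits, k=5)
[tokenizer.decode([token_id]) for token_id in top5.indices[0].numpy()]  # 5 most likely fillers for [MASK]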
TFElectraForSequenceClassification
class transformers.TFElectraForSequenceClassification
(
*args
**kwargs
)
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ELECTRA Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ElectraConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFElectraForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFElectraForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("bhadresh-savani/electra-base-emotion")
model = TFElectraForSequenceClassification.from_pretrained("bhadresh-savani/electra-base-emotion")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
model.config.id2label[predicted_class_id]
'joy'
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFElectraForSequenceClassification.from_pretrained("bhadresh-savani/electra-base-emotion", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
round(float(loss), 2)
0.06
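Because TFElectraForSequenceClassification is a tf.keras.Model, it can also be fine-tuned with the usual compile()/fit() loop. The sketch below only illustrates the wiring (toy data and hyper-parameters are illustrative, not prescriptive); recent versions of Transformers fall back to the model's internal loss when compile() is called without one, otherwise pass an explicit loss:
import tensorflow as tf
from transformers import AutoTokenizer, TFElectraForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = TFElectraForSequenceClassification.from_pretrained("google/electra-small-discriminator", num_labels=2)

texts = ["I love this!", "This is terrible."]  # toy data for illustration only
labels = [1, 0]
enc = tokenizer(texts, padding=True, return_tensors="tf")
dataset = tf.data.Dataset.from_tensor_slices((dict(enc), labels)).batch(2)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5))  # model's internal loss is used
model.fit(dataset, epochs=1)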
TFElectraForMultipleChoice
class transformers.TFElectraForMultipleChoice
(
*args
**kwargs
)
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ELECTRA Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1]
where num_choices is the size of the second dimension of the input tensors. (See input_ids above)
Returns
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ElectraConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFElectraForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFElectraForMultipleChoice
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = TFElectraForMultipleChoice.from_pretrained("google/electra-small-discriminator")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
outputs = model(inputs) # batch size is 1
# the linear classifier still needs to be trained
logits = outputs.logits
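Once the head has been trained, the highest logit indicates the chosen answer; a short follow-up sketch (with the untrained head above, the result is not yet meaningful):
predicted_choice = int(tf.math.argmax(logits, axis=-1)[0])
[choice0, choice1][predicted_choice]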
TFElectraForTokenClassification
class transformers.TFElectraForTokenClassification
(
*args
**kwargs
)
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Electra model with a token classification head on top.
Both the discriminator and generator may be loaded into this model.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ElectraConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFElectraForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFElectraForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("bhadresh-savani/electra-base-discriminator-finetuned-conll03-english")
model = TFElectraForTokenClassification.from_pretrained("bhadresh-savani/electra-base-discriminator-finetuned-conll03-english")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word.
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
predicted_tokens_classes
['B-LOC', 'B-ORG', 'O', 'O', 'O', 'O', 'O', 'B-LOC', 'O', 'B-LOC', 'I-LOC']
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
round(float(loss), 2)
0.11
TFElectraForQuestionAnswering
class transformers.TFElectraForQuestionAnswering
(
*args
**kwargs
)
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Electra Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
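For illustration, the three formats above can be exercised as in the following minimal sketch (it is not part of the official example and simply reuses the bhadresh-savani/electra-base-squad2 checkpoint from the example further below; any TF Electra checkpoint would behave the same way):
from transformers import AutoTokenizer, TFElectraForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("bhadresh-savani/electra-base-squad2")
model = TFElectraForQuestionAnswering.from_pretrained("bhadresh-savani/electra-base-squad2")
enc = tokenizer("Who was Jim Henson?", "Jim Henson was a nice puppet", return_tensors="tf")
# 1. all inputs as keyword arguments (PyTorch-style)
out_kwargs = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])
# 2. a list in the first positional argument, in the order given in the docstring
out_list = model([enc["input_ids"], enc["attention_mask"], enc["token_type_ids"]])
# 3. a dictionary in the first positional argument, keyed by input name
out_dict = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})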
call( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, start_positions: np.ndarray | tf.Tensor | None = None, end_positions: np.ndarray | tf.Tensor | None = None, training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ElectraConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFElectraForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFElectraForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("bhadresh-savani/electra-base-squad2")
model = TFElectraForQuestionAnswering.from_pretrained("bhadresh-savani/electra-base-squad2")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
'a nice puppet'
# target is "nice puppet"
target_start_index = tf.constant([11])
target_end_index = tf.constant([12])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
round(float(loss), 2)
2.64
FlaxElectraModel
class transformers.FlaxElectraModel( config: ElectraConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, gradient_checkpointing: bool = False, **kwargs )
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Electra Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features (see the brief sketch after this list) such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
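As a brief, hedged sketch of the JIT support listed above (it is not part of the official example and reuses the google/electra-small-discriminator checkpoint from the example below), a forward pass can be wrapped in jax.jit:
import jax
from transformers import AutoTokenizer, FlaxElectraModel
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = FlaxElectraModel.from_pretrained("google/electra-small-discriminator")
@jax.jit
def encode(input_ids, attention_mask):
    # the module __call__ accepts numpy/jnp arrays; only the last hidden state is returned here
    return model(input_ids, attention_mask=attention_mask).last_hidden_state
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
hidden_states = encode(inputs["input_ids"], inputs["attention_mask"])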
__call__( input_ids, attention_mask = None, token_type_ids = None, position_ids = None, head_mask = None, encoder_hidden_states = None, encoder_attention_mask = None, params: dict = None, dropout_rng: PRNGKey = None, train: bool = False, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ElectraConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxElectraPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxElectraModel
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = FlaxElectraModel.from_pretrained("google/electra-small-discriminator")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
FlaxElectraForPreTraining
class transformers.FlaxElectraForPreTraining( config: ElectraConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, gradient_checkpointing: bool = False, **kwargs )
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Electra model with a binary classification head on top as used during pretraining for identifying generated tokens.
It is recommended to load the discriminator checkpoint into that model.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__( input_ids, attention_mask = None, token_type_ids = None, position_ids = None, head_mask = None, encoder_hidden_states = None, encoder_attention_mask = None, params: dict = None, dropout_rng: PRNGKey = None, train: bool = False, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, past_key_values: dict = None ) → transformers.models.electra.modeling_flax_electra.FlaxElectraForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.electra.modeling_flax_electra.FlaxElectraForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.electra.modeling_flax_electra.FlaxElectraForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ElectraConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxElectraPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxElectraForPreTraining
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = FlaxElectraForPreTraining.from_pretrained("google/electra-small-discriminator")
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
prediction_logits = outputs.logits
FlaxElectraForCausalLM
class transformers.FlaxElectraForCausalLM( config: ElectraConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, gradient_checkpointing: bool = False, **kwargs )
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Electra Model with a language modeling head on top (a linear layer on top of the hidden-states output), e.g. for
autoregressive tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__( input_ids, attention_mask = None, token_type_ids = None, position_ids = None, head_mask = None, encoder_hidden_states = None, encoder_attention_mask = None, params: dict = None, dropout_rng: PRNGKey = None, train: bool = False, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ElectraConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross-attention weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value
states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting.
Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The FlaxElectraPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxElectraForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = FlaxElectraForCausalLM.from_pretrained("google/electra-small-discriminator")
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
# retrieve logits for the next token
next_token_logits = outputs.logits[:, -1]
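As a small follow-up sketch (not part of the original example), the highest-scoring token can be read off these logits as a greedy next-token prediction; note that the discriminator checkpoint used above is not trained as a generator, so the decoded output is only illustrative:
import jax.numpy as jnp
# pick the highest-scoring token for the first (and only) sequence in the batch
next_token_id = int(jnp.argmax(next_token_logits, axis=-1)[0])
tokenizer.decode([next_token_id])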
FlaxElectraForMaskedLM
class transformers.FlaxElectraForMaskedLM( config: ElectraConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, gradient_checkpointing: bool = False, **kwargs )
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Electra Model with a language modeling head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__( input_ids, attention_mask = None, token_type_ids = None, position_ids = None, head_mask = None, encoder_hidden_states = None, encoder_attention_mask = None, params: dict = None, dropout_rng: PRNGKey = None, train: bool = False, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ElectraConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxElectraPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxElectraForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = FlaxElectraForMaskedLM.from_pretrained("google/electra-small-discriminator")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
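As a small follow-up sketch (not part of the original example), the top prediction at the [MASK] position can be decoded as follows; again, the discriminator checkpoint above is not a trained generator, so the decoded token is only illustrative:
import jax.numpy as jnp
# locate the first [MASK] token and take the highest-scoring vocabulary entry at that position
mask_index = int(jnp.argmax(inputs["input_ids"][0] == tokenizer.mask_token_id))
predicted_id = int(jnp.argmax(logits[0, mask_index]))
tokenizer.decode([predicted_id])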
FlaxElectraForSequenceClassification
class transformers.FlaxElectraForSequenceClassification( config: ElectraConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, gradient_checkpointing: bool = False, **kwargs )
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Electra Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__( input_ids, attention_mask = None, token_type_ids = None, position_ids = None, head_mask = None, encoder_hidden_states = None, encoder_attention_mask = None, params: dict = None, dropout_rng: PRNGKey = None, train: bool = False, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ElectraConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxElectraPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxElectraForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = FlaxElectraForSequenceClassification.from_pretrained("google/electra-small-discriminator")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
FlaxElectraForMultipleChoice
class transformers.FlaxElectraForMultipleChoice( config: ElectraConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, gradient_checkpointing: bool = False, **kwargs )
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ELECTRA Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__( input_ids, attention_mask = None, token_type_ids = None, position_ids = None, head_mask = None, encoder_hidden_states = None, encoder_attention_mask = None, params: dict = None, dropout_rng: PRNGKey = None, train: bool = False, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ElectraConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxElectraPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxElectraForMultipleChoice
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = FlaxElectraForMultipleChoice.from_pretrained("google/electra-small-discriminator")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="jax", padding=True)
outputs = model(**{k: v[None, :] for k, v in encoding.items()})
logits = outputs.logits
FlaxElectraForTokenClassification
class transformers.FlaxElectraForTokenClassification( config: ElectraConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, gradient_checkpointing: bool = False, **kwargs )
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Electra model with a token classification head on top.
Both the discriminator and generator may be loaded into this model.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__( input_ids, attention_mask = None, token_type_ids = None, position_ids = None, head_mask = None, encoder_hidden_states = None, encoder_attention_mask = None, params: dict = None, dropout_rng: PRNGKey = None, train: bool = False, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ElectraConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxElectraPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxElectraForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = FlaxElectraForTokenClassification.from_pretrained("google/electra-small-discriminator")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
FlaxElectraForQuestionAnswering
class transformers.FlaxElectraForQuestionAnswering( config: ElectraConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, gradient_checkpointing: bool = False, **kwargs )
Parameters
config (ElectraConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ELECTRA Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__( input_ids, attention_mask = None, token_type_ids = None, position_ids = None, head_mask = None, encoder_hidden_states = None, encoder_attention_mask = None, params: dict = None, dropout_rng: PRNGKey = None, train: bool = False, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ElectraConfig) and inputs.
start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxElectraPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxElectraForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = FlaxElectraForQuestionAnswering.from_pretrained("google/electra-small-discriminator")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="jax")
outputs = model(**inputs)
start_scores = outputs.start_logits
end_scores = outputs.end_logits
MusicGen
Overview
The MusicGen model was proposed in the paper Simple and Controllable Music Generation
by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi and Alexandre Défossez.
MusicGen is a single stage auto-regressive Transformer model capable of generating high-quality music samples conditioned
on text descriptions or audio prompts. The text descriptions are passed through a frozen text encoder model to obtain a
sequence of hidden-state representations. MusicGen is then trained to predict discrete audio tokens, or audio codes,
conditioned on these hidden-states. These audio tokens are then decoded using an audio compression model, such as EnCodec,
to recover the audio waveform.
Through an efficient token interleaving pattern, MusicGen does not require a self-supervised semantic representation of
the text/audio prompts, thus eliminating the need to cascade multiple models to predict a set of codebooks (e.g.
hierarchically or upsampling). Instead, it is able to generate all the codebooks in a single forward pass.
The abstract from the paper is the following:
We tackle the task of conditional music generation. We introduce MusicGen, a single Language Model (LM) that operates
over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen is comprised
of a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for
cascading several models, e.g., hierarchically or upsampling. Following this approach, we demonstrate how MusicGen
can generate high-quality samples, while being conditioned on textual description or melodic features, allowing better
controls over the generated output. We conduct extensive empirical evaluation, considering both automatic and human
studies, showing the proposed approach is superior to the evaluated baselines on a standard text-to-music benchmark.
Through ablation studies, we shed light over the importance of each of the components comprising MusicGen.
This model was contributed by sanchit-gandhi. The original code can be found
here. The pre-trained checkpoints can be found on the
Hugging Face Hub.
Generation
MusicGen is compatible with two generation modes: greedy and sampling. In practice, sampling leads to significantly
better results than greedy decoding, so we encourage using sampling mode where possible. Sampling is enabled by default,
and can be explicitly specified by setting do_sample=True in the call to MusicgenForConditionalGeneration.generate(),
or by overriding the model's generation config (see below).
Unconditional Generation
The inputs for unconditional (or ‘null’) generation can be obtained through the method
MusicgenForConditionalGeneration.get_unconditional_inputs():
from transformers import MusicgenForConditionalGeneration
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
unconditional_inputs = model.get_unconditional_inputs(num_samples=1)
audio_values = model.generate(**unconditional_inputs, do_sample=True, max_new_tokens=256)
The audio outputs are a three-dimensional Torch tensor of shape (batch_size, num_channels, sequence_length). To listen
to the generated audio samples, you can either play them in an ipynb notebook:
from IPython.display import Audio
sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
Or save them as a .wav file using a third-party library, e.g. scipy:
import scipy
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
Text-Conditional Generation
The model can generate an audio sample conditioned on a text prompt through use of the MusicgenProcessor to pre-process
the inputs:
from transformers import AutoProcessor, MusicgenForConditionalGeneration
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
inputs = processor(
... text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
... padding=True,
... return_tensors="pt",
... )
audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
The guidance_scale is used in classifier-free guidance (CFG), setting the weighting between the conditional logits
(which are predicted from the text prompts) and the unconditional logits (which are predicted from an unconditional or
‘null’ prompt). A higher guidance scale encourages the model to generate samples that are more closely linked to the input
prompt, usually at the expense of poorer audio quality. CFG is enabled by setting guidance_scale > 1. For best results,
use guidance_scale=3 (the default).
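As a rough schematic of the weighting described above (this is only an illustration, not the library's internal implementation), classifier-free guidance interpolates between the unconditional and conditional logits:
import torch

def cfg_combine(cond_logits: torch.Tensor, uncond_logits: torch.Tensor, guidance_scale: float) -> torch.Tensor:
    # guidance_scale == 1 reduces to the conditional logits; larger values push the
    # distribution further towards the text-conditioned prediction
    return uncond_logits + guidance_scale * (cond_logits - uncond_logits)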
Audio-Prompted Generation
The same MusicgenProcessor can be used to pre-process an audio prompt that is used for audio continuation. In the
following example, we load an audio file using the 🤗 Datasets library, which can be pip installed through the command
below:
pip install --upgrade pip
pip install datasets[audio]
from transformers import AutoProcessor, MusicgenForConditionalGeneration
from datasets import load_dataset
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
dataset = load_dataset("sanchit-gandhi/gtzan", split="train", streaming=True)
sample = next(iter(dataset))["audio"]
# take the first half of the audio sample
sample["array"] = sample["array"][: len(sample["array"]) // 2]
inputs = processor(
... audio=sample["array"],
... sampling_rate=sample["sampling_rate"],
... text=["80s blues track with groovy saxophone"],
... padding=True,
... return_tensors="pt",
... )
audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
For batched audio-prompted generation, the generated audio_values can be post-processed to remove padding by using the
MusicgenProcessor class:
from transformers import AutoProcessor, MusicgenForConditionalGeneration
from datasets import load_dataset
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
dataset = load_dataset("sanchit-gandhi/gtzan", split="train", streaming=True)
sample = next(iter(dataset))["audio"]
# take the first quarter of the audio sample
sample_1 = sample["array"][: len(sample["array"]) // 4]
# take the first half of the audio sample
sample_2 = sample["array"][: len(sample["array"]) // 2]
inputs = processor(
... audio=[sample_1, sample_2],
... sampling_rate=sample["sampling_rate"],
... text=["80s blues track with groovy saxophone", "90s rock song with loud guitars and heavy drums"],
... padding=True,
... return_tensors="pt",
... )
audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
# post-process to remove padding from the batched audio
audio_values = processor.batch_decode(audio_values, padding_mask=inputs.padding_mask)
Generation Configuration
The default parameters that control the generation process, such as sampling, guidance scale and number of generated
tokens, can be found in the model’s generation config, and updated as desired:
from transformers import MusicgenForConditionalGeneration
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
# inspect the default generation config
model.generation_config
# increase the guidance scale to 4.0
model.generation_config.guidance_scale = 4.0
# decrease the max length to 256 tokens
model.generation_config.max_length = 256
Note that any arguments passed to the generate method supersede those in the generation config: setting
do_sample=False in the call to generate overrides model.generation_config.do_sample for that call.
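For example, assuming the model and inputs from the text-conditional example above, a call-time argument wins over the stored default:
# do_sample=False here overrides model.generation_config.do_sample for this call only
audio_values = model.generate(**inputs, do_sample=False, max_new_tokens=256)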
Model Structure
The MusicGen model can be decomposed into three distinct stages:
Text encoder: maps the text inputs to a sequence of hidden-state representations. The pre-trained MusicGen models use a frozen text encoder from either T5 or Flan-T5.
MusicGen decoder: a language model (LM) that auto-regressively generates audio tokens (or codes) conditioned on the encoder hidden-state representations.
Audio encoder/decoder: used to encode an audio prompt into prompt tokens, and to recover the audio waveform from the audio tokens predicted by the decoder.
Thus, the MusicGen model can either be used as a standalone decoder model, corresponding to the class MusicgenForCausalLM,
or as a composite model that includes the text encoder and audio encoder/decoder, corresponding to the class
MusicgenForConditionalGeneration.
Since the text encoder and audio encoder/decoder models are frozen during training, the MusicGen decoder MusicgenForCausalLM
can be trained standalone on a dataset of encoder hidden-states and audio codes. For inference, the trained decoder can
be combined with the frozen text encoder and audio encoder/decoder to recover the composite MusicgenForConditionalGeneration
model.
Below, we demonstrate how to construct the composite MusicgenForConditionalGeneration model from its three constituent
parts, as would typically be done following training of the MusicGen decoder LM:
from transformers import AutoConfig, AutoModelForTextEncoding, AutoModel, MusicgenForCausalLM, MusicgenForConditionalGeneration
text_encoder = AutoModelForTextEncoding.from_pretrained("t5-base")
audio_encoder = AutoModel.from_pretrained("facebook/encodec_32khz")
decoder_config = AutoConfig.from_pretrained("facebook/musicgen-small").decoder
decoder = MusicgenForCausalLM.from_pretrained("facebook/musicgen-small", **decoder_config)
model = MusicgenForConditionalGeneration(text_encoder=text_encoder, audio_encoder=audio_encoder, decoder=decoder)
If only the decoder needs to be loaded from the pre-trained checkpoint for the composite model, it can be loaded by first
specifying the correct config, or be accessed through the .decoder attribute of the composite model:
from transformers import AutoConfig, MusicgenForCausalLM, MusicgenForConditionalGeneration
# Option 1: get decoder config and pass to `.from_pretrained`
decoder_config = AutoConfig.from_pretrained("facebook/musicgen-small").decoder
decoder = MusicgenForCausalLM.from_pretrained("facebook/musicgen-small", **decoder_config)
# Option 2: load the entire composite model, but only return the decoder
decoder = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small").decoder
Tips:
MusicGen is trained on the 32kHz checkpoint of Encodec. You should ensure you use a compatible version of the Encodec model.
Sampling mode tends to deliver better results than greedy decoding - you can toggle sampling with the do_sample argument in the call to MusicgenForConditionalGeneration.generate().
MusicgenDecoderConfig
class transformers.MusicgenDecoderConfig
(
vocab_size = 2048
max_position_embeddings = 2048
num_hidden_layers = 24
ffn_dim = 4096
num_attention_heads = 16
layerdrop = 0.0
use_cache = True
activation_function = 'gelu'
hidden_size = 1024
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
initializer_factor = 0.02
scale_embedding = False
num_codebooks = 4
pad_token_id = 2048
bos_token_id = 2048
eos_token_id = None
tie_word_embeddings = False
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 2048) —
Vocabulary size of the MusicgenDecoder model. Defines the number of different tokens that can be
represented by the input_ids passed when calling MusicgenDecoder.
hidden_size (int, optional, defaults to 1024) —
Dimensionality of the layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 24) —
Number of decoder layers.
num_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer block.
ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer block.
activation_function (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the decoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, text_encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
max_position_embeddings (int, optional, defaults to 2048) —
The maximum sequence length that this model might ever be used with. Typically, set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_factor (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
scale_embedding (bool, optional, defaults to False) —
Scale embeddings by dividing by sqrt(hidden_size).
use_cache (bool, optional, defaults to True) —
Whether the model should return the last key/value attentions (not used by all models).
num_codebooks (int, optional, defaults to 4) —
The number of parallel codebooks forwarded to the model.
tie_word_embeddings (bool, optional, defaults to False) —
Whether input and output word embeddings should be tied.
This is the configuration class to store the configuration of a MusicgenDecoder. It is used to instantiate a
MusicGen decoder according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the MusicGen
facebook/musicgen-small architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
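As a minimal sketch, a randomly initialised decoder can be built directly from this configuration (the hyperparameter values below are illustrative and deliberately smaller than the defaults, not those of any released checkpoint):
from transformers import MusicgenDecoderConfig, MusicgenForCausalLM

# illustrative, smaller-than-default hyperparameters
decoder_config = MusicgenDecoderConfig(
    num_hidden_layers=4,
    hidden_size=256,
    ffn_dim=1024,
    num_attention_heads=4,
)
decoder = MusicgenForCausalLM(decoder_config)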
MusicgenConfig
class transformers.MusicgenConfig
(
**kwargs
)
Parameters
kwargs (optional) —
Dictionary of keyword arguments. Notably:
text_encoder (PretrainedConfig, optional) — An instance of a configuration object that
defines the text encoder config.
audio_encoder (PretrainedConfig, optional) — An instance of a configuration object that
defines the audio encoder config.
decoder (PretrainedConfig, optional) — An instance of a configuration object that defines
the decoder config.
This is the configuration class to store the configuration of a MusicgenModel. It is used to instantiate a
MusicGen model according to the specified arguments, defining the text encoder, audio encoder and MusicGen decoder
configs.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import (
... MusicgenConfig,
... MusicgenDecoderConfig,
... T5Config,
... EncodecConfig,
... MusicgenForConditionalGeneration,
... )
# Initializing text encoder, audio encoder, and decoder model configurations
text_encoder_config = T5Config()
audio_encoder_config = EncodecConfig()
decoder_config = MusicgenDecoderConfig()
configuration = MusicgenConfig.from_sub_models_config(
... text_encoder_config, audio_encoder_config, decoder_config
... )
# Initializing a MusicgenForConditionalGeneration (with random weights) from the facebook/musicgen-small style configuration
model = MusicgenForConditionalGeneration(configuration)
# Accessing the model configuration
configuration = model.config
config_text_encoder = model.config.text_encoder
config_audio_encoder = model.config.audio_encoder
config_decoder = model.config.decoder
# Saving the model, including its configuration
model.save_pretrained("musicgen-model")
# loading model and config from pretrained folder
musicgen_config = MusicgenConfig.from_pretrained("musicgen-model")
model = MusicgenForConditionalGeneration.from_pretrained("musicgen-model", config=musicgen_config)
from_sub_models_config
(
text_encoder_config: PretrainedConfig
audio_encoder_config: PretrainedConfig
decoder_config: MusicgenDecoderConfig
**kwargs
)
→
MusicgenConfig
Returns
MusicgenConfig
An instance of a configuration object
Instantiate a MusicgenConfig (or a derived class) from text encoder, audio encoder and decoder
configurations.
to_dict
(
)
→
Dict[str, any]
Returns
Dict[str, any]
Dictionary of all the attributes that make up this configuration instance.
Serializes this instance to a Python dictionary. Overrides the default to_dict().
MusicgenProcessor
class transformers.MusicgenProcessor
(
feature_extractor
tokenizer
)
Parameters
feature_extractor (EncodecFeatureExtractor) —
An instance of EncodecFeatureExtractor. The feature extractor is a required input.
tokenizer (T5Tokenizer) —
An instance of T5Tokenizer. The tokenizer is a required input.
Constructs a MusicGen processor which wraps an EnCodec feature extractor and a T5 tokenizer into a single processor
class.
MusicgenProcessor offers all the functionalities of EncodecFeatureExtractor and T5Tokenizer. See
__call__() and decode() for more information.
batch_decode
(
*args
**kwargs
)
This method is used to decode either batches of audio outputs from the MusicGen model, or batches of token ids
from the tokenizer. In the case of decoding token ids, this method forwards all its arguments to T5Tokenizer’s
batch_decode(). Please refer to the docstring of this method for more information.
decode
(
*args
**kwargs
)
This method forwards all its arguments to T5Tokenizer’s decode(). Please refer to the
docstring of this method for more information.
MusicgenModel
class transformers.MusicgenModel
(
config: MusicgenDecoderConfig
)
Parameters
config (MusicgenDecoderConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Musicgen decoder model outputting raw hidden-states without any specific head on top.
The Musicgen model was proposed in Simple and Controllable Music Generation by
Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez. It is an
encoder-decoder transformer trained on the task of conditional music generation.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size * num_codebooks, sequence_length)) —
Indices of input sequence tokens in the vocabulary, corresponding to the sequence of audio codes.
Indices can be obtained by encoding an audio prompt with an audio encoder model to predict audio codes,
such as with the EncodecModel. See EncodecModel.encode() for details.
What are input IDs?
The input_ids will automatically be converted from shape (batch_size * num_codebooks, target_sequence_length) to (batch_size, num_codebooks, target_sequence_length) in the forward pass. If
you obtain audio codes from an audio encoding model, such as EncodecModel, ensure that the number of
frames is equal to 1, and that you reshape the audio codes from (frames, batch_size, num_codebooks, target_sequence_length) to (batch_size * num_codebooks, target_sequence_length) prior to passing them as
input_ids.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_hidden_states (torch.FloatTensor of shape (batch_size, encoder_sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of
the decoder.
encoder_attention_mask (torch.LongTensor of shape (batch_size, encoder_sequence_length), optional) —
Mask to avoid performing cross-attention on padding tokens indices of encoder input_ids. Mask values
selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing
cross-attention on hidden heads. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
The MusicgenModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
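As noted in the input_ids description above, audio codes obtained from EncodecModel.encode() have to be reshaped before being passed to the decoder. Below is a minimal sketch of that reshaping, assuming raw_audio is a pre-loaded 1-D waveform array sampled at the encoder's sampling rate:
from transformers import AutoProcessor, EncodecModel

audio_encoder = EncodecModel.from_pretrained("facebook/encodec_32khz")
audio_processor = AutoProcessor.from_pretrained("facebook/encodec_32khz")

# `raw_audio` is an assumed pre-loaded 1-D waveform array
encoded = audio_processor(raw_audio=raw_audio, sampling_rate=audio_processor.sampling_rate, return_tensors="pt")
encoder_outputs = audio_encoder.encode(encoded["input_values"], encoded["padding_mask"])

audio_codes = encoder_outputs.audio_codes  # (frames, batch_size, num_codebooks, seq_len)
frames, bsz, num_codebooks, seq_len = audio_codes.shape
# MusicGen expects a single frame; flatten to (batch_size * num_codebooks, seq_len)
input_ids = audio_codes[0, ...].reshape(bsz * num_codebooks, seq_len)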
MusicgenForCausalLM
class transformers.MusicgenForCausalLM
(
config: MusicgenDecoderConfig
)
Parameters
config (MusicgenDecoderConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The MusicGen decoder model with a language modelling head on top.
The Musicgen model was proposed in Simple and Controllable Music Generation by
Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez. It is an
encoder-decoder transformer trained on the task of conditional music generation.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size * num_codebooks, sequence_length)) —
Indices of input sequence tokens in the vocabulary, corresponding to the sequence of audio codes.
Indices can be obtained by encoding an audio prompt with an audio encoder model to predict audio codes,
such as with the EncodecModel. See EncodecModel.encode() for details.
What are input IDs?
The input_ids will automatically be converted from shape (batch_size * num_codebooks, target_sequence_length) to (batch_size, num_codebooks, target_sequence_length) in the forward pass. If
you obtain audio codes from an audio encoding model, such as EncodecModel, ensure that the number of
frames is equal to 1, and that you reshape the audio codes from (frames, batch_size, num_codebooks, target_sequence_length) to (batch_size * num_codebooks, target_sequence_length) prior to passing them as
input_ids.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_hidden_states (torch.FloatTensor of shape (batch_size, encoder_sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of
the decoder.
encoder_attention_mask (torch.LongTensor of shape (batch_size, encoder_sequence_length), optional) —
Mask to avoid performing cross-attention on padding tokens indices of encoder input_ids. Mask values
selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing
cross-attention on hidden heads. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100
are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MusicgenConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The MusicgenForCausalLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
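As a rough sketch (not an official recipe), a standalone forward pass of the decoder can be run by building a dummy first decoding step from the pad token; the expected logits shape mirrors the composite-model example further below:
import torch
from transformers import MusicgenForConditionalGeneration

# load the composite model and keep only its decoder (see the loading options above)
decoder = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small").decoder

batch_size = 2
num_codebooks = decoder.config.num_codebooks
input_ids = torch.full((batch_size * num_codebooks, 1), decoder.config.pad_token_id, dtype=torch.long)
logits = decoder(input_ids=input_ids).logits
logits.shape  # expected: (batch_size * num_codebooks, 1, vocab_size)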
MusicgenForConditionalGeneration
class transformers.MusicgenForConditionalGeneration
(
config: typing.Optional[transformers.models.musicgen.configuration_musicgen.MusicgenConfig] = None
text_encoder: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None
audio_encoder: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None
decoder: typing.Optional[transformers.models.musicgen.modeling_musicgen.MusicgenForCausalLM] = None
)
Parameters
config (MusicgenConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The composite MusicGen model with a text encoder, audio encoder and MusicGen decoder, for music generation tasks with one or both of text and audio prompts.
The Musicgen model was proposed in Simple and Controllable Music Generation by
Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez. It is an
encoder-decoder transformer trained on the task of conditional music generation.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.BoolTensor] = None
input_values: typing.Optional[torch.FloatTensor] = None
padding_mask: typing.Optional[torch.BoolTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
encoder_outputs: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
past_key_values: typing.Tuple[typing.Tuple[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size * num_codebooks, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary, corresponding to the sequence of audio codes.
Indices can be obtained by encoding an audio prompt with an audio encoder model to predict audio codes,
such as with the EncodecModel. See EncodecModel.encode() for details.
What are decoder input IDs?
The decoder_input_ids will automatically be converted from shape (batch_size * num_codebooks, target_sequence_length) to (batch_size, num_codebooks, target_sequence_length) in the forward pass. If
you obtain audio codes from an audio encoding model, such as EncodecModel, ensure that the number of
frames is equal to 1, and that you reshape the audio codes from (frames, batch_size, num_codebooks, target_sequence_length) to (batch_size * num_codebooks, target_sequence_length) prior to passing them as
decoder_input_ids.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MusicgenConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The MusicgenForConditionalGeneration forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoProcessor, MusicgenForConditionalGeneration
import torch
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
inputs = processor(
... text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
... padding=True,
... return_tensors="pt",
... )
pad_token_id = model.generation_config.pad_token_id
decoder_input_ids = (
... torch.ones((inputs.input_ids.shape[0] * model.decoder.num_codebooks, 1), dtype=torch.long)
... * pad_token_id
... )
logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits
logits.shape # (bsz * num_codebooks, tgt_len, vocab_size)
torch.Size([8, 1, 2048])
SwiftFormer
Overview
The SwiftFormer model was proposed in SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications by Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan.
The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called ‘SwiftFormer’ is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2.
The abstract from the paper is the following:
Self-attention has become a defacto choice for capturing global context in various vision applications. However, its quadratic computational complexity with respect to image resolution limits its use in real-time applications, especially for deployment on resource-constrained mobile devices. Although hybrid approaches have been proposed to combine the advantages of convolutions and self-attention for a better speed-accuracy trade-off, the expensive matrix multiplication operations in self-attention remain a bottleneck. In this work, we introduce a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations with linear element-wise multiplications. Our design shows that the key-value interaction can be replaced with a linear layer without sacrificing any accuracy. Unlike previous state-of-the-art methods, our efficient formulation of self-attention enables its usage at all stages of the network. Using our proposed efficient additive attention, we build a series of models called “SwiftFormer” which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Our small variant achieves 78.5% top-1 ImageNet-1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2x faster compared to MobileViT-v2.
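To make the idea concrete, below is a rough, self-contained PyTorch schematic of an additive-attention style token mixer with linear complexity in the number of tokens. It follows the description in the abstract (element-wise interactions and a linear layer in place of the quadratic query-key matmul), but it is not the exact SwiftFormer module from the paper or from the Transformers implementation:
import torch
import torch.nn as nn

class ToyAdditiveAttention(nn.Module):
    """Schematic linear-complexity token mixer; NOT the exact SwiftFormer module."""

    def __init__(self, dim: int):
        super().__init__()
        self.to_query = nn.Linear(dim, dim)
        self.to_key = nn.Linear(dim, dim)
        self.scoring = nn.Linear(dim, 1, bias=False)  # learned per-token score
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, tokens, dim)
        queries = self.to_query(x)
        keys = self.to_key(x)
        # per-token scores cost O(tokens), not O(tokens^2)
        weights = torch.softmax(self.scoring(queries) / queries.shape[-1] ** 0.5, dim=1)
        global_query = (weights * queries).sum(dim=1, keepdim=True)  # (batch, 1, dim)
        # element-wise query-key interaction followed by a linear projection
        return x + self.proj(global_query * keys)

mixer = ToyAdditiveAttention(dim=64)
out = mixer(torch.randn(2, 49, 64))  # output has the same shape as the input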
Tips:
One can use the ViTImageProcessor API to prepare images for the model.
This model was contributed by shehan97.
The original code can be found here.
SwiftFormerConfig
class transformers.SwiftFormerConfig
(
num_channels = 3
depths = [3, 3, 6, 4]
embed_dims = [48, 56, 112, 220]
mlp_ratio = 4
downsamples = [True, True, True, True]
hidden_act = 'gelu'
down_patch_size = 3
down_stride = 2
down_pad = 1
drop_path_rate = 0.0
use_layer_scale = True
layer_scale_init_value = 1e-05
batch_norm_eps = 1e-05
**kwargs
)
Parameters
num_channels (int, optional, defaults to 3) —
The number of input channels
depths (List[int], optional, defaults to [3, 3, 6, 4]) —
Depth of each stage
embed_dims (List[int], optional, defaults to [48, 56, 112, 220]) —
The embedding dimension at each stage
mlp_ratio (int, optional, defaults to 4) —
Ratio of size of the hidden dimensionality of an MLP to the dimensionality of its input.
downsamples (List[bool], optional, defaults to [True, True, True, True]) —
Whether or not to downsample inputs between two stages.
hidden_act (str, optional, defaults to "gelu") —
The non-linear activation function (string). "gelu", "relu", "selu" and "gelu_new" are supported.
down_patch_size (int, optional, defaults to 3) —
The size of patches in downsampling layers.
down_stride (int, optional, defaults to 2) —
The stride of convolution kernels in downsampling layers.
down_pad (int, optional, defaults to 1) —
Padding in downsampling layers.
drop_path_rate (float, optional, defaults to 0.0) —
Rate at which to increase dropout probability in DropPath.
use_layer_scale (bool, optional, defaults to True) —
Whether to scale outputs from token mixers.
layer_scale_init_value (float, optional, defaults to 1e-5) —
Factor by which outputs from token mixers are scaled.
batch_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the batch normalization layers.
This is the configuration class to store the configuration of a SwiftFormerModel. It is used to instantiate a
SwiftFormer model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the SwiftFormer
MBZUAI/swiftformer-xs architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import SwiftFormerConfig, SwiftFormerModel
# Initializing a SwiftFormer swiftformer-xs style configuration
configuration = SwiftFormerConfig()
# Initializing a model (with random weights) from the swiftformer-xs style configuration
model = SwiftFormerModel(configuration)
# Accessing the model configuration
configuration = model.config
SwiftFormerModel
class transformers.SwiftFormerModel
(
config: SwiftFormerConfig
)
Parameters
config (SwiftFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare SwiftFormer Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.__call__()
for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SwiftFormerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The SwiftFormerModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, SwiftFormerModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("MBZUAI/swiftformer-xs")
model = SwiftFormerModel.from_pretrained("MBZUAI/swiftformer-xs")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 220, 7, 7]
SwiftFormerForImageClassification
class transformers.SwiftFormerForImageClassification
(
config: SwiftFormerConfig
)
Parameters
config (SwiftFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
SwiftFormer Model transformer with an image classification head on top (e.g. for ImageNet).
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.__call__()
for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SwiftFormerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also
called feature maps) of the model at the output of each stage.
The SwiftFormerForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, SwiftFormerForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("MBZUAI/swiftformer-xs")
model = SwiftFormerForImageClassification.from_pretrained("MBZUAI/swiftformer-xs")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tabby, tabby cat
EfficientFormer
Overview
The EfficientFormer model was proposed in EfficientFormer: Vision Transformers at MobileNet Speed
by Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren. EfficientFormer proposes a
dimension-consistent pure transformer that can be run on mobile devices for dense prediction tasks like image classification, object
detection and semantic segmentation.
The abstract from the paper is the following:
Vision Transformers (ViT) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks.
However, due to the massive number of parameters and model design, e.g., attention mechanism, ViT-based models are generally
times slower than lightweight convolutional networks. Therefore, the deployment of ViT for real-time applications is particularly
challenging, especially on resource-constrained hardware such as mobile devices. Recent efforts try to reduce the computation
complexity of ViT through network architecture search or hybrid design with MobileNet block, yet the inference speed is still
unsatisfactory. This leads to an important question: can transformers run as fast as MobileNet while obtaining high performance?
To answer this, we first revisit the network architecture and operators used in ViT-based models and identify inefficient designs.
Then we introduce a dimension-consistent pure transformer (without MobileNet blocks) as a design paradigm.
Finally, we perform latency-driven slimming to get a series of final models dubbed EfficientFormer.
Extensive experiments show the superiority of EfficientFormer in performance and speed on mobile devices.
Our fastest model, EfficientFormer-L1, achieves 79.2% top-1 accuracy on ImageNet-1K with only 1.6 ms inference latency on
iPhone 12 (compiled with CoreML), which runs as fast as MobileNetV2×1.4 (1.6 ms, 74.7% top-1), and our largest model,
EfficientFormer-L7, obtains 83.3% accuracy with only 7.0 ms latency. Our work proves that properly designed transformers can
reach extremely low latency on mobile devices while maintaining high performance.
This model was contributed by novice03 and Bearnardd.
The original code can be found here. The TensorFlow version of this model was added by D-Roberts.
Documentation resources
Image classification task guide
EfficientFormerConfig
class transformers.EfficientFormerConfig
(
depths: typing.List[int] = [3, 2, 6, 4]
hidden_sizes: typing.List[int] = [48, 96, 224, 448]
downsamples: typing.List[bool] = [True, True, True, True]
dim: int = 448
key_dim: int = 32
attention_ratio: int = 4
resolution: int = 7
num_hidden_layers: int = 5
num_attention_heads: int = 8
mlp_expansion_ratio: int = 4
hidden_dropout_prob: float = 0.0
patch_size: int = 16
num_channels: int = 3
pool_size: int = 3
downsample_patch_size: int = 3
downsample_stride: int = 2
downsample_pad: int = 1
drop_path_rate: float = 0.0
num_meta3d_blocks: int = 1
distillation: bool = True
use_layer_scale: bool = True
layer_scale_init_value: float = 1e-05
hidden_act: str = 'gelu'
initializer_range: float = 0.02
layer_norm_eps: float = 1e-12
image_size: int = 224
batch_norm_eps: float = 1e-05
**kwargs
)
Parameters
depths (List(int), optional, defaults to [3, 2, 6, 4]) —
Depth of each stage.
hidden_sizes (List(int), optional, defaults to [48, 96, 224, 448]) —
Dimensionality of each stage.
downsamples (List(bool), optional, defaults to [True, True, True, True]) —
Whether or not to downsample inputs between two stages.
dim (int, optional, defaults to 448) —
Number of channels in Meta3D layers
key_dim (int, optional, defaults to 32) —
The size of the key in meta3D block.
attention_ratio (int, optional, defaults to 4) —
Ratio of the dimension of the query and value to the dimension of the key in MSHA block
resolution (int, optional, defaults to 7) —
Size of each patch
num_hidden_layers (int, optional, defaults to 5) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the 3D MetaBlock.
mlp_expansion_ratio (int, optional, defaults to 4) —
Ratio of size of the hidden dimensionality of an MLP to the dimensionality of its input.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings and encoder.
patch_size (int, optional, defaults to 16) —
The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) —
The number of input channels.
pool_size (int, optional, defaults to 3) —
Kernel size of pooling layers.
downsample_patch_size (int, optional, defaults to 3) —
The size of patches in downsampling layers.
downsample_stride (int, optional, defaults to 2) —
The stride of convolution kernels in downsampling layers.
downsample_pad (int, optional, defaults to 1) —
Padding in downsampling layers.
drop_path_rate (float, optional, defaults to 0.0) —
Rate at which to increase dropout probability in DropPath.
num_meta3d_blocks (int, optional, defaults to 1) —
The number of 3D MetaBlocks in the last stage.
distillation (bool, optional, defaults to True) —
Whether to add a distillation head.
use_layer_scale (bool, optional, defaults to True) —
Whether to scale outputs from token mixers.
layer_scale_init_value (float, optional, defaults to 1e-5) —
Factor by which outputs from token mixers are scaled.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
This is the configuration class to store the configuration of an EfficientFormerModel. It is used to
instantiate an EfficientFormer model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the EfficientFormer
snap-research/efficientformer-l1 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
Copied
from transformers import EfficientFormerConfig, EfficientFormerModel
# Initializing an EfficientFormer efficientformer-l1 style configuration
configuration = EfficientFormerConfig()
# Initializing an EfficientFormerModel (with random weights) from the efficientformer-l1 style configuration
model = EfficientFormerModel(configuration)
# Accessing the model configuration
configuration = model.config
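The stage-level arguments documented above can also be overridden to sketch a custom variant. The configuration below is purely illustrative (it does not correspond to an official EfficientFormer checkpoint), and the resulting model is randomly initialized:
from transformers import EfficientFormerConfig, EfficientFormerModel
# Illustrative, non-official values: a narrower four-stage layout
custom_configuration = EfficientFormerConfig(
    depths=[2, 2, 4, 2],              # number of blocks per stage
    hidden_sizes=[32, 64, 160, 320],  # channel dimensionality per stage
    dim=320,                          # kept equal to the last entry of hidden_sizes here
)
custom_model = EfficientFormerModel(custom_configuration)  # randomly initialized weights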
EfficientFormerImageProcessor
class transformers.EfficientFormerImageProcessor
<
source
>
(
do_resize: bool = True
size: typing.Union[typing.Dict[str, int], NoneType] = None
resample: Resampling = <Resampling.BICUBIC: 3>
do_center_crop: bool = True
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
crop_size: typing.Dict[str, int] = None
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified (size["height"], size["width"]). Can be overridden by the do_resize parameter in the preprocess method.
size (dict, optional, defaults to {"height" -- 224, "width": 224}):
Size of the output image after resizing. Can be overridden by the size parameter in the preprocess
method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) —
Resampling filter to use if resizing the image. Can be overridden by the resample parameter in the
preprocess method.
do_center_crop (bool, optional, defaults to True) —
Whether to center crop the image to the specified crop_size. Can be overridden by do_center_crop in the
preprocess method.
crop_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) —
Size of the output image after applying center_crop. Can be overridden by crop_size in the preprocess
method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale
parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the
preprocess method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess
method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
Constructs an EfficientFormer image processor.
preprocess
<
source
>
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: typing.Optional[bool] = None
size: typing.Dict[str, int] = None
resample: Resampling = None
do_center_crop: bool = None
crop_size: int = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Optional[float] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Dictionary in the format {"height": h, "width": w} specifying the size of the output image after
resizing.
resample (PILImageResampling filter, optional, defaults to self.resample) —
PILImageResampling filter to use if resizing the image e.g. PILImageResampling.BILINEAR. Only has
an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values between [0 - 1].
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the center crop. Only has an effect if do_center_crop is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean to use if do_normalize is set to True.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation to use if do_normalize is set to True.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
Unset: Use the channel dimension format of the input image.
Preprocess an image or batch of images.
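As a quick illustration of the options above, a minimal sketch that runs the processor on a synthetic image; the random array simply stands in for a real photo, and the printed shape assumes the default 224x224 crop:
import numpy as np
from transformers import EfficientFormerImageProcessor
image_processor = EfficientFormerImageProcessor(
    do_resize=True,
    size={"height": 224, "width": 224},
    do_center_crop=True,
    crop_size={"height": 224, "width": 224},
)
# Dummy RGB image in (height, width, num_channels) format
image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
inputs = image_processor(image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])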
EfficientFormerModel
class transformers.EfficientFormerModel
<
source
>
(
config: EfficientFormerConfig
)
Parameters
config (EfficientFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare EfficientFormer Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch nn.Module subclass. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
EfficientFormerImageProcessor.preprocess() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (EfficientFormerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The EfficientFormerModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoImageProcessor, EfficientFormerModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("snap-research/efficientformer-l1-300")
model = EfficientFormerModel.from_pretrained("snap-research/efficientformer-l1-300")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 49, 448]
EfficientFormerForImageClassification
class transformers.EfficientFormerForImageClassification
<
source
>
(
config: EfficientFormerConfig
)
Parameters
config (EfficientFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
EfficientFormer Model transformer with an image classification head on top (a linear layer on top of the final
hidden state of the [CLS] token) e.g. for ImageNet.
This model is a PyTorch nn.Module subclass. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
EfficientFormerImageProcessor.preprocess() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (EfficientFormerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states
(also called feature maps) of the model at the output of each stage.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The EfficientFormerForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoImageProcessor, EfficientFormerForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("snap-research/efficientformer-l1-300")
model = EfficientFormerForImageClassification.from_pretrained("snap-research/efficientformer-l1-300")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
Egyptian cat
EfficientFormerForImageClassificationWithTeacher
class transformers.EfficientFormerForImageClassificationWithTeacher
<
source
>
(
config: EfficientFormerConfig
)
Parameters
config (EfficientFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
EfficientFormer Model transformer with image classification heads on top (a linear layer on top of the final hidden
state of the [CLS] token and a linear layer on top of the final hidden state of the distillation token) e.g. for
ImageNet.
This model supports inference-only. Fine-tuning with distillation (i.e. with a teacher) is not yet
supported.
This model is a PyTorch nn.Module subclass. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.efficientformer.modeling_efficientformer.EfficientFormerForImageClassificationWithTeacherOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
EfficientFormerImageProcessor.preprocess() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.efficientformer.modeling_efficientformer.EfficientFormerForImageClassificationWithTeacherOutput or tuple(torch.FloatTensor)
A transformers.models.efficientformer.modeling_efficientformer.EfficientFormerForImageClassificationWithTeacherOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (EfficientFormerConfig) and inputs.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Prediction scores as the average of the cls_logits and distillation logits.
cls_logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Prediction scores of the classification head (i.e. the linear layer on top of the final hidden state of the
class token).
distillation_logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Prediction scores of the distillation head (i.e. the linear layer on top of the final hidden state of the
distillation token).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The EfficientFormerForImageClassificationWithTeacher forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoImageProcessor, EfficientFormerForImageClassificationWithTeacher
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("snap-research/efficientformer-l1-300")
model = EfficientFormerForImageClassificationWithTeacher.from_pretrained("snap-research/efficientformer-l1-300")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
Egyptian cat
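Continuing the example above, the returned output class also exposes the two individual heads documented in the Returns section, which can be inspected separately:
with torch.no_grad():
...     outputs = model(**inputs)
# `logits` is the average of the two heads below; all three have shape (batch_size, config.num_labels)
print(outputs.logits.shape)
print(outputs.cls_logits.shape)           # classification head
print(outputs.distillation_logits.shape)  # distillation head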
TFEfficientFormerModel
class transformers.TFEfficientFormerModel
<
source
>
(
*args
**kwargs
)
Parameters
config (EfficientFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare EfficientFormer Model transformer outputting raw hidden-states without any specific head on top.
This model is a TensorFlow
tf.keras.layers.Layer. Use it as a regular
TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and behavior.
call
<
source
>
(
pixel_values: typing.Optional[tensorflow.python.framework.ops.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
EfficientFormerImageProcessor.call() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (EfficientFormerConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input; you're often better off
averaging or pooling the sequence of hidden-states for the whole input sequence.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFEfficientFormerModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoImageProcessor, TFEfficientFormerModel
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("snap-research/efficientformer-l1-300")
model = TFEfficientFormerModel.from_pretrained("snap-research/efficientformer-l1-300")
inputs = image_processor(image, return_tensors="tf")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 49, 448]
TFEfficientFormerForImageClassification
class transformers.TFEfficientFormerForImageClassification
<
source
>
(
*args
**kwargs
)
Parameters
config (EfficientFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
EfficientFormer Model transformer with an image classification head on top of pooled last hidden state, e.g. for
ImageNet.
This model is a TensorFlow
tf.keras.layers.Layer. Use it as a regular
TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and behavior.
call
<
source
>
(
pixel_values: typing.Optional[tensorflow.python.framework.ops.Tensor] = None
labels: typing.Optional[tensorflow.python.framework.ops.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFImageClassifierOutput or tuple(tf.Tensor)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
EfficientFormerImageProcessor.call() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFImageClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFImageClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (EfficientFormerConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for
the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states (also called
feature maps) of the model at the output of each stage.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFEfficientFormerForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoImageProcessor, TFEfficientFormerForImageClassification
import tensorflow as tf
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("snap-research/efficientformer-l1-300")
model = TFEfficientFormerForImageClassification.from_pretrained("snap-research/efficientformer-l1-300")
inputs = image_processor(image, return_tensors="tf")
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = int(tf.math.argmax(logits, axis=-1))
print(model.config.id2label[predicted_label])
LABEL_281
TFEfficientFormerForImageClassificationWithTeacher
class transformers.TFEfficientFormerForImageClassificationWithTeacher
<
source
>
(
*args
**kwargs
)
Parameters
config (EfficientFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
EfficientFormer Model transformer with image classification heads on top (a linear layer on top of the final hidden
state and a linear layer on top of the final hidden state of the distillation token) e.g. for ImageNet.
This model supports inference-only. Fine-tuning with distillation (i.e. with a teacher) is not yet
supported.
This model is a TensorFlow
tf.keras.layers.Layer. Use it as a regular
TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and behavior.
call
<
source
>
(
pixel_values: typing.Optional[tensorflow.python.framework.ops.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
training: bool = False
)
→
transformers.models.efficientformer.modeling_tf_efficientformer.TFEfficientFormerForImageClassificationWithTeacherOutput or tuple(tf.Tensor)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
EfficientFormerImageProcessor.call() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.efficientformer.modeling_tf_efficientformer.TFEfficientFormerForImageClassificationWithTeacherOutput or tuple(tf.Tensor)
A transformers.models.efficientformer.modeling_tf_efficientformer.TFEfficientFormerForImageClassificationWithTeacherOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (EfficientFormerConfig) and inputs.
The TFEfficientFormerForImageClassificationWithTeacher forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Output type of EfficientFormerForImageClassificationWithTeacher.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Prediction scores as the average of the cls_logits and distillation logits.
cls_logits (tf.Tensor of shape (batch_size, config.num_labels)) — Prediction scores of the classification head (i.e. the linear layer on top of the final hidden state of the
class token).
distillation_logits (tf.Tensor of shape (batch_size, config.num_labels)) — Prediction scores of the distillation head (i.e. the linear layer on top of the final hidden state of the
distillation token).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when
config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus
the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when
config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
Example:
Copied
from transformers import AutoImageProcessor, TFEfficientFormerForImageClassificationWithTeacher
import tensorflow as tf
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("snap-research/efficientformer-l1-300")
model = TFEfficientFormerForImageClassificationWithTeacher.from_pretrained("snap-research/efficientformer-l1-300")
inputs = image_processor(image, return_tensors="tf")
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = int(tf.math.argmax(logits, axis=-1))
print(model.config.id2label[predicted_label])
LABEL_281
Audio Spectrogram Transformer
Overview
The Audio Spectrogram Transformer model was proposed in AST: Audio Spectrogram Transformer by Yuan Gong, Yu-An Chung, James Glass.
The Audio Spectrogram Transformer applies a Vision Transformer to audio, by turning audio into an image (spectrogram). The model obtains state-of-the-art results
for audio classification.
The abstract from the paper is the following:
In the past decade, convolutional neural networks (CNNs) have been widely adopted as the main building block for end-to-end audio classification models, which aim to learn a direct mapping from audio spectrograms to corresponding labels. To better capture long-range global context, a recent trend is to add a self-attention mechanism on top of the CNN, forming a CNN-attention hybrid model. However, it is unclear whether the reliance on a CNN is necessary, and if neural networks purely based on attention are sufficient to obtain good performance in audio classification. In this paper, we answer the question by introducing the Audio Spectrogram Transformer (AST), the first convolution-free, purely attention-based model for audio classification. We evaluate AST on various audio classification benchmarks, where it achieves new state-of-the-art results of 0.485 mAP on AudioSet, 95.6% accuracy on ESC-50, and 98.1% accuracy on Speech Commands V2.
Tips:
When fine-tuning the Audio Spectrogram Transformer (AST) on your own dataset, it’s recommended to take care of the input normalization (to make
sure the input has mean of 0 and std of 0.5). ASTFeatureExtractor takes care of this. Note that it uses the AudioSet
mean and std by default. You can check ast/src/get_norm_stats.py to see how
the authors compute the stats for a downstream dataset; a minimal sketch of passing custom statistics to ASTFeatureExtractor is shown after these tips.
Note that the AST needs a low learning rate (the authors use a 10 times smaller learning rate compared to their CNN model proposed in the
PSLA paper) and converges quickly, so please search for a suitable learning rate and learning rate scheduler for your task.
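Relating to the normalization tip above, a minimal sketch of overriding the AudioSet statistics with dataset-specific values; the mean and std below are placeholders you would replace with numbers computed on your own training split:
import numpy as np
from transformers import ASTFeatureExtractor
# Placeholder statistics: compute these on the log-Mel features of your own training data
dataset_mean = -6.85
dataset_std = 5.42
feature_extractor = ASTFeatureExtractor(mean=dataset_mean, std=dataset_std)
# One second of dummy mono audio at the expected 16 kHz sampling rate
waveform = np.random.randn(16000).astype(np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
print(inputs["input_values"].shape)  # torch.Size([1, 1024, 128])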
Audio Spectrogram Transformer architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with the Audio Spectrogram Transformer.
Audio Classification
A notebook illustrating inference with AST for audio classification can be found here.
ASTForAudioClassification is supported by this example script and notebook.
See also: Audio classification.
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ASTConfig
class transformers.ASTConfig
<
source
>
(
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
initializer_range = 0.02
layer_norm_eps = 1e-12
patch_size = 16
qkv_bias = True
frequency_stride = 10
time_stride = 10
max_length = 1024
num_mel_bins = 128
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
patch_size (int, optional, defaults to 16) —
The size (resolution) of each patch.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries, keys and values.
frequency_stride (int, optional, defaults to 10) —
Frequency stride to use when patchifying the spectrograms.
time_stride (int, optional, defaults to 10) —
Temporal stride to use when patchifying the spectrograms.
max_length (int, optional, defaults to 1024) —
Temporal dimension of the spectrograms.
num_mel_bins (int, optional, defaults to 128) —
Frequency dimension of the spectrograms (number of Mel-frequency bins).
This is the configuration class to store the configuration of an ASTModel. It is used to instantiate an AST
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the AST
MIT/ast-finetuned-audioset-10-10-0.4593
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
Copied
from transformers import ASTConfig, ASTModel
# Initializing an AST MIT/ast-finetuned-audioset-10-10-0.4593 style configuration
configuration = ASTConfig()
# Initializing a model (with random weights) from the MIT/ast-finetuned-audioset-10-10-0.4593 style configuration
model = ASTModel(configuration)
# Accessing the model configuration
configuration = model.config
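If you change the spectrogram dimensions in the configuration, the feature extractor has to produce matching inputs. The values below are illustrative only and do not correspond to a released checkpoint:
from transformers import ASTConfig, ASTModel, ASTFeatureExtractor
# Illustrative values: shorter clips and fewer Mel bins than the defaults
custom_configuration = ASTConfig(max_length=512, num_mel_bins=64)
custom_model = ASTModel(custom_configuration)  # randomly initialized weights
# The feature extractor must be configured with the same temporal and frequency dimensions
custom_feature_extractor = ASTFeatureExtractor(max_length=512, num_mel_bins=64)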
ASTFeatureExtractor
class transformers.ASTFeatureExtractor
<
source
>
(
feature_size = 1
sampling_rate = 16000
num_mel_bins = 128
max_length = 1024
padding_value = 0.0
do_normalize = True
mean = -4.2677393
std = 4.5689974
return_attention_mask = False
**kwargs
)
Parameters
feature_size (int, optional, defaults to 1) —
The feature dimension of the extracted features.
sampling_rate (int, optional, defaults to 16000) —
The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
num_mel_bins (int, optional, defaults to 128) —
Number of Mel-frequency bins.
max_length (int, optional, defaults to 1024) —
Maximum length to which to pad/truncate the extracted features.
do_normalize (bool, optional, defaults to True) —
Whether or not to normalize the log-Mel features using mean and std.
mean (float, optional, defaults to -4.2677393) —
The mean value used to normalize the log-Mel features. Uses the AudioSet mean by default.
std (float, optional, defaults to 4.5689974) —
The standard deviation value used to normalize the log-Mel features. Uses the AudioSet standard deviation
by default.
return_attention_mask (bool, optional, defaults to False) —
Whether or not call() should return attention_mask.
Constructs an Audio Spectrogram Transformer (AST) feature extractor.
This feature extractor inherits from SequenceFeatureExtractor which contains
most of the main methods. Users should refer to this superclass for more information regarding those methods.
This class extracts mel-filter bank features from raw speech using TorchAudio, pads/truncates them to a fixed
length and normalizes them using a mean and standard deviation.
__call__
<
source
>
(
raw_speech: typing.Union[numpy.ndarray, typing.List[float], typing.List[numpy.ndarray], typing.List[typing.List[float]]]
sampling_rate: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
**kwargs
)
Parameters
raw_speech (np.ndarray, List[float], List[np.ndarray], List[List[float]]) —
The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float
values, a list of numpy arrays or a list of list of float values. Must be mono channel audio, not
stereo, i.e. single float per timestep.
sampling_rate (int, optional) —
The sampling rate at which the raw_speech input was sampled. It is strongly recommended to pass
sampling_rate at the forward call to prevent silent errors.
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
Main method to featurize and prepare for the model one or several sequence(s).
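A brief sketch of featurizing a small batch of raw waveforms; the synthetic audio below stands in for real recordings, and the printed shape assumes the default max_length=1024 and num_mel_bins=128:
import numpy as np
from transformers import ASTFeatureExtractor
feature_extractor = ASTFeatureExtractor()  # AudioSet mean/std, 16 kHz, 128 Mel bins by default
# Two dummy mono clips of different lengths (in samples at 16 kHz)
raw_speech = [np.random.randn(16000).astype(np.float32), np.random.randn(24000).astype(np.float32)]
features = feature_extractor(raw_speech, sampling_rate=16000, return_tensors="np")
print(features["input_values"].shape)  # (2, 1024, 128): each clip is padded/truncated to max_length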
ASTModel
class transformers.ASTModel
<
source
>
(
config: ASTConfig
)
Parameters
config (ASTConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare AST Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
input_values: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, max_length, num_mel_bins)) —
Float values of mel features extracted from the raw audio waveform. Raw audio waveform can be obtained by
loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via
the soundfile library (pip install soundfile). To prepare the array into input_features, the
AutoFeatureExtractor should be used for extracting the mel features, padding and conversion into a
tensor of type torch.FloatTensor. See ASTFeatureExtractor.__call__() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ASTConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ASTModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoProcessor, ASTModel
import torch
from datasets import load_dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")
model = ASTModel.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 1214, 768]
ASTForAudioClassification
class transformers.ASTForAudioClassification
<
source
>
(
config: ASTConfig
)
Parameters
config (ASTConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
Audio Spectrogram Transformer model with an audio classification head on top (a linear layer on top of the pooled
output) e.g. for datasets like AudioSet, Speech Commands v2.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
input_values: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, max_length, num_mel_bins)) —
Float values of mel features extracted from the raw audio waveform. Raw audio waveform can be obtained by
loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via
the soundfile library (pip install soundfile). To prepare the array into input_features, the
AutoFeatureExtractor should be used for extracting the mel features, padding and conversion into a
tensor of type torch.FloatTensor. See ASTFeatureExtractor.__call__() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the audio classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ASTConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ASTForAudioClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoFeatureExtractor, ASTForAudioClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")
model = ASTForAudioClassification.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.argmax(logits, dim=-1).item()
predicted_label = model.config.id2label[predicted_class_ids]
predicted_label
'Speech'
# compute loss - target_label is e.g. "down"
target_label = model.config.id2label[0]
inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
loss = model(**inputs).loss
round(loss.item(), 2)
0.17
LongT5
Overview
The LongT5 model was proposed in LongT5: Efficient Text-To-Text Transformer for Long Sequences
by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung and Yinfei Yang. It’s an
encoder-decoder transformer pre-trained in a text-to-text denoising generative setting. The LongT5 model is an extension of the
T5 model, and it enables using one of two different efficient attention mechanisms - (1) Local attention, or (2)
Transient-Global attention.
The abstract from the paper is the following:
Recent work has shown that either (1) increasing the input length or (2) increasing model size can improve the
performance of Transformer-based neural models. In this paper, we present a new model, called LongT5, with which we
explore the effects of scaling both the input length and model size at the same time. Specifically, we integrated
attention ideas from long-input transformers (ETC), and adopted pre-training strategies from summarization pre-training
(PEGASUS) into the scalable T5 architecture. The result is a new attention mechanism we call Transient Global
(TGlobal), which mimics ETC’s local/global attention mechanism, but without requiring additional side-inputs. We are
able to achieve state-of-the-art results on several summarization tasks and outperform the original T5 models on
question answering tasks.
Tips:
LongT5ForConditionalGeneration is an extension of T5ForConditionalGeneration exchanging the traditional
encoder self-attention layer with efficient either local attention or transient-global (tglobal) attention.
Unlike the T5 model, LongT5 does not use a task prefix. Furthermore, it uses a different pre-training objective
inspired by the pre-training of PegasusForConditionalGeneration.
The LongT5 model is designed to work efficiently and very well on long-range sequence-to-sequence tasks where the
input sequence exceeds the commonly used 512 tokens. It is capable of handling input sequences of up to 16,384 tokens.
For Local Attention, the sparse sliding-window local attention operation allows a given token to attend only to the r
tokens to its left and right (with r=127 by default). Local Attention does not introduce any new parameters
to the model. The complexity of the mechanism is linear in the input sequence length l: O(l*r).
Transient Global Attention is an extension of Local Attention that additionally lets each input token
interact with all other tokens in the layer. This is achieved by splitting the input sequence into blocks of a fixed
length k (with k=16 by default). A global token for each block is then obtained by summing and normalizing the embeddings of every token
in the block. Thanks to this, each token can attend both to nearby tokens, as in Local Attention, and
to every global token, as in standard global attention ("transient" refers to the fact that the global tokens
are constructed dynamically within each attention operation). As a consequence, TGlobal attention introduces
a few new parameters: global relative position biases and a layer normalization for the global tokens' embeddings.
The complexity of this mechanism is O(l(r + l/k)). The attention variant is selected through the model configuration, as shown in the sketch below.
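As a quick illustration of how these two mechanisms are selected, the minimal sketch below builds two configurations that differ only in encoder_attention_type. The keyword arguments come from LongT5Config (documented further down this page); the snippet itself is illustrative and not taken from the paper.
Copied
from transformers import LongT5Config, LongT5ForConditionalGeneration

# Local attention only: each token attends to local_radius tokens on each side
local_config = LongT5Config(encoder_attention_type="local", local_radius=127)

# Transient-Global attention: local window plus one dynamically built global token
# per block of global_block_size tokens
tglobal_config = LongT5Config(
    encoder_attention_type="transient-global", local_radius=127, global_block_size=16
)

# Building a model directly from a config yields randomly initialized weights
model = LongT5ForConditionalGeneration(tglobal_config)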
An example showing how to evaluate a fine-tuned LongT5 model on the pubmed dataset is below.
Copied
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer, LongT5ForConditionalGeneration
dataset = load_dataset("scientific_papers", "pubmed", split="validation")
model = (
... LongT5ForConditionalGeneration.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps")
... .to("cuda")
... .half()
... )
tokenizer = AutoTokenizer.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps")
def generate_answers(batch):
... inputs_dict = tokenizer(
... batch["article"], max_length=16384, padding="max_length", truncation=True, return_tensors="pt"
... )
... input_ids = inputs_dict.input_ids.to("cuda")
... attention_mask = inputs_dict.attention_mask.to("cuda")
... output_ids = model.generate(input_ids, attention_mask=attention_mask, max_length=512, num_beams=2)
... batch["predicted_abstract"] = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
... return batch
result = dataset.map(generate_answers, batched=True, batch_size=2)
rouge = evaluate.load("rouge")
rouge.compute(predictions=result["predicted_abstract"], references=result["abstract"])
This model was contributed by stancld.
The original code can be found here.
Documentation resources
Translation task guide
Summarization task guide
LongT5Config
class transformers.LongT5Config
<
source
>
(
vocab_size = 32128
d_model = 512
d_kv = 64
d_ff = 2048
num_layers = 6
num_decoder_layers = None
num_heads = 8
local_radius = 127
global_block_size = 16
relative_attention_num_buckets = 32
relative_attention_max_distance = 128
dropout_rate = 0.1
layer_norm_epsilon = 1e-06
initializer_factor = 1.0
feed_forward_proj = 'relu'
is_encoder_decoder = True
encoder_attention_type = 'local'
use_cache = True
pad_token_id = 0
eos_token_id = 1
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 32128) —
Vocabulary size of the LongT5 model. Defines the number of different tokens that can be represented by the
input_ids passed when calling LongT5Model.
d_model (int, optional, defaults to 512) —
Size of the encoder layers and the pooler layer.
d_kv (int, optional, defaults to 64) —
Size of the key, query, value projections per attention head. d_kv has to be equal to d_model // num_heads.
d_ff (int, optional, defaults to 2048) —
Size of the intermediate feed forward layer in each LongT5Block.
num_layers (int, optional, defaults to 6) —
Number of hidden layers in the Transformer encoder.
num_decoder_layers (int, optional) —
Number of hidden layers in the Transformer decoder. Will use the same value as num_layers if not set.
num_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer encoder.
local_radius (int, optional, defaults to 127) —
Number of tokens to the left/right for each token to locally self-attend in a local attention mechanism.
global_block_size (int, optional, defaults to 16) —
Length of the blocks an input sequence is divided into for a global token representation. Used only for
encoder_attention_type = "transient-global".
relative_attention_num_buckets (int, optional, defaults to 32) —
The number of buckets to use for each attention layer.
relative_attention_max_distance (int, optional, defaults to 128) —
The maximum distance of the longer sequences for the bucket separation.
dropout_rate (float, optional, defaults to 0.1) —
The ratio for all dropout layers.
layer_norm_epsilon (float, optional, defaults to 1e-6) —
The epsilon used by the layer normalization layers.
initializer_factor (float, optional, defaults to 1) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
feed_forward_proj (string, optional, defaults to "relu") —
Type of feed forward layer to be used. Should be one of "relu" or "gated-gelu". LongT5v1.1 uses the
"gated-gelu" feed forward projection. Original LongT5 implementation uses "gated-gelu".
encoder_attention_type (string, optional, defaults to "local") —
Type of encoder attention to be used. Should be one of "local" or "transient-global", which are
supported by LongT5 implementation.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a LongT5Model or a FlaxLongT5Model. It is
used to instantiate a LongT5 model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the LongT5
google/long-t5-local-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
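As with other configuration classes in the library, the configuration can be instantiated on its own and then used to build a randomly initialized model. The short sketch below illustrates that pattern; it is an illustration rather than part of the original reference.
Copied
from transformers import LongT5Config, LongT5Model

# Initializing a LongT5 configuration with the default (google/long-t5-local-base style) values
configuration = LongT5Config()

# Initializing a randomly weighted model from that configuration
model = LongT5Model(configuration)

# Accessing the model configuration
configuration = model.config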
LongT5Model
class transformers.LongT5Model
<
source
>
(
config: LongT5Config
)
Parameters
config (LongT5Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare LONGT5 Model transformer outputting raw hidden-states without any specific head on top.
The LongT5 model was proposed in LongT5: Efficient Text-To-Text Transformer for Long
Sequences by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo
Ni, Yun-Hsuan Sung and Yinfei Yang. It’s an encoder-decoder transformer pre-trained in a text-to-text denoising
generative setting. LongT5 is an extension of the T5 model that enables using one of two efficient
attention mechanisms: (1) Local attention or (2) Transient-Global attention.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
decoder_head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
decoder_inputs_embeds: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. LongT5 is a model with relative position embeddings so
you should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for detail.
What are input IDs?
To know more on how to prepare input_ids for pretraining, take a look at LONGT5
Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
LONGT5 uses the pad_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
To know more on how to prepare decoder_input_ids for pretraining take a look at LONGT5
Training.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
[0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LongT5Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The LongT5Model forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, LongT5Model
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-local-base")
model = LongT5Model.from_pretrained("google/long-t5-local-base")
# Let's try a very long encoder input.
input_ids = tokenizer(
... 100 * "Studies have been shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
LongT5ForConditionalGeneration
class transformers.LongT5ForConditionalGeneration
<
source
>
(
config: LongT5Config
)
Parameters
config (LongT5Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LONGT5 Model with a language modeling head on top.
The LongT5 model was proposed in LongT5: Efficient Text-To-Text Transformer for Long
Sequences by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo
Ni, Yun-Hsuan Sung and Yinfei Yang. It’s an encoder-decoder transformer pre-trained in a text-to-text denoising
generative setting. LongT5 is an extension of the T5 model that enables using one of two efficient
attention mechanisms: (1) Local attention or (2) Transient-Global attention.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
decoder_head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. LongT5 is a model with relative position embeddings so
you should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for detail.
What are input IDs?
To know more on how to prepare input_ids for pretraining, take a look at LONGT5
Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
LONGT5 uses the pad_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
To know more on how to prepare decoder_input_ids for pretraining take a look at LONGT5
Training.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
[0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the sequence-to-sequence language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked); the loss is only computed for
labels in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LongT5Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The LongT5ForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from transformers import AutoTokenizer, LongT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps")
model = LongT5ForConditionalGeneration.from_pretrained(
... "Stancld/longt5-tglobal-large-16384-pubmed-3k_steps"
... )
# Let's try a very long input.
inputs = tokenizer(100 * "studies have shown that owning a dog is good for you ", return_tensors="pt")
input_ids = inputs.input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
abstractthe aim of this article is to provide an overview of the literature on the role of dog
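Since the forward pass accepts labels, the language modeling loss can also be computed directly. The following short sketch illustrates this; the input and target strings are made up for the example.
Copied
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-local-base")
model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-local-base")

# LongT5 does not use a task prefix, so the document is tokenized as-is
inputs = tokenizer("Studies have shown that owning a dog is good for you.", return_tensors="pt")
labels = tokenizer("Owning a dog is good for you.", return_tensors="pt").input_ids

# Passing labels makes the model return a Seq2SeqLMOutput with a .loss attribute
outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits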
LongT5EncoderModel
class transformers.LongT5EncoderModel
<
source
>
(
config: LongT5Config
)
Parameters
config (LongT5Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare LONGT5 Model transformer outputting encoder’s raw hidden-states without any specific head on top.
The LongT5 model was proposed in LongT5: Efficient Text-To-Text Transformer for Long
Sequences by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo
Ni, Yun-Hsuan Sung and Yinfei Yang. It’s an encoder-decoder transformer pre-trained in a text-to-text denoising
generative setting. LongT5 is an extension of the T5 model that enables using one of two efficient
attention mechanisms: (1) Local attention or (2) Transient-Global attention.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. LongT5 is a model with relative position embeddings so
you should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for detail.
To know more on how to prepare input_ids for pretraining, take a look at LONGT5
Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LongT5Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LongT5EncoderModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, LongT5EncoderModel
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-local-base")
model = LongT5EncoderModel.from_pretrained("google/long-t5-local-base")
input_ids = tokenizer(
... 100 * "Studies have been shown that owning a dog is good for you ", return_tensors="pt"
... ).input_ids # Batch size 1
outputs = model(input_ids=input_ids)
last_hidden_states = outputs.last_hidden_state
FlaxLongT5Model
class transformers.FlaxLongT5Model
<
source
>
(
config: LongT5Config
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
__call__
<
source
>
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: Array = None
decoder_attention_mask: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. LongT5 is a model with relative position embeddings so
you should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for detail.
What are input IDs?
To know more on how to prepare input_ids for pretraining, take a look at LONGT5
Training.
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
LONGT5 uses the pad_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
To know more on how to prepare decoder_input_ids for pretraining take a look at LONGT5
Training.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
encoder_outputs (tuple(tuple(jnp.ndarray), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(jnp.ndarray)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LongT5Config) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxLongT5PreTrainedModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, FlaxLongT5Model
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = FlaxLongT5Model.from_pretrained("google/long-t5-local-base")
input_ids = tokenizer(
... "Studies have been shown that owning a dog is good for you", return_tensors="np"
... ).input_ids
decoder_input_ids = tokenizer("Studies show that", return_tensors="np").input_ids
# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
encode
<
source
>
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. LongT5 is a model with relative position embeddings so
you should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for detail.
To know more on how to prepare input_ids for pretraining, take a look at LONGT5
Training.
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.longt5.configuration_longt5.LongT5Config'>) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example:
Copied
from transformers import AutoTokenizer, FlaxLongT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = FlaxLongT5ForConditionalGeneration.from_pretrained("google/long-t5-local-base")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, return_tensors="np")
encoder_outputs = model.encode(**inputs)
decode
<
source
>
(
decoder_input_ids
encoder_outputs
encoder_attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
past_key_values: dict = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For training, decoder_input_ids should be provided.
encoder_outputs (tuple(tuple(jnp.ndarray)) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change the padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.longt5.configuration_longt5.LongT5Config'>) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
Example:
Copied
from transformers import AutoTokenizer, FlaxLongT5ForConditionalGeneration
import jax.numpy as jnp
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = FlaxLongT5ForConditionalGeneration.from_pretrained("google/long-t5-local-base")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, return_tensors="np")
encoder_outputs = model.encode(**inputs)
decoder_start_token_id = model.config.decoder_start_token_id
decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
outputs = model.decode(decoder_input_ids, encoder_outputs)
logits = outputs.logits
FlaxLongT5ForConditionalGeneration
class transformers.FlaxLongT5ForConditionalGeneration
<
source
>
(
config: LongT5Config
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
__call__
<
source
>
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: Array = None
decoder_attention_mask: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. LongT5 is a model with relative position embeddings so
you should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for detail.
What are input IDs?
To know more on how to prepare input_ids for pretraining, take a look at LONGT5
Training.
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
LONGT5 uses the pad_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
To know more on how to prepare decoder_input_ids for pretraining take a look at LONGT5
Training.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
encoder_outputs (tuple(tuple(jnp.ndarray), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(jnp.ndarray)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LongT5Config) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxLongT5PreTrainedModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxLongT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = FlaxLongT5ForConditionalGeneration.from_pretrained("google/long-t5-local-base")
ARTICLE_TO_SUMMARIZE = "summarize: My friends are cool but they eat too many carbs."
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], return_tensors="np")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"]).sequences
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True, clean_up_tokenization_spaces=False))
encode
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. LongT5 is a model with relative position embeddings so
you should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for detail.
To know more about how to prepare input_ids for pretraining, take a look at LONGT5
Training.
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LongT5Config) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example:
from transformers import AutoTokenizer, FlaxLongT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = FlaxLongT5ForConditionalGeneration.from_pretrained("google/long-t5-local-base")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, return_tensors="np")
encoder_outputs = model.encode(**inputs)
decode
(
decoder_input_ids
encoder_outputs
encoder_attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
past_key_values: dict = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For training, decoder_input_ids should be provided.
encoder_outputs (tuple(tuple(jnp.ndarray))) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LongT5Config) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value
states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting.
Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
Example:
from transformers import AutoTokenizer, FlaxLongT5ForConditionalGeneration
import jax.numpy as jnp
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = FlaxLongT5ForConditionalGeneration.from_pretrained("google/long-t5-local-base")
text = "summarize: My friends are cool but they eat too many carbs."
inputs = tokenizer(text, return_tensors="np")
encoder_outputs = model.encode(**inputs)
decoder_start_token_id = model.config.decoder_start_token_id
decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
outputs = model.decode(decoder_input_ids, encoder_outputs)
logits = outputs.logits
M-CTC-T
This model is in maintenance mode only, so we won’t accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0.
Overview
The M-CTC-T model was proposed in Pseudo-Labeling For Massively Multilingual Speech Recognition by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. The model is a 1B-param transformer encoder, with a CTC head over 8065 character labels and a language identification head over 60 language ID labels. It is trained on Common Voice (version 6.1, December 2020 release) and VoxPopuli. After training on Common Voice and VoxPopuli, the model is trained on Common Voice only. The labels are unnormalized character-level transcripts (punctuation and capitalization are not removed). The model takes as input Mel filterbank features from a 16 kHz audio signal.
The abstract from the paper is the following:
Semi-supervised learning through pseudo-labeling has become a staple of state-of-the-art monolingual
speech recognition systems. In this work, we extend pseudo-labeling to massively multilingual speech
recognition with 60 languages. We propose a simple pseudo-labeling recipe that works well even
with low-resource languages: train a supervised multilingual model, fine-tune it with semi-supervised
learning on a target language, generate pseudo-labels for that language, and train a final model using
pseudo-labels for all languages, either from scratch or by fine-tuning. Experiments on the labeled
Common Voice and unlabeled VoxPopuli datasets show that our recipe can yield a model with better
performance for many languages that also transfers well to LibriSpeech.
This model was contributed by cwkeam. The original code can be found here.
Documentation resources
Automatic speech recognition task guide
Tips:
The PyTorch version of this model is only available in torch 1.9 and higher.
MCTCTConfig
class transformers.MCTCTConfig
(
vocab_size = 8065
hidden_size = 1536
num_hidden_layers = 36
intermediate_size = 6144
num_attention_heads = 4
attention_head_dim = 384
max_position_embeddings = 920
layer_norm_eps = 1e-05
layerdrop = 0.3
hidden_act = 'relu'
initializer_range = 0.02
hidden_dropout_prob = 0.3
attention_probs_dropout_prob = 0.3
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
conv_glu_dim = 1
conv_dropout = 0.3
num_conv_layers = 1
conv_kernel = (7,)
conv_stride = (3,)
input_feat_per_channel = 80
input_channels = 1
conv_channels = None
ctc_loss_reduction = 'sum'
ctc_zero_infinity = False
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 8065) —
Vocabulary size of the M-CTC-T model. Defines the number of different tokens that can be represented by the
input_ids passed when calling MCTCTModel.
hidden_size (int, optional, defaults to 1536) —
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 36) —
Number of hidden layers in the Transformer encoder.
intermediate_size (int, optional, defaults to 6144) —
Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_attention_heads (int, optional, defaults to 4) —
Number of attention heads for each attention layer in the Transformer encoder.
attention_head_dim (int, optional, defaults to 384) —
Dimensions of each attention head for each attention layer in the Transformer encoder.
max_position_embeddings (int, optional, defaults to 920) —
The maximum sequence length that this model might ever be used with (after log-mel spectrogram extraction).
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
layerdrop (float, optional, defaults to 0.3) —
The probability of dropping an encoder layer during training. The default 0.3 value is used in the original
implementation.
hidden_act (str or function, optional, defaults to "relu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
hidden_dropout_prob (float, optional, defaults to 0.3) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.3) —
The dropout ratio for the attention probabilities.
pad_token_id (int, optional, defaults to 1) —
The tokenizer index of the pad token.
bos_token_id (int, optional, defaults to 0) —
The tokenizer index of the bos token.
eos_token_id (int, optional, defaults to 2) —
The tokenizer index of the eos token.
conv_glu_dim (int, optional, defaults to 1) —
The dimension of the output of the Conv1dSubsampler layer in which GLU is applied on. Though the original
Flashlight code uses the value of 2, here it’s adapted to 1 due to transposition differences.
conv_dropout (float, optional, defaults to 0.3) —
The probability of randomly dropping the Conv1dSubsampler layer during training.
num_conv_layers (int, optional, defaults to 1) —
Number of convolution layers before applying transformer encoder layers.
conv_kernel (List[int], optional, defaults to [7]) —
The kernel size of the 1D convolution applied before transformer layers. len(conv_kernel) must be equal
to num_conv_layers.
conv_stride (List[int], optional, defaults to [3]) —
The stride length of the 1D convolution applied before transformer layers. len(conv_stride) must be equal
to num_conv_layers.
input_feat_per_channel (int, optional, defaults to 80) —
Feature dimensions of the channels of the input to the Conv1D layer.
input_channels (int, optional, defaults to 1) —
Number of input channels of the input to the Conv1D layer.
conv_channels (List[int], optional, defaults to None) —
Channel sizes of intermediate Conv1D layers.
ctc_loss_reduction (str, optional, defaults to "sum") —
Specifies the reduction to apply to the output of torch.nn.CTCLoss. Only relevant when training an
instance of MCTCTForCTC.
ctc_zero_infinity (bool, optional, defaults to False) —
Whether to zero infinite losses and the associated gradients of torch.nn.CTCLoss. Infinite losses mainly
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of MCTCTForCTC.
This is the configuration class to store the configuration of an MCTCTModel. It is used to instantiate an
M-CTC-T model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the M-CTC-T
speechbrain/m-ctc-t-large architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import MCTCTConfig, MCTCTModel
# Initializing a M-CTC-T mctct-large style configuration
configuration = MCTCTConfig()
# Initializing a model (with random weights) from the mctct-large style configuration
model = MCTCTModel(configuration)
# Accessing the model configuration
configuration = model.config
MCTCTFeatureExtractor
class transformers.MCTCTFeatureExtractor
(
feature_size = 80
sampling_rate = 16000
padding_value = 0.0
hop_length = 10
win_length = 25
win_function = 'hamming_window'
frame_signal_scale = 32768.0
preemphasis_coeff = 0.97
mel_floor = 1.0
normalize_means = True
normalize_vars = True
return_attention_mask = False
**kwargs
)
Parameters
feature_size (int, defaults to 80) —
The feature dimension of the extracted features. This is the number of mel-frequency bins.
sampling_rate (int, defaults to 16000) —
The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
padding_value (float, defaults to 0.0) —
The value that is used to fill the padding values.
hop_length (int, defaults to 10) —
Number of audio samples between windows. Otherwise referred to as “shift” in many papers.
win_length (int, defaults to 25) —
Number of milliseconds per window.
win_function (str, defaults to "hamming_window") —
Name for the window function used for windowing, must be accessible via torch.{win_function}
frame_signal_scale (float, defaults to 32768.0) —
Constant multiplied in creating the frames before applying DFT.
preemphasis_coeff (float, defaults to 0.97) —
Constant multiplied in applying Pre-emphasis before DFT.
mel_floor (float, defaults to 1.0) —
Minimum value of mel frequency banks.
normalize_means (bool, optional, defaults to True) —
Whether or not to zero-mean normalize the extracted features.
normalize_vars (bool, optional, defaults to True) —
Whether or not to unit-variance normalize the extracted features.
Constructs an M-CTC-T feature extractor.
This feature extractor inherits from SequenceFeatureExtractor which contains
most of the main methods. Users should refer to this superclass for more information regarding those methods. This
code has been adapted from Flashlight’s C++ code. For more information about the implementation, one can refer to
this notebook
that takes the user step-by-step in the implementation.
__call__
(
raw_speech: typing.Union[numpy.ndarray, typing.List[float], typing.List[numpy.ndarray], typing.List[typing.List[float]]]
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
max_length: typing.Optional[int] = None
truncation: bool = False
pad_to_multiple_of: typing.Optional[int] = None
return_attention_mask: typing.Optional[bool] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
sampling_rate: typing.Optional[int] = None
**kwargs
)
Parameters
raw_speech (torch.Tensor, np.ndarray, List[float], List[torch.Tensor], List[np.ndarray], List[List[float]]) —
The sequence or batch of sequences to be padded. Each sequence can be a tensor, a numpy array, a list
of float values, a list of tensors, a list of numpy arrays or a list of list of float values. Must be
mono channel audio, not stereo, i.e. single float per timestep.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Select a strategy to pad the returned sequences (according to the model’s padding side and padding
index) among:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
max_length (int, optional) —
Maximum length of the returned list and optionally padding length (see above).
truncation (bool) —
Activates truncation to cut input sequences longer than max_length to max_length.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
>= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
return_attention_mask (bool, optional) —
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific feature_extractor’s default.
What are attention masks?
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
sampling_rate (int, optional) —
The sampling rate at which the raw_speech input was sampled. It is strongly recommended to pass
sampling_rate at the forward call to prevent silent errors.
padding_value (float, defaults to 0.0) —
The value that is used to fill the padding values.
Main method to featurize and prepare one or several sequence(s) for the model. It returns the
log-mel spectrogram of the input audio, as implemented in the original Flashlight MFSC feature extraction code.
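As a minimal, illustrative sketch (the silent waveform below is only a stand-in for real 16 kHz audio), the feature extractor can be used on its own to turn a raw waveform into log-mel features:
import numpy as np
from transformers import MCTCTFeatureExtractor
feature_extractor = MCTCTFeatureExtractor.from_pretrained("speechbrain/m-ctc-t-large")
# one second of silence at 16 kHz stands in for a real recording
waveform = np.zeros(16000, dtype=np.float32)
features = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
# log-mel features of shape (batch_size, num_frames, feature_size); the frame count depends on hop/win settings
print(features.input_features.shape)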
MCTCTProcessor
class transformers.MCTCTProcessor
(
feature_extractor
tokenizer
)
Parameters
feature_extractor (MCTCTFeatureExtractor) —
An instance of MCTCTFeatureExtractor. The feature extractor is a required input.
tokenizer (AutoTokenizer) —
An instance of AutoTokenizer. The tokenizer is a required input.
Constructs an MCTCT processor which wraps an MCTCT feature extractor and an MCTCT tokenizer into a single processor.
MCTCTProcessor offers all the functionalities of MCTCTFeatureExtractor and AutoTokenizer. See the
call() and decode() for more information.
__call__
(
*args
**kwargs
)
When used in normal mode, this method forwards all its arguments to MCTCTFeatureExtractor's
call() and returns its output. If used in the context manager
as_target_processor(), this method forwards all its arguments to AutoTokenizer's
__call__(). Please refer to the docstring of the above two methods for more information.
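As a minimal sketch of both modes, mirroring the calls used in the model examples below (the dataset only supplies a sample waveform and transcript), audio goes through the feature extractor while text goes through the tokenizer:
from datasets import load_dataset
from transformers import MCTCTProcessor
processor = MCTCTProcessor.from_pretrained("speechbrain/m-ctc-t-large")
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
# audio arrays are routed to MCTCTFeatureExtractor
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=16000, return_tensors="pt")
# text is routed to the wrapped tokenizer
labels = processor(text=dataset[0]["text"], return_tensors="pt").input_ids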
from_pretrained
(
pretrained_model_name_or_path: typing.Union[str, os.PathLike]
cache_dir: typing.Union[str, os.PathLike, NoneType] = None
force_download: bool = False
local_files_only: bool = False
token: typing.Union[bool, str, NoneType] = None
revision: str = 'main'
**kwargs
)
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
This can be either:
a string, the model id of a pretrained feature_extractor hosted inside a model repo on
huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or
namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
a path to a directory containing a feature extractor file saved using the
save_pretrained() method, e.g., ./my_model_directory/.
a path or url to a saved feature extractor JSON file, e.g.,
./my_model_directory/preprocessor_config.json.
**kwargs —
Additional keyword arguments passed along to both
from_pretrained() and
~tokenization_utils_base.PreTrainedTokenizer.from_pretrained.
Instantiate a processor associated with a pretrained model.
This class method is simply calling the feature extractor's
from_pretrained(), the image processor's
ImageProcessingMixin.from_pretrained() and the tokenizer's
~tokenization_utils_base.PreTrainedTokenizer.from_pretrained methods. Please refer to the docstrings of the
methods above for more information.
save_pretrained
(
save_directory
push_to_hub: bool = False
**kwargs
)
Parameters
save_directory (str or os.PathLike) —
Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will
be created if it does not exist).
push_to_hub (bool, optional, defaults to False) —
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with repo_id (will default to the name of save_directory in your
namespace).
kwargs (Dict[str, Any], optional) —
Additional key word arguments passed along to the push_to_hub() method.
Saves the attributes of this processor (feature extractor, tokenizer…) in the specified directory so that it
can be reloaded using the from_pretrained() method.
This class method is simply calling the feature extractor's save_pretrained() and the tokenizer's
save_pretrained(). Please refer to the docstrings of the
methods above for more information.
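A short save/load round trip, shown here as a sketch with an arbitrary local directory name, illustrates how the two methods fit together:
from transformers import MCTCTProcessor
processor = MCTCTProcessor.from_pretrained("speechbrain/m-ctc-t-large")
# writes the feature extractor config and tokenizer files to a local directory
processor.save_pretrained("./mctct-processor")
# restores the same feature extractor and tokenizer from that directory
processor = MCTCTProcessor.from_pretrained("./mctct-processor")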
batch_decode
(
*args
**kwargs
)
This method forwards all its arguments to AutoTokenizer’s batch_decode(). Please refer
to the docstring of this method for more information.
decode
(
*args
**kwargs
)
This method forwards all its arguments to AutoTokenizer’s decode(). Please refer to the
docstring of this method for more information.
MCTCTModel
class transformers.MCTCTModel
(
config
)
Parameters
config (MCTCTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare M-CTC-T Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_features: Tensor
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_features (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using Wav2Vec2CTCTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MCTCTConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MCTCTModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, MCTCTModel
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("speechbrain/m-ctc-t-large")
model = MCTCTModel.from_pretrained("speechbrain/m-ctc-t-large")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
...     outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 195, 1536]
MCTCTForCTC
class transformers.MCTCTForCTC
(
config
)
Parameters
config (MCTCTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MCTCT Model with a language modeling head on top for Connectionist Temporal Classification (CTC).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_features: Tensor
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.LongTensor] = None
)
→
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
Parameters
input_features (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using Wav2Vec2CTCTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, target_length), optional) —
Labels for connectionist temporal classification. Note that target_length has to be smaller or equal to
the sequence length of the output logits. Indices are selected in [-100, 0, ..., config.vocab_size - 1].
All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MCTCTConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MCTCTForCTC forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, MCTCTForCTC
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("speechbrain/m-ctc-t-large")
model = MCTCTForCTC.from_pretrained("speechbrain/m-ctc-t-large")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe speech
transcription = processor.batch_decode(predicted_ids)
transcription[0]
"Mr. Quilter is the apostle of the middle classes, and we're glad to welcome his gospel."
inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids
# compute loss
loss = model(**inputs).loss
round(loss.item(), 2)
1885.65
ALIGN
Overview
The ALIGN model was proposed in Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. ALIGN is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. ALIGN features a dual-encoder architecture with EfficientNet as its vision encoder and BERT as its text encoder, and learns to align visual and text representations with contrastive learning. Unlike previous work, ALIGN leverages a massive noisy dataset and shows that the scale of the corpus can be used to achieve SOTA representations with a simple recipe.
The abstract from the paper is the following:
Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations enables zero-shot image classification and also set new state-of-the-art results on Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries.
Usage
ALIGN uses EfficientNet to get visual features and BERT to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. The dot product between the projected image and text features is then used as a similarity score.
AlignProcessor wraps EfficientNetImageProcessor and BertTokenizer into a single instance to both encode the text and preprocess the images. The following example shows how to get the image-text similarity scores using AlignProcessor and AlignModel.
import requests
import torch
from PIL import Image
from transformers import AlignProcessor, AlignModel
processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
model = AlignModel.from_pretrained("kakaobrain/align-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["an image of a cat", "an image of a dog"]
inputs = processor(text=candidate_labels, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# this is the image-text similarity score
logits_per_image = outputs.logits_per_image
# we can take the softmax to get the label probabilities
probs = logits_per_image.softmax(dim=1)
print(probs)
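The same checkpoint can also be driven through the zero-shot image classification pipeline; the following is a minimal sketch assuming pipeline support for this checkpoint, with the pipeline's default hypothesis template applied to the candidate labels:
from transformers import pipeline
classifier = pipeline(task="zero-shot-image-classification", model="kakaobrain/align-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
# returns a list of {"score": ..., "label": ...} dicts, highest score first
print(classifier(url, candidate_labels=["cat", "dog"]))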
This model was contributed by Alara Dirik.
The original code is not released; this implementation is based on the Kakao Brain implementation of the original paper.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ALIGN.
A blog post on ALIGN and the COYO-700M dataset.
A zero-shot image classification demo.
Model card of kakaobrain/align-base model.
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource.
AlignConfig
class transformers.AlignConfig
(
text_config = None
vision_config = None
projection_dim = 640
temperature_init_value = 1.0
initializer_range = 0.02
**kwargs
)
Parameters
text_config (dict, optional) —
Dictionary of configuration options used to initialize AlignTextConfig.
vision_config (dict, optional) —
Dictionary of configuration options used to initialize AlignVisionConfig.
projection_dim (int, optional, defaults to 640) —
Dimensionality of the text and vision projection layers.
temperature_init_value (float, optional, defaults to 1.0) —
The initial value of the temperature parameter. The default is used as per the original ALIGN implementation.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
kwargs (optional) —
Dictionary of keyword arguments.
AlignConfig is the configuration class to store the configuration of an AlignModel. It is used to
instantiate an ALIGN model according to the specified arguments, defining the text model and vision model configs.
Instantiating a configuration with the defaults will yield a similar configuration to that of the ALIGN
kakaobrain/align-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import AlignConfig, AlignModel
# Initializing a AlignConfig with kakaobrain/align-base style configuration
configuration = AlignConfig()
# Initializing a AlignModel (with random weights) from the kakaobrain/align-base style configuration
model = AlignModel(configuration)
# Accessing the model configuration
configuration = model.config
# We can also initialize a AlignConfig from a AlignTextConfig and a AlignVisionConfig
from transformers import AlignTextConfig, AlignVisionConfig
# Initializing ALIGN Text and Vision configurations
config_text = AlignTextConfig()
config_vision = AlignVisionConfig()
config = AlignConfig.from_text_vision_configs(config_text, config_vision)
from_text_vision_configs
(
text_config: AlignTextConfig
vision_config: AlignVisionConfig
**kwargs
)
→
AlignConfig
Returns
AlignConfig
An instance of a configuration object
Instantiate an AlignConfig (or a derived class) from an ALIGN text model configuration and an ALIGN vision model
configuration.
AlignTextConfig
class transformers.AlignTextConfig
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 0
position_embedding_type = 'absolute'
use_cache = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the Align Text model. Defines the number of different tokens that can be represented by
the input_ids passed when calling AlignTextModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling AlignTextModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
pad_token_id (int, optional, defaults to 0) —
Padding token id.
This is the configuration class to store the configuration of an AlignTextModel. It is used to instantiate an
ALIGN text encoder according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the text encoder of the ALIGN
kakaobrain/align-base architecture. The default values here are
copied from BERT.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import AlignTextConfig, AlignTextModel
# Initializing a AlignTextConfig with kakaobrain/align-base style configuration
configuration = AlignTextConfig()
# Initializing a AlignTextModel (with random weights) from the kakaobrain/align-base style configuration
model = AlignTextModel(configuration)
# Accessing the model configuration
configuration = model.config
AlignVisionConfig
class transformers.AlignVisionConfig
(
num_channels: int = 3
image_size: int = 600
width_coefficient: float = 2.0
depth_coefficient: float = 3.1
depth_divisor: int = 8
kernel_sizes: typing.List[int] = [3, 3, 5, 3, 5, 5, 3]
in_channels: typing.List[int] = [32, 16, 24, 40, 80, 112, 192]
out_channels: typing.List[int] = [16, 24, 40, 80, 112, 192, 320]
depthwise_padding: typing.List[int] = []
strides: typing.List[int] = [1, 2, 2, 2, 1, 2, 1]
num_block_repeats: typing.List[int] = [1, 2, 2, 3, 3, 4, 1]
expand_ratios: typing.List[int] = [1, 6, 6, 6, 6, 6, 6]
squeeze_expansion_ratio: float = 0.25
hidden_act: str = 'swish'
hidden_dim: int = 2560
pooling_type: str = 'mean'
initializer_range: float = 0.02
batch_norm_eps: float = 0.001
batch_norm_momentum: float = 0.99
drop_connect_rate: float = 0.2
**kwargs
)
Parameters
num_channels (int, optional, defaults to 3) —
The number of input channels.
image_size (int, optional, defaults to 600) —
The input image size.
width_coefficient (float, optional, defaults to 2.0) —
Scaling coefficient for network width at each stage.
depth_coefficient (float, optional, defaults to 3.1) —
Scaling coefficient for network depth at each stage.
depth_divisor (int, optional, defaults to 8) —
A unit of network width.
kernel_sizes (List[int], optional, defaults to [3, 3, 5, 3, 5, 5, 3]) —
List of kernel sizes to be used in each block.
in_channels (List[int], optional, defaults to [32, 16, 24, 40, 80, 112, 192]) —
List of input channel sizes to be used in each block for convolutional layers.
out_channels (List[int], optional, defaults to [16, 24, 40, 80, 112, 192, 320]) —
List of output channel sizes to be used in each block for convolutional layers.
depthwise_padding (List[int], optional, defaults to []) —
List of block indices with square padding.
strides (List[int], optional, defaults to [1, 2, 2, 2, 1, 2, 1]) —
List of stride sizes to be used in each block for convolutional layers.
num_block_repeats (List[int], optional, defaults to [1, 2, 2, 3, 3, 4, 1]) —
List of the number of times each block is to be repeated.
expand_ratios (List[int], optional, defaults to [1, 6, 6, 6, 6, 6, 6]) —
List of scaling coefficients for each block.
squeeze_expansion_ratio (float, optional, defaults to 0.25) —
Squeeze expansion ratio.
hidden_act (str or function, optional, defaults to "swish") —
The non-linear activation function (function or string) in each block. If string, "gelu", "relu",
"selu", "gelu_new", "silu" and "mish" are supported.
hidden_dim (int, optional, defaults to 2560) —
The hidden dimension of the layer before the classification head.
pooling_type (str or function, optional, defaults to "mean") —
Type of final pooling to be applied before the dense classification head. Available options are ["mean",
"max"]
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
batch_norm_eps (float, optional, defaults to 1e-3) —
The epsilon used by the batch normalization layers.
batch_norm_momentum (float, optional, defaults to 0.99) —
The momentum used by the batch normalization layers.
drop_connect_rate (float, optional, defaults to 0.2) —
The drop rate for skip connections.
This is the configuration class to store the configuration of an AlignVisionModel. It is used to instantiate an
ALIGN vision encoder according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the vision encoder of the ALIGN
kakaobrain/align-base architecture. The default values are copied
from EfficientNet (efficientnet-b7)
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import AlignVisionConfig, AlignVisionModel
# Initializing a AlignVisionConfig with kakaobrain/align-base style configuration
configuration = AlignVisionConfig()
# Initializing a AlignVisionModel (with random weights) from the kakaobrain/align-base style configuration
model = AlignVisionModel(configuration)
# Accessing the model configuration
configuration = model.config
AlignProcessor
class transformers.AlignProcessor
(
image_processor
tokenizer
)
Parameters
image_processor (EfficientNetImageProcessor) —
The image processor is a required input.
tokenizer ([BertTokenizer, BertTokenizerFast]) —
The tokenizer is a required input.
Constructs an ALIGN processor which wraps EfficientNetImageProcessor and
BertTokenizer/BertTokenizerFast into a single processor that inherits both the image processor and
tokenizer functionalities. See the __call__() and decode() for more
information.
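As a minimal sketch, a single processor call prepares paired text and image inputs; the exact set of returned keys comes from the wrapped tokenizer and image processor:
import requests
from PIL import Image
from transformers import AlignProcessor
processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt")
# typically input_ids, token_type_ids, attention_mask and pixel_values
print(list(inputs.keys()))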
batch_decode
(
*args
**kwargs
)
This method forwards all its arguments to BertTokenizerFast’s batch_decode(). Please
refer to the docstring of this method for more information.
decode
(
*args
**kwargs
)
This method forwards all its arguments to BertTokenizerFast’s decode(). Please refer to
the docstring of this method for more information.
AlignModel
class transformers.AlignModel
(
config: AlignConfig
)
Parameters
config (AlignConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
return_loss: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional):
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional):
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional):
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional):
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional):
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)):
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See EfficientNetImageProcessor.call() for details.
return_loss (bool, optional):
Whether or not to return the contrastive loss.
output_attentions (bool, optional):
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional):
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional):
Whether or not to return a ModelOutput instead of a plain tuple.
The AlignModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
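As a minimal sketch, the forward pass can be exercised with the same processor outputs as in the Usage section above; the image-text similarity logits are then turned into per-label probabilities:
import requests
import torch
from PIL import Image
from transformers import AlignProcessor, AlignModel
processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
model = AlignModel.from_pretrained("kakaobrain/align-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["an image of a cat", "an image of a dog"], images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# softmax over the text axis gives the probability of each candidate caption for the image
probs = outputs.logits_per_image.softmax(dim=1)
print(probs)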
get_text_features
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
text_features (torch.FloatTensor of shape (batch_size, output_dim))
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
text_features (torch.FloatTensor of shape (batch_size, output_dim))
The text embeddings obtained by
applying the projection layer to the pooled output of AlignTextModel.
The AlignModel get_text_features method. It runs the text encoder and applies the projection layer to its pooled
output, as described above.
Examples:
from transformers import AutoTokenizer, AlignModel
model = AlignModel.from_pretrained("kakaobrain/align-base")
tokenizer = AutoTokenizer.from_pretrained("kakaobrain/align-base")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
text_features = model.get_text_features(**inputs)
get_image_features
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
image_features (torch.FloatTensor of shape (batch_size, output_dim))
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See EfficientNetImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
image_features (torch.FloatTensor of shape (batch_size, output_dim))
The image embeddings obtained by
applying the projection layer to the pooled output of AlignVisionModel.
The AlignModel get_image_features method. It runs the vision encoder and applies the projection layer to its pooled
output, as described above.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, AlignModel
model = AlignModel.from_pretrained("kakaobrain/align-base")
processor = AutoProcessor.from_pretrained("kakaobrain/align-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
image_features = model.get_image_features(**inputs)
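The two snippets above can be combined to score image-text similarity by hand. The short sketch below reuses text_features and image_features from those snippets and assumes both share the same projection dimension (output_dim).
import torch
# reuse text_features and image_features from the two snippets above
text_embeds = text_features / text_features.norm(dim=-1, keepdim=True)
image_embeds = image_features / image_features.norm(dim=-1, keepdim=True)
# cosine similarity between each image and each candidate caption
similarity = image_embeds @ text_embeds.T
best_caption_index = similarity.argmax(dim=-1)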
AlignTextModel
class transformers.AlignTextModel
(
config: AlignTextConfig
add_pooling_layer: bool = True
)
Parameters
config (AlignTextConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The text model from ALIGN without any head or projection on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (AlignTextConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The AlignTextModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, AlignTextModel
model = AlignTextModel.from_pretrained("kakaobrain/align-base")
tokenizer = AutoTokenizer.from_pretrained("kakaobrain/align-base")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output  # pooled ([CLS] token) states
AlignVisionModel
class transformers.AlignVisionModel
(
config: AlignVisionConfig
)
Parameters
config (AlignVisionConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The vision model from ALIGN without any head or projection on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See EfficientNetImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (AlignVisionConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The AlignVisionModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, AlignVisionModel
model = AlignVisionModel.from_pretrained("kakaobrain/align-base")
processor = AutoProcessor.from_pretrained("kakaobrain/align-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output  # spatially pooled features
BORT
This model is in maintenance mode only, so we won’t accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0.
Overview
The BORT model was proposed in Optimal Subarchitecture Extraction for BERT by
Adrian de Wynter and Daniel J. Perry. It is an optimal subset of architectural parameters for the BERT architecture, which the
authors refer to as “Bort”.
The abstract from the paper is the following:
We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by
applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as
“Bort”, is demonstrably smaller, having an effective (that is, not counting the embedding layer) size of 5.5% the
original BERT-large architecture, and 16% of the net size. Bort is also able to be pretrained in 288 GPU hours, which
is 1.2% of the time required to pretrain the highest-performing BERT parametric architectural variant, RoBERTa-large
(Liu et al., 2019), and about 33% of that of the world-record, in GPU hours, required to train BERT-large on the same
hardware. It is also 7.9x faster on a CPU, as well as being better performing than other compressed variants of the
architecture, and some of the non-compressed variants: it obtains performance improvements of between 0.3% and 31%,
absolute, with respect to BERT-large, on multiple public natural language understanding (NLU) benchmarks.
Tips:
BORT’s model architecture is based on BERT, so one can refer to BERT’s documentation page for the
model’s API as well as usage examples.
BORT uses the RoBERTa tokenizer instead of the BERT tokenizer, so one can refer to RoBERTa’s documentation page for the tokenizer’s API as well as usage examples.
BORT requires a specific fine-tuning algorithm, called Agora,
that is sadly not open-sourced yet. It would be very useful for the community if someone implemented the
algorithm to make BORT fine-tuning work.
This model was contributed by stefan-it. The original code can be found here.
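As a quick start, the tips above can be put together into a short loading sketch. It assumes the amazon/bort Hub checkpoint name and the pinned transformers==4.30.0 release mentioned in the maintenance note; adjust both if your setup differs.
# run `pip install -U transformers==4.30.0` first, as noted in the maintenance banner above
import torch
from transformers import AutoModel, AutoTokenizer
checkpoint = "amazon/bort"  # assumed checkpoint name for the released Bort weights
tokenizer = AutoTokenizer.from_pretrained(checkpoint)  # resolves to a RoBERTa-style tokenizer
model = AutoModel.from_pretrained(checkpoint)          # BERT-style encoder with Bort's reduced dimensions
inputs = tokenizer("Hello, Bort!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state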
Convolutional Vision Transformer (CvT)
Overview
The CvT model was proposed in CvT: Introducing Convolutions to Vision Transformers by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan and Lei Zhang. The Convolutional vision Transformer (CvT) improves the Vision Transformer (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs.
The abstract from the paper is the following:
We present in this paper a new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformer (ViT)
in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. This is accomplished through
two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer
block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs)
to the ViT architecture (\ie shift, scale, and distortion invariance) while maintaining the merits of Transformers (\ie dynamic attention,
global context, and better generalization). We validate CvT by conducting extensive experiments, showing that this approach achieves
state-of-the-art performance over other Vision Transformers and ResNets on ImageNet-1k, with fewer parameters and lower FLOPs. In addition,
performance gains are maintained when pretrained on larger datasets (\eg ImageNet-22k) and fine-tuned to downstream tasks. Pre-trained on
ImageNet-22k, our CvT-W24 obtains a top-1 accuracy of 87.7\% on the ImageNet-1k val set. Finally, our results show that the positional encoding,
a crucial component in existing Vision Transformers, can be safely removed in our model, simplifying the design for higher resolution vision tasks.
Tips:
CvT models are regular Vision Transformers, but trained with convolutions. They outperform the original model (ViT) when fine-tuned on ImageNet-1K and CIFAR-100.
You can check out demo notebooks regarding inference as well as fine-tuning on custom data here (you can just replace ViTFeatureExtractor by AutoImageProcessor and ViTForImageClassification by CvtForImageClassification).
The available checkpoints are either (1) pre-trained on ImageNet-22k (a collection of 14 million images and 22k classes) only, (2) also fine-tuned on ImageNet-22k or (3) also fine-tuned on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million
images and 1,000 classes).
This model was contributed by anugunj. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CvT.
Image Classification
CvtForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
CvtConfig
class transformers.CvtConfig
(
num_channels = 3
patch_sizes = [7, 3, 3]
patch_stride = [4, 2, 2]
patch_padding = [2, 1, 1]
embed_dim = [64, 192, 384]
num_heads = [1, 3, 6]
depth = [1, 2, 10]
mlp_ratio = [4.0, 4.0, 4.0]
attention_drop_rate = [0.0, 0.0, 0.0]
drop_rate = [0.0, 0.0, 0.0]
drop_path_rate = [0.0, 0.0, 0.1]
qkv_bias = [True, True, True]
cls_token = [False, False, True]
qkv_projection_method = ['dw_bn', 'dw_bn', 'dw_bn']
kernel_qkv = [3, 3, 3]
padding_kv = [1, 1, 1]
stride_kv = [2, 2, 2]
padding_q = [1, 1, 1]
stride_q = [1, 1, 1]
initializer_range = 0.02
layer_norm_eps = 1e-12
**kwargs
)
Parameters
num_channels (int, optional, defaults to 3) —
The number of input channels.
patch_sizes (List[int], optional, defaults to [7, 3, 3]) —
The kernel size of each encoder’s patch embedding.
patch_stride (List[int], optional, defaults to [4, 2, 2]) —
The stride size of each encoder’s patch embedding.
patch_padding (List[int], optional, defaults to [2, 1, 1]) —
The padding size of each encoder’s patch embedding.
embed_dim (List[int], optional, defaults to [64, 192, 384]) —
Dimension of each of the encoder blocks.
num_heads (List[int], optional, defaults to [1, 3, 6]) —
Number of attention heads for each attention layer in each block of the Transformer encoder.
depth (List[int], optional, defaults to [1, 2, 10]) —
The number of layers in each encoder block.
mlp_ratio (List[float], optional, defaults to [4.0, 4.0, 4.0]) —
Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the
encoder blocks.
attention_drop_rate (List[float], optional, defaults to [0.0, 0.0, 0.0]) —
The dropout ratio for the attention probabilities.
drop_rate (List[float], optional, defaults to [0.0, 0.0, 0.0]) —
The dropout ratio for the patch embeddings probabilities.
drop_path_rate (List[float], optional, defaults to [0.0, 0.0, 0.1]) —
The dropout probability for stochastic depth, used in the blocks of the Transformer encoder.
qkv_bias (List[bool], optional, defaults to [True, True, True]) —
Whether to add a bias to the query, key and value projections in the attention layers.
cls_token (List[bool], optional, defaults to [False, False, True]) —
Whether or not to add a classification token to the output of each of the last 3 stages.
qkv_projection_method (List[str], optional, defaults to ["dw_bn", "dw_bn", "dw_bn"]) —
The projection method for query, key and value. The default is depth-wise convolution with batch norm ("dw_bn");
for a linear projection use "avg".
kernel_qkv (List[int], optional, defaults to [3, 3, 3]) —
The kernel size for query, key and value in the attention layers.
padding_kv (List[int], optional, defaults to [1, 1, 1]) —
The padding size for key and value in the attention layers.
stride_kv (List[int], optional, defaults to [2, 2, 2]) —
The stride size for key and value in the attention layers.
padding_q (List[int], optional, defaults to [1, 1, 1]) —
The padding size for query in the attention layers.
stride_q (List[int], optional, defaults to [1, 1, 1]) —
The stride size for query in the attention layers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
This is the configuration class to store the configuration of a CvtModel. It is used to instantiate a CvT model
according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the CvT
microsoft/cvt-13 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import CvtConfig, CvtModel
# Initializing a CvT microsoft/cvt-13 style configuration
configuration = CvtConfig()
# Initializing a model (with random weights) from the microsoft/cvt-13 style configuration
model = CvtModel(configuration)
# Accessing the model configuration
configuration = model.config
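Beyond the defaults, the list-valued arguments documented above can be overridden per stage. The sketch below is illustrative only: it builds a smaller, randomly initialized three-stage variant with made-up values, not an official checkpoint configuration.
from transformers import CvtConfig, CvtModel
# a narrower, shallower CvT variant; each list entry configures one of the three stages
custom_config = CvtConfig(
    depth=[1, 1, 4],
    embed_dim=[48, 96, 192],
    num_heads=[1, 2, 4],
)
model = CvtModel(custom_config)
# rough parameter count of the custom variant
print(sum(p.numel() for p in model.parameters()))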
CvtModel
class transformers.CvtModel
(
config
add_pooling_layer = True
)
Parameters
config (CvtConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Cvt Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.cvt.modeling_cvt.BaseModelOutputWithCLSToken or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See CvtImageProcessor.__call__
for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.cvt.modeling_cvt.BaseModelOutputWithCLSToken or tuple(torch.FloatTensor)
A transformers.models.cvt.modeling_cvt.BaseModelOutputWithCLSToken or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CvtConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
cls_token_value (torch.FloatTensor of shape (batch_size, 1, hidden_size)) — Classification token at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
The CvtModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, CvtModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13")
model = CvtModel.from_pretrained("microsoft/cvt-13")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 384, 14, 14]
CvtForImageClassification
class transformers.CvtForImageClassification
(
config
)
Parameters
config (CvtConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Cvt Model transformer with an image classification head on top (a linear layer on top of the final hidden state of
the [CLS] token) e.g. for ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See CvtImageProcessor.__call__
for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CvtConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also
called feature maps) of the model at the output of each stage.
The CvtForImageClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, CvtForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13")
model = CvtForImageClassification.from_pretrained("microsoft/cvt-13")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tabby, tabby cat
TFCvtModel
class transformers.TFCvtModel
(
*args
**kwargs
)
Parameters
config (CvtConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Cvt Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TF 2.0 models accept two formats as inputs:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
This second option is useful when using the tf.keras.Model.fit method, which currently requires having all the
tensors in the first argument of the model call function: model(inputs). Both formats are illustrated in the sketch below.
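A minimal sketch of the two input formats with TFCvtModel, assuming a dummy 224x224 batch in the channels-first layout produced by the image processor:
import tensorflow as tf
from transformers import TFCvtModel
model = TFCvtModel.from_pretrained("microsoft/cvt-13")
pixel_values = tf.random.uniform((1, 3, 224, 224))  # dummy batch, channels-first like the image processor output
# format 1: keyword arguments, like the PyTorch models
outputs_kw = model(pixel_values=pixel_values)
# format 2: a single dict as the first positional argument (the form tf.keras.Model.fit expects)
outputs_dict = model({"pixel_values": pixel_values})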
call
(
pixel_values: tf.Tensor | None = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.models.cvt.modeling_tf_cvt.TFBaseModelOutputWithCLSToken or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray]; each example must have the shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See CvtImageProcessor.__call__
for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can only be used in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.models.cvt.modeling_tf_cvt.TFBaseModelOutputWithCLSToken or tuple(tf.Tensor)
A transformers.models.cvt.modeling_tf_cvt.TFBaseModelOutputWithCLSToken or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (CvtConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
cls_token_value (tf.Tensor of shape (batch_size, 1, hidden_size)) — Classification token at the output of the last layer of the model.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus
the initial embedding outputs.
The TFCvtModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, TFCvtModel
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13")
model = TFCvtModel.from_pretrained("microsoft/cvt-13")
inputs = image_processor(images=image, return_tensors="tf")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
TFCvtForImageClassification
class transformers.TFCvtForImageClassification
(
*args
**kwargs
)
Parameters
config (CvtConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Cvt Model transformer with an image classification head on top (a linear layer on top of the final hidden state of
the [CLS] token) e.g. for ImageNet.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TF 2.0 models accept two formats as inputs:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
This second option is useful when using the tf.keras.Model.fit method, which currently requires having all the
tensors in the first argument of the model call function: model(inputs).
call
(
pixel_values: tf.Tensor | None = None
labels: tf.Tensor | None = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFImageClassifierOutputWithNoAttention or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray]; each example must have the shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See CvtImageProcessor.__call__
for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can only be used in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFImageClassifierOutputWithNoAttention or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFImageClassifierOutputWithNoAttention or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (CvtConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for
the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called
feature maps) of the model at the output of each stage.
The TFCvtForImageClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, TFCvtForImageClassification
import tensorflow as tf
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13")
model = TFCvtForImageClassification.from_pretrained("microsoft/cvt-13")
inputs = image_processor(images=image, return_tensors="tf")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = tf.math.argmax(logits, axis=-1)[0]
print("Predicted class:", model.config.id2label[int(predicted_class_idx)])
DPR
Overview
Dense Passage Retrieval (DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. It was
introduced in Dense Passage Retrieval for Open-Domain Question Answering by
Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih.
The abstract from the paper is the following:
Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional
sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can
be practically implemented using dense representations alone, where embeddings are learned from a small number of
questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets,
our dense retriever outperforms a strong Lucene-BM25 system largely by 9%-19% absolute in terms of top-20 passage
retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA
benchmarks.
This model was contributed by lhoestq. The original code can be found here.
Tips:
DPR consists of three models:
Question encoder: encodes questions as vectors
Context encoder: encodes contexts (passages) as vectors
Reader: extracts the answer to the question from the retrieved contexts, along with a relevance score (high if the inferred span actually answers the question).
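Taken together, the question and context encoders implement the dual-encoder retrieval step. Below is a minimal, illustrative sketch using the single-nq checkpoints; the question and passages are made up for the example.
import torch
from transformers import (
    DPRContextEncoder,
    DPRContextEncoderTokenizer,
    DPRQuestionEncoder,
    DPRQuestionEncoderTokenizer,
)
q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
question = "What is the capital of France?"
passages = ["Paris is the capital and most populous city of France.",
            "Berlin is the capital of Germany."]
with torch.no_grad():
    q_emb = q_encoder(**q_tokenizer(question, return_tensors="pt")).pooler_output
    ctx_emb = ctx_encoder(**ctx_tokenizer(passages, padding=True, return_tensors="pt")).pooler_output
# DPR scores passages by the dot product between question and context embeddings
scores = q_emb @ ctx_emb.T
best_passage = passages[scores.argmax().item()]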
DPRConfig
class transformers.DPRConfig
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 0
position_embedding_type = 'absolute'
projection_dim: int = 0
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the DPR model. Defines the different tokens that can be represented by the input_ids
passed to the forward method of BertModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed into BertModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
projection_dim (int, optional, defaults to 0) —
Dimension of the projection for the context and question encoders. If it is set to zero (default), then no
projection is done.
DPRConfig is the configuration class to store the configuration of a DPRContextEncoder, DPRQuestionEncoder, or a
DPRReader. It is used to instantiate the components of the DPR model according to the specified arguments,
defining the model component architectures. Instantiating a configuration with the defaults will yield a similar
configuration to that of the DPRContextEncoder
facebook/dpr-ctx_encoder-single-nq-base
architecture.
This class is a subclass of BertConfig. Please check the superclass for the documentation of all kwargs.
Example:
from transformers import DPRConfig, DPRContextEncoder
# Initializing a DPR facebook/dpr-ctx_encoder-single-nq-base style configuration
configuration = DPRConfig()
# Initializing a model (with random weights) from the facebook/dpr-ctx_encoder-single-nq-base style configuration
model = DPRContextEncoder(configuration)
# Accessing the model configuration
configuration = model.config
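For instance, the projection_dim argument documented above adds a linear projection on top of the pooled output. The sketch below is illustrative, using an arbitrary projection size.
from transformers import DPRConfig, DPRQuestionEncoder
# projection_dim=0 (the default) disables the projection; a non-zero value adds a final Linear layer
configuration = DPRConfig(projection_dim=128)
model = DPRQuestionEncoder(configuration)  # randomly initialized encoder with 128-dim outputs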
DPRContextEncoderTokenizer
class transformers.DPRContextEncoderTokenizer
(
vocab_file
do_lower_case = True
do_basic_tokenize = True
never_split = None
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Construct a DPRContextEncoder tokenizer.
DPRContextEncoderTokenizer is identical to BertTokenizer and runs end-to-end tokenization: punctuation
splitting and wordpiece.
Refer to superclass BertTokenizer for usage examples and documentation concerning parameters.
DPRContextEncoderTokenizerFast
class transformers.DPRContextEncoderTokenizerFast
(
vocab_file = None
tokenizer_file = None
do_lower_case = True
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Construct a “fast” DPRContextEncoder tokenizer (backed by HuggingFace’s tokenizers library).
DPRContextEncoderTokenizerFast is identical to BertTokenizerFast and runs end-to-end tokenization:
punctuation splitting and wordpiece.
Refer to superclass BertTokenizerFast for usage examples and documentation concerning parameters.
DPRQuestionEncoderTokenizer
class transformers.DPRQuestionEncoderTokenizer
(
vocab_file
do_lower_case = True
do_basic_tokenize = True
never_split = None
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Constructs a DPRQuestionEncoder tokenizer.
DPRQuestionEncoderTokenizer is identical to BertTokenizer and runs end-to-end tokenization: punctuation
splitting and wordpiece.
Refer to superclass BertTokenizer for usage examples and documentation concerning parameters.
DPRQuestionEncoderTokenizerFast
class transformers.DPRQuestionEncoderTokenizerFast
(
vocab_file = None
tokenizer_file = None
do_lower_case = True
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Constructs a “fast” DPRQuestionEncoder tokenizer (backed by HuggingFace’s tokenizers library).
DPRQuestionEncoderTokenizerFast is identical to BertTokenizerFast and runs end-to-end tokenization:
punctuation splitting and wordpiece.
Refer to superclass BertTokenizerFast for usage examples and documentation concerning parameters.
DPRReaderTokenizer
class transformers.DPRReaderTokenizer
(
vocab_file
do_lower_case = True
do_basic_tokenize = True
never_split = None
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
→
Dict[str, List[List[int]]]
Parameters
questions (str or List[str]) —
The questions to be encoded. You can specify one question for many passages. In this case, the question
will be duplicated like [questions] * n_passages. Otherwise you have to specify as many questions as in
titles or texts.
titles (str or List[str]) —
The passages titles to be encoded. This can be a string or a list of strings if there are several passages.
texts (str or List[str]) —
The passages texts to be encoded. This can be a string or a list of strings if there are several passages.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence
is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) —
Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to
the maximum acceptable input length for the model if that argument is not provided. This will truncate
token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch
of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided. This will only truncate the first
sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided. This will only truncate the
second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional) —
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length
is required by one of the truncation/padding parameters. If the model has no specific maximum input
length (like XLNet) truncation/padding to a maximum length will be deactivated.
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_attention_mask (bool, optional) —
Whether or not to return the attention mask. If not set, will return the attention mask according to the
specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
Returns
Dict[str, List[List[int]]]
A dictionary with the following keys:
input_ids: List of token ids to be fed to a model.
attention_mask: List of indices specifying which tokens should be attended to by the model.
Construct a DPRReader tokenizer.
DPRReaderTokenizer is almost identical to BertTokenizer and runs end-to-end tokenization: punctuation
splitting and wordpiece. The difference is that it has three input strings: question, titles and texts that are
combined to be fed to the DPRReader model.
Refer to superclass BertTokenizer for usage examples and documentation concerning parameters.
Return a dictionary with the token ids of the input strings and other information to give to .decode_best_spans.
It converts the strings of a question and different passages (title and text) into a sequence of IDs (integers),
using the tokenizer and vocabulary. The resulting input_ids is a matrix of size (n_passages, sequence_length)
with the format:
[CLS] <question token ids> [SEP] <titles ids> [SEP] <texts ids>
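A short end-to-end sketch of this tokenizer together with the DPRReader model (documented elsewhere in this guide); it assumes the facebook/dpr-reader-single-nq-base checkpoint, and the question, title and text are made up for illustration.
from transformers import DPRReader, DPRReaderTokenizer
tokenizer = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-single-nq-base")
model = DPRReader.from_pretrained("facebook/dpr-reader-single-nq-base")
# one question scored against one (title, text) passage; lists allow several passages per question
encoded_inputs = tokenizer(
    questions=["What is love?"],
    titles=["Haddaway"],
    texts=["'What Is Love' is a song recorded by the artist Haddaway"],
    return_tensors="pt",
)
outputs = model(**encoded_inputs)
# pick the most likely answer span in the most relevant passage
predicted_spans = tokenizer.decode_best_spans(encoded_inputs, outputs)
print(predicted_spans[0].text)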
DPRReaderTokenizerFast
class transformers.DPRReaderTokenizerFast
(
vocab_file = None
tokenizer_file = None
do_lower_case = True
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
→
Dict[str, List[List[int]]]
Parameters
questions (str or List[str]) —
The questions to be encoded. You can specify one question for many passages. In this case, the question
will be duplicated like [questions] * n_passages. Otherwise you have to specify as many questions as in
titles or texts.
titles (str or List[str]) —
The passages titles to be encoded. This can be a string or a list of strings if there are several passages.
texts (str or List[str]) —
The passages texts to be encoded. This can be a string or a list of strings if there are several passages.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence
is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) —
Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to
the maximum acceptable input length for the model if that argument is not provided. This will truncate
token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch
of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided. This will only truncate the first
sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided. This will only truncate the
second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional) —
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length
is required by one of the truncation/padding parameters. If the model has no specific maximum input
length (like XLNet) truncation/padding to a maximum length will be deactivated.
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_attention_mask (bool, optional) —
Whether or not to return the attention mask. If not set, will return the attention mask according to the
specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
Returns
Dict[str, List[List[int]]]
A dictionary with the following keys:
input_ids: List of token ids to be fed to a model.
attention_mask: List of indices specifying which tokens should be attended to by the model.
Constructs a “fast” DPRReader tokenizer (backed by HuggingFace’s tokenizers library).
DPRReaderTokenizerFast is almost identical to BertTokenizerFast and runs end-to-end tokenization:
punctuation splitting and wordpiece. The difference is that it has three input strings: question, titles and texts,
which are combined and fed to the DPRReader model.
Refer to superclass BertTokenizerFast for usage examples and documentation concerning parameters.
Return a dictionary with the token ids of the input strings and other information to give to .decode_best_spans.
It converts the strings of a question and different passages (title and text) into a sequence of IDs (integers),
using the tokenizer and vocabulary. The resulting input_ids is a matrix of size (n_passages, sequence_length)
with the format:
[CLS] <question token ids> [SEP] <titles ids> [SEP] <texts ids>
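As a minimal sketch of the layout described above (the question, titles and texts below are made-up placeholders, not part of the reference), one question can be encoded against two passages like this:
from transformers import DPRReaderTokenizerFast

tokenizer = DPRReaderTokenizerFast.from_pretrained("facebook/dpr-reader-single-nq-base")

# One question duplicated across two illustrative passages.
encoded = tokenizer(
    questions="Who wrote Hamlet ?",
    titles=["Hamlet", "William Shakespeare"],
    texts=[
        "Hamlet is a tragedy written by William Shakespeare.",
        "William Shakespeare was an English playwright.",
    ],
    padding=True,
    return_tensors="pt",
)

# input_ids has shape (n_passages, sequence_length), here (2, sequence_length).
print(encoded["input_ids"].shape)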
DPR specific outputs
class transformers.models.dpr.modeling_dpr.DPRContextEncoderOutput
(
pooler_output: FloatTensor
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
pooler_output (torch.FloatTensor of shape (batch_size, embeddings_size)) —
The DPR encoder outputs the pooler_output that corresponds to the context representation. Last layer
hidden-state of the first token of the sequence (classification token) further processed by a Linear layer.
This output is to be used to embed contexts for nearest neighbors queries with questions embeddings.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Class for outputs of DPRContextEncoder.
class transformers.models.dpr.modeling_dpr.DPRQuestionEncoderOutput
(
pooler_output: FloatTensor
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
pooler_output (torch.FloatTensor of shape (batch_size, embeddings_size)) —
The DPR encoder outputs the pooler_output that corresponds to the question representation. Last layer
hidden-state of the first token of the sequence (classification token) further processed by a Linear layer.
This output is to be used to embed questions for nearest neighbors queries with context embeddings.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Class for outputs of DPRQuestionEncoder.
class transformers.DPRReaderOutput
(
start_logits: FloatTensor
end_logits: FloatTensor = None
relevance_logits: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
start_logits (torch.FloatTensor of shape (n_passages, sequence_length)) —
Logits of the start index of the span for each passage.
end_logits (torch.FloatTensor of shape (n_passages, sequence_length)) —
Logits of the end index of the span for each passage.
relevance_logits (torch.FloatTensor of shape (n_passages, )) —
Outputs of the QA classifier of the DPRReader that corresponds to the scores of each passage to answer the
question, compared to all the other passages.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Class for outputs of DPRReader.
DPRContextEncoder
class transformers.DPRContextEncoder
(
config: DPRConfig
)
Parameters
config (DPRConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare DPRContextEncoder transformer outputting pooler outputs as context representations.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.dpr.modeling_dpr.DPRContextEncoderOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. To match pretraining, DPR input sequence should be
formatted with [CLS] and [SEP] tokens as follows:
(a) For sequence pairs (for a pair title+text for example):
Returns
transformers.models.dpr.modeling_dpr.DPRContextEncoderOutput or tuple(torch.FloatTensor)
A transformers.models.dpr.modeling_dpr.DPRContextEncoderOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DPRConfig) and inputs.
pooler_output (torch.FloatTensor of shape (batch_size, embeddings_size)) — The DPR encoder outputs the pooler_output that corresponds to the context representation. Last layer
hidden-state of the first token of the sequence (classification token) further processed by a Linear layer.
This output is to be used to embed contexts for nearest neighbors queries with questions embeddings.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DPRContextEncoder forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
model = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
input_ids = tokenizer("Hello, is my dog cute ?", return_tensors="pt")["input_ids"]
embeddings = model(input_ids).pooler_output
DPRQuestionEncoder
class transformers.DPRQuestionEncoder
(
config: DPRConfig
)
Parameters
config (DPRConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare DPRQuestionEncoder transformer outputting pooler outputs as question representations.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.dpr.modeling_dpr.DPRQuestionEncoderOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. To match pretraining, DPR input sequence should be
formatted with [CLS] and [SEP] tokens as follows:
(a) For sequence pairs (for a pair title+text for example):
Returns
transformers.models.dpr.modeling_dpr.DPRQuestionEncoderOutput or tuple(torch.FloatTensor)
A transformers.models.dpr.modeling_dpr.DPRQuestionEncoderOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DPRConfig) and inputs.
pooler_output (torch.FloatTensor of shape (batch_size, embeddings_size)) — The DPR encoder outputs the pooler_output that corresponds to the question representation. Last layer
hidden-state of the first token of the sequence (classification token) further processed by a Linear layer.
This output is to be used to embed questions for nearest neighbors queries with context embeddings.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DPRQuestionEncoder forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
model = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
input_ids = tokenizer("Hello, is my dog cute ?", return_tensors="pt")["input_ids"]
embeddings = model(input_ids).pooler_output
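Because the question and context encoders embed into the same space, retrieval scores are typically computed as a dot product between a question embedding and the context embeddings. The following minimal sketch combines the two encoders shown above; the passages and question are illustrative placeholders, not part of the reference:
import torch
from transformers import (
    DPRContextEncoder,
    DPRContextEncoderTokenizer,
    DPRQuestionEncoder,
    DPRQuestionEncoderTokenizer,
)

q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")

# Placeholder passages to embed.
passages = ["Paris is the capital of France.", "The Nile is a river in Africa."]
ctx_inputs = ctx_tokenizer(passages, padding=True, return_tensors="pt")
ctx_embeddings = ctx_encoder(**ctx_inputs).pooler_output  # (n_passages, embeddings_size)

q_inputs = q_tokenizer("What is the capital of France ?", return_tensors="pt")
q_embedding = q_encoder(**q_inputs).pooler_output  # (1, embeddings_size)

# Dot-product similarity between the question and each passage embedding.
scores = torch.matmul(q_embedding, ctx_embeddings.T)
best_passage = passages[scores.argmax().item()]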
DPRReader
class transformers.DPRReader
(
config: DPRConfig
)
Parameters
config (DPRConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare DPRReader transformer outputting span predictions.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
output_attentions: bool = None
output_hidden_states: bool = None
return_dict = None
)
→
transformers.models.dpr.modeling_dpr.DPRReaderOutput or tuple(torch.FloatTensor)
Parameters
input_ids (Tuple[torch.LongTensor] of shapes (n_passages, sequence_length)) —
Indices of input sequence tokens in the vocabulary. It has to be a sequence triplet with 1) the question,
2) the passage titles and 3) the passage texts. To match pretraining, the DPR input_ids sequence should
be formatted with [CLS] and [SEP] tokens in the following format:
[CLS] <question token ids> [SEP] <titles ids> [SEP] <texts ids>
DPR is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right
rather than the left.
Indices can be obtained using DPRReaderTokenizer. See this class documentation for more details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (n_passages, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
inputs_embeds (torch.FloatTensor of shape (n_passages, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.dpr.modeling_dpr.DPRReaderOutput or tuple(torch.FloatTensor)
A transformers.models.dpr.modeling_dpr.DPRReaderOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DPRConfig) and inputs.
start_logits (torch.FloatTensor of shape (n_passages, sequence_length)) — Logits of the start index of the span for each passage.
end_logits (torch.FloatTensor of shape (n_passages, sequence_length)) — Logits of the end index of the span for each passage.
relevance_logits (torch.FloatTensor of shape (n_passages, )) — Outputs of the QA classifier of the DPRReader that corresponds to the scores of each passage to answer the
question, compared to all the other passages.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DPRReader forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import DPRReader, DPRReaderTokenizer
tokenizer = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-single-nq-base")
model = DPRReader.from_pretrained("facebook/dpr-reader-single-nq-base")
encoded_inputs = tokenizer(
    questions=["What is love ?"],
    titles=["Haddaway"],
    texts=["'What Is Love' is a song recorded by the artist Haddaway"],
    return_tensors="pt",
)
outputs = model(**encoded_inputs)
start_logits = outputs.start_logits
end_logits = outputs.end_logits
relevance_logits = outputs.relevance_logits
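As a minimal continuation of the example above, the raw logits can be turned into answer spans with the tokenizer's decode_best_spans helper mentioned in the DPRReaderTokenizer section; the num_spans and max_answer_length values below are arbitrary choices:
best_spans = tokenizer.decode_best_spans(
    encoded_inputs,
    outputs,
    num_spans=3,
    max_answer_length=30,
)
for span in best_spans:
    # Each prediction carries the decoded answer text and a passage relevance score.
    print(span.text, span.relevance_score)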
TFDPRContextEncoder
class transformers.TFDPRContextEncoder
(
*args
**kwargs
)
Parameters
config (DPRConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare DPRContextEncoder transformer outputting pooler outputs as context representations.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a TensorFlow tf.keras.Model
subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to
general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
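For illustration, the three formats above are interchangeable outside of Keras built-in methods; a minimal sketch with TFDPRContextEncoder (same checkpoint as in the example further down) could look like this:
from transformers import TFDPRContextEncoder, DPRContextEncoderTokenizer

tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
model = TFDPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base", from_pt=True)
inputs = tokenizer("Hello, is my dog cute ?", return_tensors="tf")

# 1) keyword arguments (PyTorch style)
out_kwargs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
# 2) a list of tensors in the order given in the docstring
out_list = model([inputs["input_ids"], inputs["attention_mask"]])
# 3) a dictionary keyed by the input names
out_dict = model({"input_ids": inputs["input_ids"], "attention_mask": inputs["attention_mask"]})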
call
(
input_ids = None
attention_mask: tf.Tensor | None = None
token_type_ids: tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
output_attentions = None
output_hidden_states = None
return_dict = None
training: bool = False
)
→
transformers.models.dpr.modeling_tf_dpr.TFDPRContextEncoderOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. To match pretraining, DPR input sequence should be
formatted with [CLS] and [SEP] tokens as follows:
(a) For sequence pairs (for a pair title+text for example):
Returns
transformers.models.dpr.modeling_tf_dpr.TFDPRContextEncoderOutput or tuple(tf.Tensor)
A transformers.models.dpr.modeling_tf_dpr.TFDPRContextEncoderOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DPRConfig) and inputs.
pooler_output (tf.Tensor of shape (batch_size, embeddings_size)) — The DPR encoder outputs the pooler_output that corresponds to the context representation. Last layer
hidden-state of the first token of the sequence (classification token) further processed by a Linear layer.
This output is to be used to embed contexts for nearest neighbors queries with questions embeddings.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDPRContextEncoder forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import TFDPRContextEncoder, DPRContextEncoderTokenizer
tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
model = TFDPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base", from_pt=True)
input_ids = tokenizer("Hello, is my dog cute ?", return_tensors="tf")["input_ids"]
embeddings = model(input_ids).pooler_output
TFDPRQuestionEncoder
class transformers.TFDPRQuestionEncoder
(
*args
**kwargs
)
Parameters
config (DPRConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare DPRQuestionEncoder transformer outputting pooler outputs as question representations.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a TensorFlow tf.keras.Model
subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to
general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids = None
attention_mask: tf.Tensor | None = None
token_type_ids: tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
output_attentions = None
output_hidden_states = None
return_dict = None
training: bool = False
)
→
transformers.models.dpr.modeling_tf_dpr.TFDPRQuestionEncoderOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. To match pretraining, DPR input sequence should be
formatted with [CLS] and [SEP] tokens as follows:
(a) For sequence pairs (for a pair title+text for example):
Returns
transformers.models.dpr.modeling_tf_dpr.TFDPRQuestionEncoderOutput or tuple(tf.Tensor)
A transformers.models.dpr.modeling_tf_dpr.TFDPRQuestionEncoderOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DPRConfig) and inputs.
pooler_output (tf.Tensor of shape (batch_size, embeddings_size)) — The DPR encoder outputs the pooler_output that corresponds to the question representation. Last layer
hidden-state of the first token of the sequence (classification token) further processed by a Linear layer.
This output is to be used to embed questions for nearest neighbors queries with context embeddings.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDPRQuestionEncoder forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import TFDPRQuestionEncoder, DPRQuestionEncoderTokenizer
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
model = TFDPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base", from_pt=True)
input_ids = tokenizer("Hello, is my dog cute ?", return_tensors="tf")["input_ids"]
embeddings = model(input_ids).pooler_output
TFDPRReader
class transformers.TFDPRReader
(
*args
**kwargs
)
Parameters
config (DPRConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare DPRReader transformer outputting span predictions.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a TensorFlow tf.keras.Model
subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to
general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids = None
attention_mask: tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
output_attentions: bool = None
output_hidden_states: bool = None
return_dict = None
training: bool = False
)
→
transformers.models.dpr.modeling_tf_dpr.TFDPRReaderOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shapes (n_passages, sequence_length)) —
Indices of input sequence tokens in the vocabulary. It has to be a sequence triplet with 1) the question,
2) the passage titles and 3) the passage texts. To match pretraining, the DPR input_ids sequence should
be formatted with [CLS] and [SEP] tokens in the following format:
[CLS] <question token ids> [SEP] <titles ids> [SEP] <texts ids>
DPR is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right
rather than the left.
Indices can be obtained using DPRReaderTokenizer. See this class documentation for more details.
attention_mask (Numpy array or tf.Tensor of shape (n_passages, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
inputs_embeds (Numpy array or tf.Tensor of shape (n_passages, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode, the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode, the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.models.dpr.modeling_tf_dpr.TFDPRReaderOutput or tuple(tf.Tensor)
A transformers.models.dpr.modeling_tf_dpr.TFDPRReaderOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DPRConfig) and inputs.
start_logits (tf.Tensor of shape (n_passages, sequence_length)) — Logits of the start index of the span for each passage.
end_logits (tf.Tensor of shape (n_passages, sequence_length)) — Logits of the end index of the span for each passage.
relevance_logits (tf.Tensor of shape (n_passages, )) — Outputs of the QA classifier of the DPRReader that corresponds to the scores of each passage to answer the
question, compared to all the other passages.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDPRReader forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import TFDPRReader, DPRReaderTokenizer
tokenizer = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-single-nq-base")
model = TFDPRReader.from_pretrained("facebook/dpr-reader-single-nq-base", from_pt=True)
encoded_inputs = tokenizer(
    questions=["What is love ?"],
    titles=["Haddaway"],
    texts=["'What Is Love' is a song recorded by the artist Haddaway"],
    return_tensors="tf",
)
outputs = model(encoded_inputs)
start_logits = outputs.start_logits
end_logits = outputs.end_logits
relevance_logits = outputs.relevance_logits
BridgeTower
Overview
The BridgeTower model was proposed in BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. The goal of this model is to build a
bridge between each uni-modal encoder and the cross-modal encoder to enable comprehensive and detailed interaction at each layer of the cross-modal encoder, thus achieving remarkable performance on various downstream tasks with almost negligible additional parameters and computational costs.
This paper has been accepted to the AAAI’23 conference.
The abstract from the paper is the following:
Vision-Language (VL) models with the TWO-TOWER architecture have dominated visual-language representation learning in recent years.
Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder.
Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BRIDGETOWER, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the crossmodal encoder.
This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BRIDGETOWER achieves state-of-the-art performance on various downstream vision-language tasks.
In particular, on the VQAv2 test-std set, BRIDGETOWER achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs.
Notably, when further scaling the model, BRIDGETOWER achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets.
BridgeTower architecture. Taken from the original paper.
Usage
BridgeTower consists of a visual encoder, a textual encoder, and a cross-modal encoder with multiple lightweight bridge layers.
The goal of this approach was to build a bridge between each uni-modal encoder and the cross-modal encoder to enable comprehensive and detailed interaction at each layer of the cross-modal encoder.
In principle, one can apply any visual, textual or cross-modal encoder in the proposed architecture.
The BridgeTowerProcessor wraps RobertaTokenizer and BridgeTowerImageProcessor into a single instance that both
encodes the text and prepares the images.
The following example shows how to run contrastive learning using BridgeTowerProcessor and BridgeTowerForContrastiveLearning.
from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
model = BridgeTowerForContrastiveLearning.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
# forward pass
scores = dict()
for text in texts:
    # prepare inputs
    encoding = processor(image, text, return_tensors="pt")
    outputs = model(**encoding)
    scores[text] = outputs
The following example shows how to run image-text retrieval using BridgeTowerProcessor and BridgeTowerForImageAndTextRetrieval.
from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
# forward pass
scores = dict()
for text in texts:
    # prepare inputs
    encoding = processor(image, text, return_tensors="pt")
    outputs = model(**encoding)
    scores[text] = outputs.logits[0, 1].item()
The following example shows how to run masked language modeling using BridgeTowerProcessor and BridgeTowerForMaskedLM.
from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000360943.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
text = "a <mask> looking out of the window"
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
# forward pass
outputs = model(**encoding)
results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist())
print(results)
.a cat looking out of the window.
This model was contributed by Anahita Bhiwandiwalla, Tiep Le and Shaoyen Tseng. The original code can be found here.
Tips:
This implementation of BridgeTower uses RobertaTokenizer to generate text embeddings and OpenAI’s CLIP/ViT model to compute visual embeddings.
Checkpoints for the pre-trained bridgetower-base model and for BridgeTower fine-tuned on masked language modeling and image-text matching are released.
Please refer to Table 5 of the paper for BridgeTower’s performance on image retrieval and other downstream tasks.
The PyTorch version of this model is only available in torch 1.10 and higher.
BridgeTowerConfig
class transformers.BridgeTowerConfig
(
share_cross_modal_transformer_layers = True
hidden_act = 'gelu'
hidden_size = 768
initializer_factor = 1
layer_norm_eps = 1e-05
share_link_tower_layers = False
link_tower_type = 'add'
num_attention_heads = 12
num_hidden_layers = 6
tie_word_embeddings = False
init_layernorm_from_vision_encoder = False
text_config = None
vision_config = None
**kwargs
)
Parameters
share_cross_modal_transformer_layers (bool, optional, defaults to True) —
Whether cross modal transformer layers are shared.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
initializer_factor (float, optional, defaults to 1) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
layer_norm_eps (float, optional, defaults to 1e-05) —
The epsilon used by the layer normalization layers.
share_link_tower_layers (bool, optional, defaults to False) —
Whether the bridge/link tower layers are shared.
link_tower_type (str, optional, defaults to "add") —
Type of the bridge/link layer.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 6) —
Number of hidden layers in the Transformer encoder.
tie_word_embeddings (bool, optional, defaults to False) —
Whether to tie input and output embeddings.
init_layernorm_from_vision_encoder (bool, optional, defaults to False) —
Whether to init LayerNorm from the vision encoder.
text_config (dict, optional) —
Dictionary of configuration options used to initialize BridgeTowerTextConfig.
vision_config (dict, optional) —
Dictionary of configuration options used to initialize BridgeTowerVisionConfig.
This is the configuration class to store the configuration of a BridgeTowerModel. It is used to instantiate a
BridgeTower model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the bridgetower-base
BridgeTower/bridgetower-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import BridgeTowerModel, BridgeTowerConfig
# Initializing a BridgeTower BridgeTower/bridgetower-base style configuration
configuration = BridgeTowerConfig()
# Initializing a model from the BridgeTower/bridgetower-base style configuration
model = BridgeTowerModel(configuration)
# Accessing the model configuration
configuration = model.config
from_text_vision_configs
(
text_config: BridgeTowerTextConfig
vision_config: BridgeTowerVisionConfig
**kwargs
)
Instantiate a BridgeTowerConfig (or a derived class) from a BridgeTower text model configuration and a BridgeTower vision model configuration. Returns:
BridgeTowerConfig: An instance of a configuration object
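A minimal sketch of building a combined configuration from the two sub-configurations (default values are used here; any of their fields could be customized instead):
from transformers import BridgeTowerConfig, BridgeTowerTextConfig, BridgeTowerVisionConfig

# Default text and vision sub-configurations.
text_config = BridgeTowerTextConfig()
vision_config = BridgeTowerVisionConfig()

# Combine them into a full BridgeTower configuration.
config = BridgeTowerConfig.from_text_vision_configs(text_config, vision_config)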
to_dict
(
)
→
Dict[str, any]
Returns
Dict[str, any]
Dictionary of all the attributes that make up this configuration instance.
Serializes this instance to a Python dictionary. Overrides the default to_dict().
BridgeTowerTextConfig
class transformers.BridgeTowerTextConfig
(
vocab_size = 50265
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
initializer_factor = 1
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 514
type_vocab_size = 1
layer_norm_eps = 1e-05
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
position_embedding_type = 'absolute'
use_cache = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50265) —
Vocabulary size of the text part of the model. Defines the number of different tokens that can be
represented by the inputs_ids passed when calling BridgeTowerModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 514) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 1) —
The vocabulary size of the token_type_ids.
initializer_factor (float, optional, defaults to 1) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
layer_norm_eps (float, optional, defaults to 1e-05) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
This is the configuration class to store the text configuration of a BridgeTowerModel. The default values here
are copied from RoBERTa. Instantiating a configuration with the defaults will yield a similar configuration to that
of the bridgetower-base BridgeTower/bridgetower-base
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import BridgeTowerTextConfig
# Initializing a BridgeTower BridgeTower/bridgetower-base style configuration for the text model
configuration = BridgeTowerTextConfig()
# Accessing the configuration
configuration
BridgeTowerVisionConfig
class transformers.BridgeTowerVisionConfig
(
hidden_size = 768
num_hidden_layers = 12
num_channels = 3
patch_size = 16
image_size = 288
initializer_factor = 1
layer_norm_eps = 1e-05
stop_gradient = False
share_layernorm = True
remove_last_layer = False
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in visual encoder model.
patch_size (int, optional, defaults to 16) —
The size (resolution) of each patch.
image_size (int, optional, defaults to 288) —
The size (resolution) of each image.
initializer_factor (float, optional, defaults to 1) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
layer_norm_eps (float, optional, defaults to 1e-05) —
The epsilon used by the layer normalization layers.
stop_gradient (bool, optional, defaults to False) —
Whether to stop gradient for training.
share_layernorm (bool, optional, defaults to True) —
Whether LayerNorm layers are shared.
remove_last_layer (bool, optional, defaults to False) —
Whether to remove the last layer from the vision encoder.
This is the configuration class to store the vision configuration of a BridgeTowerModel. Instantiating a
configuration with the defaults will yield a similar configuration to that of the bridgetower-base
BridgeTower/bridgetower-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import BridgeTowerVisionConfig
# Initializing a BridgeTower BridgeTower/bridgetower-base style configuration for the vision model
configuration = BridgeTowerVisionConfig()
# Accessing the configuration
configuration
BridgeTowerImageProcessor
class transformers.BridgeTowerImageProcessor
(
do_resize: bool = True
size: typing.Dict[str, int] = 288
size_divisor: int = 32
resample: Resampling = <Resampling.BICUBIC: 3>
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_center_crop: bool = True
do_pad: bool = True
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the
do_resize parameter in the preprocess method.
size (Dict[str, int], optional, defaults to 288) —
Resize the shorter side of the input to size["shortest_edge"]. The longer side will be limited to under
int((1333 / 800) * size["shortest_edge"]) while preserving the aspect ratio. Only has an effect if
do_resize is set to True. Can be overridden by the size parameter in the preprocess method.
size_divisor (int, optional, defaults to 32) —
The size by which to make sure both the height and width can be divided. Only has an effect if do_resize
is set to True. Can be overridden by the size_divisor parameter in the preprocess method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) —
Resampling filter to use if resizing the image. Only has an effect if do_resize is set to True. Can be
overridden by the resample parameter in the preprocess method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale
parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Only has an effect if do_rescale is set to True. Can be
overridden by the rescale_factor parameter in the preprocess method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
do_center_crop (bool, optional, defaults to True) —
Whether to center crop the image. Can be overridden by the do_center_crop parameter in the preprocess
method.
do_pad (bool, optional, defaults to True) —
Whether to pad the image to the (max_height, max_width) of the images in the batch. Can be overridden by
the do_pad parameter in the preprocess method.
Constructs a BridgeTower image processor.
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: typing.Optional[bool] = None
size: typing.Union[typing.Dict[str, int], NoneType] = None
size_divisor: typing.Optional[int] = None
resample: Resampling = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Optional[float] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_pad: typing.Optional[bool] = None
do_center_crop: typing.Optional[bool] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Controls the size of the image after resize. The shortest edge of the image is resized to
size["shortest_edge"] whilst preserving the aspect ratio. If the longest edge of this resized image
is > int(size["shortest_edge"] * (1333 / 800)), then the image is resized again to make the longest
edge equal to int(size["shortest_edge"] * (1333 / 800)).
size_divisor (int, optional, defaults to self.size_divisor) —
The image is resized to a size that is a multiple of this value.
resample (PILImageResampling, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. Only has an effect if do_resize is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values between [0 - 1].
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean to normalize the image by if do_normalize is set to True.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation to normalize the image by if do_normalize is set to True.
do_pad (bool, optional, defaults to self.do_pad) —
Whether to pad the image to the (max_height, max_width) in the batch. If True, a pixel mask is also
created and returned.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image. If the input size is smaller than crop_size along any edge, the
image is padded with 0’s and then center cropped.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Preprocess an image or batch of images.
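For illustration, a minimal sketch of calling the image processor on its own (the COCO image URL below is just an example input; any RGB PIL image works):
from transformers import BridgeTowerImageProcessor
from PIL import Image
import requests
# Load an example image (any RGB PIL image can be used here).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = BridgeTowerImageProcessor.from_pretrained("BridgeTower/bridgetower-base")
# With do_pad=True (the default), the output also contains a pixel mask alongside pixel_values.
encoding = image_processor(image, return_tensors="pt")
print(encoding["pixel_values"].shape)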
BridgeTowerProcessor
class transformers.BridgeTowerProcessor
(
image_processor
tokenizer
)
Parameters
image_processor (BridgeTowerImageProcessor) —
An instance of BridgeTowerImageProcessor. The image processor is a required input.
tokenizer (RobertaTokenizerFast) —
An instance of RobertaTokenizerFast. The tokenizer is a required input.
Constructs a BridgeTower processor which wraps a Roberta tokenizer and BridgeTower image processor into a single
processor.
BridgeTowerProcessor offers all the functionalities of BridgeTowerImageProcessor and
RobertaTokenizerFast. See the docstring of call() and
decode() for more information.
__call__
(
images
text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
stride: int = 0
pad_to_multiple_of: typing.Optional[int] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
**kwargs
)
This method uses BridgeTowerImageProcessor.call() method to prepare image(s) for the model, and
RobertaTokenizerFast.call() to prepare text for the model.
Please refer to the docstring of the above two methods for more information.
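As a rough sketch (not the canonical usage), the processor forwards images to BridgeTowerImageProcessor and text keyword arguments such as padding and truncation to the Roberta tokenizer, returning a single batch encoding:
from transformers import BridgeTowerProcessor
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base")
# Text kwargs such as padding/truncation are passed through to the Roberta tokenizer.
encoding = processor(image, "two cats sleeping on a couch", padding=True, return_tensors="pt")
print(sorted(encoding.keys()))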
BridgeTowerModel
class transformers.BridgeTowerModel
(
config
)
Parameters
config (BridgeTowerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare BridgeTower Model transformer outputting BridgeTowerModelOutput object without any specific head on top.
This model is a PyTorch torch.nn.Module (https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
pixel_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
image_embeds: typing.Optional[torch.FloatTensor] = None
image_token_type_idx: typing.Optional[int] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.LongTensor] = None
)
→
transformers.models.bridgetower.modeling_bridgetower.BridgeTowerModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input
IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using BridgeTowerImageProcessor. See
BridgeTowerImageProcessor.call() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
image_embeds (torch.FloatTensor of shape (batch_size, num_patches, hidden_size), optional) —
Optionally, instead of passing pixel_values, you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert pixel_values into patch embeddings.
image_token_type_idx (int, optional) —
The token type ids for images.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. For this model, if set to True, hidden states are returned as a list containing the hidden
states of the text, image, and cross-modal components respectively, i.e. (hidden_states_text, hidden_states_image, hidden_states_cross_modal), where hidden_states_text and hidden_states_image are lists of
tensors with the unimodal hidden states and hidden_states_cross_modal is a list of tuples containing the
cross_modal_text_hidden_states and cross_modal_image_hidden_states of each bridge layer.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels are currently not supported.
Returns
transformers.models.bridgetower.modeling_bridgetower.BridgeTowerModelOutput or tuple(torch.FloatTensor)
A transformers.models.bridgetower.modeling_bridgetower.BridgeTowerModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BridgeTowerConfig) and inputs.
text_features (torch.FloatTensor of shape (batch_size, text_sequence_length, hidden_size)) — Sequence of hidden-states at the text output of the last layer of the model.
image_features (torch.FloatTensor of shape (batch_size, image_sequence_length, hidden_size)) — Sequence of hidden-states at the image output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size x 2)) — Concatenation of the last-layer hidden state of the first token (classification token) of the text and image
sequences, respectively, after further processing through the layers used for the auxiliary pretraining tasks.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of
the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BridgeTowerModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from transformers import BridgeTowerProcessor, BridgeTowerModel
from PIL import Image
import requests
# prepare image and text
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "hello world"
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base")
model = BridgeTowerModel.from_pretrained("BridgeTower/bridgetower-base")
inputs = processor(image, text, return_tensors="pt")
outputs = model(**inputs)
outputs.keys()
odict_keys(['text_features', 'image_features', 'pooler_output'])
BridgeTowerForContrastiveLearning
class transformers.BridgeTowerForContrastiveLearning
(
config
)
Parameters
config (BridgeTowerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BridgeTower Model with an image-text contrastive head on top, computing the image-text contrastive loss.
This model is a PyTorch torch.nn.Module (https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
pixel_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
image_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = True
return_dict: typing.Optional[bool] = None
return_loss: typing.Optional[bool] = None
)
→
transformers.models.bridgetower.modeling_bridgetower.BridgeTowerContrastiveOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input
IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using BridgeTowerImageProcessor. See
BridgeTowerImageProcessor.call() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
image_embeds (torch.FloatTensor of shape (batch_size, num_patches, hidden_size), optional) —
Optionally, instead of passing pixel_values, you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert pixel_values into patch embeddings.
image_token_type_idx (int, optional) —
The token type ids for images.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
return_loss (bool, optional) —
Whether or not to return the contrastive loss.
Returns
transformers.models.bridgetower.modeling_bridgetower.BridgeTowerContrastiveOutput or tuple(torch.FloatTensor)
A transformers.models.bridgetower.modeling_bridgetower.BridgeTowerContrastiveOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BridgeTowerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Image-text contrastive loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
text_embeds (torch.FloatTensor, optional, returned when model is initialized with with_projection=True) — The text embeddings obtained by applying the projection layer to the pooler_output.
image_embeds (torch.FloatTensor, optional, returned when model is initialized with with_projection=True) — The image embeddings obtained by applying the projection layer to the pooler_output.
cross_embeds (torch.FloatTensor, optional, returned when model is initialized with with_projection=True) — The text-image cross-modal embeddings obtained by applying the projection layer to the pooler_output.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of
the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BridgeTowerForContrastiveLearning forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning
import requests
from PIL import Image
import torch
image_urls = [
... "https://farm4.staticflickr.com/3395/3428278415_81c3e27f15_z.jpg",
... "http://images.cocodataset.org/val2017/000000039769.jpg",
... ]
texts = ["two dogs in a car", "two cats sleeping on a couch"]
images = [Image.open(requests.get(url, stream=True).raw) for url in image_urls]
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
model = BridgeTowerForContrastiveLearning.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
inputs = processor(images, texts, padding=True, return_tensors="pt")
loss = model(**inputs, return_loss=True).loss
inputs = processor(images, texts[::-1], padding=True, return_tensors="pt")
loss_swapped = model(**inputs, return_loss=True).loss
print("Loss", round(loss.item(), 4))
Loss 0.0019
print("Loss with swapped images", round(loss_swapped.item(), 4))
Loss with swapped images 2.126
BridgeTowerForMaskedLM
class transformers.BridgeTowerForMaskedLM
(
config
)
Parameters
config (BridgeTowerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BridgeTower Model with a language modeling head on top as done during pretraining.
This model is a PyTorch torch.nn.Module (https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
pixel_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
image_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.LongTensor] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input
IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using BridgeTowerImageProcessor. See
BridgeTowerImageProcessor.call() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
image_embeds (torch.FloatTensor of shape (batch_size, num_patches, hidden_size), optional) —
Optionally, instead of passing pixel_values, you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert pixel_values into patch embeddings.
image_token_type_idx (int, optional) —
The token type ids for images.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BridgeTowerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BridgeTowerForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000360943.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
text = "a <mask> looking out of the window"
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
# forward pass
outputs = model(**encoding)
results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist())
print(results)
.a cat looking out of the window.
BridgeTowerForImageAndTextRetrieval
class transformers.BridgeTowerForImageAndTextRetrieval
(
config
)
Parameters
config (BridgeTowerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BridgeTower Model transformer with a classifier head on top (a linear layer on top of the final hidden state of the
[CLS] token) for image-to-text matching.
This model is a PyTorch torch.nn.Module (https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
pixel_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
image_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.LongTensor] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input
IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using BridgeTowerImageProcessor. See
BridgeTowerImageProcessor.call() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
image_embeds (torch.FloatTensor of shape (batch_size, num_patches, hidden_size), optional) —
Optionally, instead of passing pixel_values, you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert pixel_values into patch embeddings.
image_token_type_idx (int, optional) —
The token type ids for images.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, 1), optional) —
Labels for computing the image-text matching loss. 0 means the pairs don’t match and 1 means they match.
The pairs with 0 will be skipped for calculation.
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BridgeTowerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BridgeTowerForImageAndTextRetrieval forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base-itm-mlm")
# forward pass
scores = dict()
for text in texts:
... # prepare inputs
... encoding = processor(image, text, return_tensors="pt")
... outputs = model(**encoding)
... scores[text] = outputs.logits[0, 1].item()
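As an illustrative follow-up (not part of the original example), the scores collected above can be compared directly to pick the caption that best matches the image:
# Pick the caption with the highest image-text matching score.
best_text = max(scores, key=scores.get)
print(best_text)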
XLM
Overview
The XLM model was proposed in Cross-lingual Language Model Pretraining by
Guillaume Lample and Alexis Conneau. It’s a transformer pretrained using one of the following objectives:
a causal language modeling (CLM) objective (next token prediction),
a masked language modeling (MLM) objective (BERT-like), or
a Translation Language Modeling (TLM) objective (an extension of BERT’s MLM to multiple language inputs)
The abstract from the paper is the following:
Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding.
In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We
propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual
data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain
state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our
approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we
obtain 34.3 BLEU on WMT’16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised
machine translation, we obtain a new state of the art of 38.5 BLEU on WMT’16 Romanian-English, outperforming the
previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available.
Tips:
XLM has many different checkpoints, which were trained using different objectives: CLM, MLM or TLM. Make sure to
select the correct objective for your task (e.g. MLM checkpoints are not suitable for generation).
XLM has multilingual checkpoints which leverage a specific lang parameter. Check out the multi-lingual page for more information.
A transformer model trained on several languages. There are three different types of training for this model and the library provides checkpoints for all of them:
Causal language modeling (CLM) which is the traditional autoregressive training (so this model could be in the previous section as well). One of the languages is selected for each training sample, and the model input is a sentence of 256 tokens, that may span over several documents in one of those languages.
Masked language modeling (MLM) which is like RoBERTa. One of the languages is selected for each training sample, and the model input is a sentence of 256 tokens, that may span over several documents in one of those languages, with dynamic masking of the tokens.
A combination of MLM and translation language modeling (TLM). This consists of concatenating a sentence in two different languages, with random masking. To predict one of the masked tokens, the model can use both the surrounding context in language 1 and the context given by language 2.
This model was contributed by thomwolf. The original code can be found here.
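For the multilingual checkpoints mentioned in the tips above, here is a minimal sketch of passing per-token language IDs through the langs argument (the xlm-clm-enfr-1024 checkpoint and the "en" entry of lang2id are illustrative assumptions; see the multilingual documentation for the full recipe):
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel
# Illustrative multilingual checkpoint; any XLM checkpoint with use_lang_emb=True works similarly.
tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024")
model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024")
inputs = tokenizer("Wikipedia was used to", return_tensors="pt")
# Build a langs tensor of the same shape as input_ids, filled with the English language id
# (the language-name-to-id mapping is set automatically for pretrained vocabularies).
english_id = tokenizer.lang2id["en"]
langs = torch.full_like(inputs["input_ids"], english_id)
outputs = model(**inputs, langs=langs)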
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
XLMConfig
class transformers.XLMConfig
(
vocab_size = 30145
emb_dim = 2048
n_layers = 12
n_heads = 16
dropout = 0.1
attention_dropout = 0.1
gelu_activation = True
sinusoidal_embeddings = False
causal = False
asm = False
n_langs = 1
use_lang_emb = True
max_position_embeddings = 512
embed_init_std = 0.02209708691207961
layer_norm_eps = 1e-12
init_std = 0.02
bos_index = 0
eos_index = 1
pad_index = 2
unk_index = 3
mask_index = 5
is_encoder = True
summary_type = 'first'
summary_use_proj = True
summary_activation = None
summary_proj_to_labels = True
summary_first_dropout = 0.1
start_n_top = 5
end_n_top = 5
mask_token_id = 0
lang_id = 0
pad_token_id = 2
bos_token_id = 0
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30145) —
Vocabulary size of the BERT model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling XLMModel or TFXLMModel.
emb_dim (int, optional, defaults to 2048) —
Dimensionality of the encoder layers and the pooler layer.
n_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
n_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout probability for the attention mechanism.
gelu_activation (bool, optional, defaults to True) —
Whether or not to use gelu for the activations instead of relu.
sinusoidal_embeddings (bool, optional, defaults to False) —
Whether or not to use sinusoidal positional embeddings instead of absolute positional embeddings.
causal (bool, optional, defaults to False) —
Whether or not the model should behave in a causal manner. Causal models use a triangular attention mask in
order to only attend to the left-side context instead of a bidirectional context.
asm (bool, optional, defaults to False) —
Whether or not to use an adaptive log softmax projection layer instead of a linear layer for the prediction
layer.
n_langs (int, optional, defaults to 1) —
The number of languages the model handles. Set to 1 for monolingual models.
use_lang_emb (bool, optional, defaults to True) —
Whether to use language embeddings. Some models use additional language embeddings, see the multilingual
models page for information
on how to use them.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
embed_init_std (float, optional, defaults to 2048^-0.5) —
The standard deviation of the truncated_normal_initializer for initializing the embedding matrices.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices except the
embedding matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
bos_index (int, optional, defaults to 0) —
The index of the beginning of sentence token in the vocabulary.
eos_index (int, optional, defaults to 1) —
The index of the end of sentence token in the vocabulary.
pad_index (int, optional, defaults to 2) —
The index of the padding token in the vocabulary.
unk_index (int, optional, defaults to 3) —
The index of the unknown token in the vocabulary.
mask_index (int, optional, defaults to 5) —
The index of the masking token in the vocabulary.
is_encoder (bool, optional, defaults to True) —
Whether or not the initialized model should be a transformer encoder or decoder as seen in Vaswani et al.
summary_type (str, optional, defaults to "first") —
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
Has to be one of the following options:
"last": Take the last token hidden state (like XLNet).
"first": Take the first token hidden state (like BERT).
"mean": Take the mean of all tokens hidden states.
"cls_index": Supply a Tensor of classification token position (like GPT/GPT-2).
"attn": Not implemented now, use multi-head attention.
summary_use_proj (bool, optional, defaults to True) —
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
Whether or not to add a projection after the vector extraction.
summary_activation (str, optional) —
Argument used when doing sequence summary. Used in the sequence classification and multiple choice models.
Pass "tanh" for a tanh activation to the output, any other value will result in no activation.
summary_proj_to_labels (bool, optional, defaults to True) —
Used in the sequence classification and multiple choice models.
Whether the projection outputs should have config.num_labels or config.hidden_size classes.
summary_first_dropout (float, optional, defaults to 0.1) —
Used in the sequence classification and multiple choice models.
The dropout ratio to be used after the projection and activation.
start_n_top (int, optional, defaults to 5) —
Used in the SQuAD evaluation script.
end_n_top (int, optional, defaults to 5) —
Used in the SQuAD evaluation script.
mask_token_id (int, optional, defaults to 0) —
Model agnostic parameter to identify masked tokens when generating text in an MLM context.
lang_id (int, optional, defaults to 0) —
The ID of the language used by the model. This parameter is used when generating text in a given language.
This is the configuration class to store the configuration of a XLMModel or a TFXLMModel. It is used to
instantiate a XLM model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the
xlm-mlm-en-2048 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
Copied
from transformers import XLMConfig, XLMModel
# Initializing a XLM configuration
configuration = XLMConfig()
# Initializing a model (with random weights) from the configuration
model = XLMModel(configuration)
# Accessing the model configuration
configuration = model.config
XLMTokenizer
class transformers.XLMTokenizer
(
vocab_file
merges_file
unk_token = '<unk>'
bos_token = '<s>'
sep_token = '</s>'
pad_token = '<pad>'
cls_token = '</s>'
mask_token = '<special1>'
additional_special_tokens = ['<special0>', '<special1>', '<special2>', '<special3>', '<special4>', '<special5>', '<special6>', '<special7>', '<special8>', '<special9>']
lang2id = None
id2lang = None
do_lowercase_and_remove_accent = True
**kwargs
)
Parameters
vocab_file (str) —
Vocabulary file.
merges_file (str) —
Merges file.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "</s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "<special1>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
additional_special_tokens (List[str], optional, defaults to ["<special0>","<special1>","<special2>","<special3>","<special4>","<special5>","<special6>","<special7>","<special8>","<special9>"]) —
List of additional special tokens.
lang2id (Dict[str, int], optional) —
Dictionary mapping languages string identifiers to their IDs.
id2lang (Dict[int, str], optional) —
Dictionary mapping language IDs to their string identifiers.
do_lowercase_and_remove_accent (bool, optional, defaults to True) —
Whether to lowercase and remove accents when tokenizing.
Construct an XLM tokenizer. Based on Byte-Pair Encoding. The tokenization process is the following:
Moses preprocessing and tokenization for most supported languages.
Language specific tokenization for Chinese (Jieba), Japanese (KyTea) and Thai (PyThaiNLP).
Optionally lowercases and normalizes all inputs text.
The argument special_tokens and the function set_special_tokens can be used to add additional symbols (like
"classify") to a vocabulary.
The lang2id attribute maps the languages supported by the model with their IDs if provided (automatically set
for pretrained vocabularies).
The id2lang attribute does the reverse mapping if provided (automatically set for pretrained vocabularies).
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. An XLM sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s> B </s>
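For example, a small sketch (assuming the xlm-mlm-en-2048 checkpoint used elsewhere on this page) that inspects the added special tokens:
from transformers import XLMTokenizer
tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("how are you"))
# Single sequence: <s> X </s>; pair of sequences: <s> A </s> B </s>
single = tokenizer.build_inputs_with_special_tokens(ids_a)
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
print(tokenizer.convert_ids_to_tokens(single))
print(tokenizer.convert_ids_to_tokens(pair))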
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLM sequence
pair mask has the following format:
Copied
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
XLM specific outputs
class transformers.models.xlm.modeling_xlm.XLMForQuestionAnsweringOutput
(
loss: typing.Optional[torch.FloatTensor] = None
start_top_log_probs: typing.Optional[torch.FloatTensor] = None
start_top_index: typing.Optional[torch.LongTensor] = None
end_top_log_probs: typing.Optional[torch.FloatTensor] = None
end_top_index: typing.Optional[torch.LongTensor] = None
cls_logits: typing.Optional[torch.FloatTensor] = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned if both start_positions and end_positions are provided) —
Classification loss as the sum of start token, end token (and is_impossible if provided) classification
losses.
start_top_log_probs (torch.FloatTensor of shape (batch_size, config.start_n_top), optional, returned if start_positions or end_positions is not provided) —
Log probabilities for the top config.start_n_top start token possibilities (beam-search).
start_top_index (torch.LongTensor of shape (batch_size, config.start_n_top), optional, returned if start_positions or end_positions is not provided) —
Indices for the top config.start_n_top start token possibilities (beam-search).
end_top_log_probs (torch.FloatTensor of shape (batch_size, config.start_n_top * config.end_n_top), optional, returned if start_positions or end_positions is not provided) —
Log probabilities for the top config.start_n_top * config.end_n_top end token possibilities
(beam-search).
end_top_index (torch.LongTensor of shape (batch_size, config.start_n_top * config.end_n_top), optional, returned if start_positions or end_positions is not provided) —
Indices for the top config.start_n_top * config.end_n_top end token possibilities (beam-search).
cls_logits (torch.FloatTensor of shape (batch_size,), optional, returned if start_positions or end_positions is not provided) —
Log probabilities for the is_impossible label of the answers.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Base class for outputs of question answering models using a SquadHead.
XLMModel
class transformers.XLMModel
(
config
)
Parameters
config (XLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare XLM Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
langs: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
lengths: typing.Optional[torch.Tensor] = None
cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (torch.LongTensor of shape (batch_size, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
languages ids which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (torch.LongTensor of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)].
cache (Dict[str, torch.FloatTensor], optional) —
Dictionary string to torch.FloatTensor that contains precomputed hidden states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLMModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, XLMModel
import torch
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = XLMModel.from_pretrained("xlm-mlm-en-2048")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
XLMWithLMHeadModel
class transformers.XLMWithLMHeadModel
(
config
)
Parameters
config (XLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The XLM Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
langs: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
lengths: typing.Optional[torch.Tensor] = None
cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (torch.LongTensor of shape (batch_size, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
languages ids which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (torch.LongTensor of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)].
cache (Dict[str, torch.FloatTensor], optional) —
Dictionary string to torch.FloatTensor that contains precomputed hidden states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100
are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size] (see the short sketch after the example below).
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLMWithLMHeadModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMWithLMHeadModel
import torch
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = XLMWithLMHeadModel.from_pretrained("xlm-mlm-en-2048")
inputs = tokenizer("The capital of France is <special1>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of <special1>
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-<special1> tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
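As the labels description above notes, the labels can simply reuse input_ids, and positions set to -100 are ignored when computing the loss. A minimal sketch of that pattern (masking padding positions only matters for batched, padded inputs):
from transformers import AutoTokenizer, XLMWithLMHeadModel
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = XLMWithLMHeadModel.from_pretrained("xlm-mlm-en-2048")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = inputs["input_ids"].clone()
labels[inputs["attention_mask"] == 0] = -100  # ignore padding positions in the loss
loss = model(**inputs, labels=labels).loss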
XLMForSequenceClassification
class transformers.XLMForSequenceClassification
( config )
Parameters
config (XLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM Model with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g.
for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
langs: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
lengths: typing.Optional[torch.Tensor] = None
cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (torch.LongTensor of shape (batch_size, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
languages ids which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (torch.LongTensor of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)].
cache (Dict[str, torch.FloatTensor], optional) —
Dictionary string to torch.FloatTensor that contains precomputed hidden states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy). A short regression sketch follows the examples below.
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLMForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, XLMForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = XLMForSequenceClassification.from_pretrained("xlm-mlm-en-2048")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = XLMForSequenceClassification.from_pretrained("xlm-mlm-en-2048", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, XLMForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = XLMForSequenceClassification.from_pretrained("xlm-mlm-en-2048", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = XLMForSequenceClassification.from_pretrained(
    "xlm-mlm-en-2048", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
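The labels description above also covers the regression case: when config.num_labels == 1 a mean-squared-error loss is used and the targets are floats. A minimal sketch under that assumption:
import torch
from transformers import AutoTokenizer, XLMForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = XLMForSequenceClassification.from_pretrained("xlm-mlm-en-2048", num_labels=1)  # regression head
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([0.5])  # one float target per example in the batch
loss = model(**inputs, labels=labels).loss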
XLMForMultipleChoice
class transformers.XLMForMultipleChoice
( config, *inputs, **kwargs )
Parameters
config (XLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
langs: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
lengths: typing.Optional[torch.Tensor] = None
cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
languages ids which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (torch.LongTensor of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)].
cache (Dict[str, torch.FloatTensor], optional) —
Dictionary string to torch.FloatTensor that contains precomputed hidden states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLMForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = XLMForMultipleChoice.from_pretrained("xlm-mlm-en-2048")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
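Because the returned logits have shape (batch_size, num_choices), the model's preferred choice is simply the argmax over the last dimension. Continuing directly from the example above:
predicted_choice = logits.argmax(dim=-1).item()  # 0 -> choice0, 1 -> choice1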
XLMForTokenClassification
class transformers.XLMForTokenClassification
( config )
Parameters
config (XLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
langs: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
lengths: typing.Optional[torch.Tensor] = None
cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (torch.LongTensor of shape (batch_size, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
languages ids which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (torch.LongTensor of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)].
cache (Dict[str, torch.FloatTensor], optional) —
Dictionary string to torch.FloatTensor that contains precomputed hidden states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLMForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = XLMForTokenClassification.from_pretrained("xlm-mlm-en-2048")
inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
XLMForQuestionAnsweringSimple
class transformers.XLMForQuestionAnsweringSimple
( config )
Parameters
config (XLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
langs: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
lengths: typing.Optional[torch.Tensor] = None
cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (torch.LongTensor of shape (batch_size, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
languages ids which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (torch.LongTensor of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)].
cache (Dict[str, torch.FloatTensor], optional) —
Dictionary string to torch.FloatTensor that contains precomputed hidden states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLMForQuestionAnsweringSimple forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMForQuestionAnsweringSimple
import torch
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = XLMForQuestionAnsweringSimple.from_pretrained("xlm-mlm-en-2048")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
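To turn the predicted span back into text, the token slice can be decoded with the tokenizer. Continuing directly from the example above:
answer = tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)  # e.g. "nice puppet" if the span is correct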
XLMForQuestionAnswering
class transformers.XLMForQuestionAnswering
( config )
Parameters
config (XLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM Model with a beam-search span classification head on top for extractive question-answering tasks like SQuAD
(linear layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
langs: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
lengths: typing.Optional[torch.Tensor] = None
cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
is_impossible: typing.Optional[torch.Tensor] = None
cls_index: typing.Optional[torch.Tensor] = None
p_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.models.xlm.modeling_xlm.XLMForQuestionAnsweringOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (torch.LongTensor of shape (batch_size, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
languages ids which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (torch.LongTensor of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)].
cache (Dict[str, torch.FloatTensor], optional) —
Dictionary string to torch.FloatTensor that contains precomputed hidden states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
is_impossible (torch.LongTensor of shape (batch_size,), optional) —
Labels for whether a question has an answer or no answer (SQuAD 2.0).
cls_index (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the classification token to use as input for computing the plausibility of the
answer.
p_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Optional mask of tokens which can’t be in answers (e.g. [CLS], [PAD], …). 1.0 means the token should be
masked; 0.0 means the token is not masked.
Returns
transformers.models.xlm.modeling_xlm.XLMForQuestionAnsweringOutput or tuple(torch.FloatTensor)
A transformers.models.xlm.modeling_xlm.XLMForQuestionAnsweringOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned if both start_positions and end_positions are provided) — Classification loss as the sum of start token, end token (and is_impossible if provided) classification
losses.
start_top_log_probs (torch.FloatTensor of shape (batch_size, config.start_n_top), optional, returned if start_positions or end_positions is not provided) — Log probabilities for the top config.start_n_top start token possibilities (beam-search).
start_top_index (torch.LongTensor of shape (batch_size, config.start_n_top), optional, returned if start_positions or end_positions is not provided) — Indices for the top config.start_n_top start token possibilities (beam-search).
end_top_log_probs (torch.FloatTensor of shape (batch_size, config.start_n_top * config.end_n_top), optional, returned if start_positions or end_positions is not provided) — Log probabilities for the top config.start_n_top * config.end_n_top end token possibilities
(beam-search).
end_top_index (torch.LongTensor of shape (batch_size, config.start_n_top * config.end_n_top), optional, returned if start_positions or end_positions is not provided) — Indices for the top config.start_n_top * config.end_n_top end token possibilities (beam-search).
cls_logits (torch.FloatTensor of shape (batch_size,), optional, returned if start_positions or end_positions is not provided) — Log probabilities for the is_impossible label of the answers.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLMForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = XLMForQuestionAnswering.from_pretrained("xlm-mlm-en-2048")
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(input_ids, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
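When start_positions and end_positions are not provided, the model returns the beam-search fields documented above instead of a loss. A minimal sketch, reusing input_ids and model from the example above, that only inspects their shapes:
with torch.no_grad():
    outputs = model(input_ids)
print(outputs.start_top_log_probs.shape)  # (batch_size, config.start_n_top)
print(outputs.start_top_index.shape)  # (batch_size, config.start_n_top)
print(outputs.end_top_log_probs.shape)  # (batch_size, config.start_n_top * config.end_n_top)
print(outputs.cls_logits.shape)  # (batch_size,)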
TFXLMModel
class transformers.TFXLMModel
( *args, **kwargs )
Parameters
config (XLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare XLM Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
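A minimal sketch of the three formats listed above, using the xlm-mlm-en-2048 checkpoint that appears in the examples on this page:
from transformers import AutoTokenizer, TFXLMModel
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = TFXLMModel.from_pretrained("xlm-mlm-en-2048")
encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")
# 1. a single tensor with input_ids only
outputs = model(encoding["input_ids"])
# 2. a list of tensors, in the order given in the docstring
outputs = model([encoding["input_ids"], encoding["attention_mask"]])
# 3. a dictionary mapping input names to tensors
outputs = model({"input_ids": encoding["input_ids"], "attention_mask": encoding["attention_mask"]})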
call
(
input_ids = None
attention_mask = None
langs = None
token_type_ids = None
position_ids = None
lengths = None
cache = None
head_mask = None
inputs_embeds = None
output_attentions = None
output_hidden_states = None
return_dict = None
training = False
) → transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
languages ids which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (tf.Tensor or Numpy array of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)].
cache (Dict[str, tf.Tensor], optional) —
Dictionary string to tf.Tensor that contains precomputed hidden states (key and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLMConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFXLMModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXLMModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = TFXLMModel.from_pretrained("xlm-mlm-en-2048")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
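A hedged extension of the example above (not part of the original snippet): in eager mode you can request the optional outputs explicitly, and, for multilingual checkpoints whose config populates lang2id, pass a langs tensor (the "en" key below is only an illustration and may not exist for every checkpoint):
outputs = model(**inputs, output_attentions=True, output_hidden_states=True)
attentions = outputs.attentions        # tuple of tf.Tensor, one per layer
hidden_states = outputs.hidden_states  # tuple of tf.Tensor, embedding output + one per layer
# Illustrative only: a langs tensor for an all-English batch, if the config defines lang2id
if getattr(model.config, "lang2id", None) and "en" in model.config.lang2id:
    langs = tf.fill(tf.shape(inputs["input_ids"]), model.config.lang2id["en"])
    outputs = model(**inputs, langs=langs)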
TFXLMWithLMHeadModel
class transformers.TFXLMWithLMHeadModel( *args, **kwargs )
Parameters
config (XLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The XLM Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
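For example (a minimal sketch, not taken from the original docs), the keyword format and the dictionary format are interchangeable when calling the model directly:
from transformers import AutoTokenizer, TFXLMWithLMHeadModel
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = TFXLMWithLMHeadModel.from_pretrained("xlm-mlm-en-2048")
batch = tokenizer(["Hello world", "Bonjour"], padding=True, return_tensors="tf")
out_keywords = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"])  # keyword arguments
out_dict = model(dict(batch))  # a dict in the first positional argument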
call( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, langs: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, lengths: np.ndarray | tf.Tensor | None = None, cache: Optional[Dict[str, tf.Tensor]] = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: bool = False ) → transformers.models.xlm.modeling_tf_xlm.TFXLMWithLMHeadModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
language ids, which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (tf.Tensor or Numpy array of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)].
cache (Dict[str, tf.Tensor], optional) —
Dictionary string to tf.Tensor that contains precomputed hidden states (keys and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.models.xlm.modeling_tf_xlm.TFXLMWithLMHeadModelOutput or tuple(tf.Tensor)
A transformers.models.xlm.modeling_tf_xlm.TFXLMWithLMHeadModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLMConfig) and inputs.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFXLMWithLMHeadModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXLMWithLMHeadModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = TFXLMWithLMHeadModel.from_pretrained("xlm-mlm-en-2048")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
logits = outputs.logits
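As a hedged follow-up (assuming you want the most likely vocabulary token at each position rather than the raw scores):
predicted_ids = tf.math.argmax(logits, axis=-1)  # shape (batch_size, sequence_length)
predicted_tokens = tokenizer.batch_decode(predicted_ids.numpy().tolist())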
TFXLMForSequenceClassification
class transformers.TFXLMForSequenceClassification( *args, **kwargs )
Parameters
config (XLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM Model with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g.
for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, langs: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, lengths: np.ndarray | tf.Tensor | None = None, cache: Optional[Dict[str, tf.Tensor]] = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: bool = False ) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
language ids, which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (tf.Tensor or Numpy array of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)].
cache (Dict[str, tf.Tensor], optional) —
Dictionary string to tf.Tensor that contains precomputed hidden states (keys and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLMConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFXLMForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXLMForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = TFXLMForSequenceClassification.from_pretrained("xlm-mlm-en-2048")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
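A hedged addition: to map the predicted class id back to a label name via the config (for this base checkpoint the entries are typically just generic placeholders such as LABEL_0):
predicted_label = model.config.id2label[predicted_class_id]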
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFXLMForSequenceClassification.from_pretrained("xlm-mlm-en-2048", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
TFXLMForMultipleChoice
class transformers.TFXLMForMultipleChoice( *args, **kwargs )
Parameters
config (XLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, langs: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, lengths: np.ndarray | tf.Tensor | None = None, cache: Optional[Dict[str, tf.Tensor]] = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: bool = False ) → transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (tf.Tensor or Numpy array of shape (batch_size, num_choices, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
language ids, which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (tf.Tensor or Numpy array of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)].
cache (Dict[str, tf.Tensor], optional) —
Dictionary string to tf.Tensor that contains precomputed hidden states (keys and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLMConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFXLMForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXLMForMultipleChoice
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = TFXLMForMultipleChoice.from_pretrained("xlm-mlm-en-2048")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
outputs = model(inputs) # batch size is 1
# the linear classifier still needs to be trained
logits = outputs.logits
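A small hedged addition: since logits has shape (batch_size, num_choices), the highest-scoring choice can be read off with an argmax (the classification head above is untrained, so the pick is not yet meaningful):
predicted_choice = int(tf.math.argmax(logits, axis=-1)[0])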
TFXLMForTokenClassification
class transformers.TFXLMForTokenClassification( *args, **kwargs )
Parameters
config (XLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, langs: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, lengths: np.ndarray | tf.Tensor | None = None, cache: Optional[Dict[str, tf.Tensor]] = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: np.ndarray | tf.Tensor | None = None, training: bool = False ) → transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
language ids, which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (tf.Tensor or Numpy array of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)].
cache (Dict[str, tf.Tensor], optional) —
Dictionary string to tf.Tensor that contains precomputed hidden states (keys and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLMConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFXLMForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXLMForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = TFXLMForTokenClassification.from_pretrained("xlm-mlm-en-2048")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
TFXLMForQuestionAnsweringSimple
class transformers.TFXLMForQuestionAnsweringSimple( *args, **kwargs )
Parameters
config (XLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
XLM Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer
on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call( input_ids: TFModelInputType | None = None, attention_mask: np.ndarray | tf.Tensor | None = None, langs: np.ndarray | tf.Tensor | None = None, token_type_ids: np.ndarray | tf.Tensor | None = None, position_ids: np.ndarray | tf.Tensor | None = None, lengths: np.ndarray | tf.Tensor | None = None, cache: Optional[Dict[str, tf.Tensor]] = None, head_mask: np.ndarray | tf.Tensor | None = None, inputs_embeds: np.ndarray | tf.Tensor | None = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, start_positions: np.ndarray | tf.Tensor | None = None, end_positions: np.ndarray | tf.Tensor | None = None, training: bool = False ) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
langs (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are
language ids, which can be obtained from the language names by using two conversion mappings provided in
the configuration of the model (only provided for multilingual models). More precisely, the language name
to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the
language id to language name mapping is in model.config.id2lang (dictionary int to string).
See usage examples detailed in the multilingual documentation.
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
lengths (tf.Tensor or Numpy array of shape (batch_size,), optional) —
Length of each sentence that can be used to avoid performing attention on padding token indices. You can
also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in
[0, ..., input_ids.size(-1)].
cache (Dict[str, tf.Tensor], optional) —
Dictionary string to tf.Tensor that contains precomputed hidden states (keys and values in the
attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential
decoding.
The dictionary object will be modified in-place during the forward pass to add newly computed
hidden-states.
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (XLMConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFXLMForQuestionAnsweringSimple forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFXLMForQuestionAnsweringSimple
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("xlm-mlm-en-2048")
model = TFXLMForQuestionAnsweringSimple.from_pretrained("xlm-mlm-en-2048")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
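A hedged addition to turn the selected span back into text:
predicted_answer = tokenizer.decode(predict_answer_tokens)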
# target is "nice puppet"
target_start_index = tf.constant([14])
target_end_index = tf.constant([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
TAPAS
Overview
The TAPAS model was proposed in TAPAS: Weakly Supervised Table Parsing via Pre-training
by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. It’s a BERT-based model specifically
designed (and pre-trained) for answering questions about tabular data. Compared to BERT, TAPAS uses relative position embeddings and has 7
token types that encode tabular structure. TAPAS is pre-trained on the masked language modeling (MLM) objective on a large dataset comprising
millions of tables from English Wikipedia and corresponding texts.
For question answering, TAPAS has 2 heads on top: a cell selection head and an aggregation head, for (optionally) performing aggregations (such as counting or summing) among selected cells. TAPAS has been fine-tuned on several datasets:
SQA (Sequential Question Answering by Microsoft)
WTQ (Wiki Table Questions by Stanford University)
WikiSQL (by Salesforce).
It achieves state-of-the-art on both SQA and WTQ, while having comparable performance to SOTA on WikiSQL, with a much simpler architecture.
The abstract from the paper is the following:
Answering natural language questions over tables is usually seen as a semantic parsing task. To alleviate the collection cost of full logical forms, one popular approach focuses on weak supervision consisting of denotations instead of logical forms. However, training semantic parsers from weak supervision poses difficulties, and in addition, the generated logical forms are only used as an intermediate step prior to retrieving the denotation. In this paper, we present TAPAS, an approach to question answering over tables without generating logical forms. TAPAS trains from weak supervision, and predicts the denotation by selecting table cells and optionally applying a corresponding aggregation operator to such selection. TAPAS extends BERT’s architecture to encode tables as input, initializes from an effective joint pre-training of text segments and tables crawled from Wikipedia, and is trained end-to-end. We experiment with three different semantic parsing datasets, and find that TAPAS outperforms or rivals semantic parsing models by improving state-of-the-art accuracy on SQA from 55.1 to 67.2 and performing on par with the state-of-the-art on WIKISQL and WIKITQ, but with a simpler model architecture. We additionally find that transfer learning, which is trivial in our setting, from WIKISQL to WIKITQ, yields 48.7 accuracy, 4.2 points above the state-of-the-art.
In addition, the authors have further pre-trained TAPAS to recognize table entailment, by creating a balanced dataset of millions of automatically created training examples which are learned in an intermediate step prior to fine-tuning. The authors of TAPAS call this further pre-training intermediate pre-training (since TAPAS is first pre-trained on MLM, and then on another dataset). They found that intermediate pre-training further improves performance on SQA, achieving a new state-of-the-art as well as state-of-the-art on TabFact, a large-scale dataset with 16k Wikipedia tables for table entailment (a binary classification task). For more details, see their follow-up paper: Understanding tables with intermediate pre-training by Julian Martin Eisenschlos, Syrine Krichene and Thomas Müller.
TAPAS architecture. Taken from the original blog post.
This model was contributed by nielsr. The Tensorflow version of this model was contributed by kamalkraj. The original code can be found here.
Tips:
TAPAS is a model that uses relative position embeddings by default (restarting the position embeddings at every cell of the table). Note that this is something that was added after the publication of the original TAPAS paper. According to the authors, this usually results in a slightly better performance, and allows you to encode longer sequences without running out of embeddings. This is reflected in the reset_position_index_per_cell parameter of TapasConfig, which is set to True by default. The default versions of the models available on the hub all use relative position embeddings. You can still use the ones with absolute position embeddings by passing in an additional argument revision="no_reset" when calling the from_pretrained() method (see the loading sketch after these tips). Note that it’s usually advised to pad the inputs on the right rather than the left.
TAPAS is based on BERT, so TAPAS-base for example corresponds to a BERT-base architecture. Of course, TAPAS-large will result in the best performance (the results reported in the paper are from TAPAS-large). Results of the various sized models are shown on the original Github repository.
TAPAS has checkpoints fine-tuned on SQA, which are capable of answering questions related to a table in a conversational set-up. This means that you can ask follow-up questions such as “what is his age?” related to the previous question. Note that the forward pass of TAPAS is a bit different in case of a conversational set-up: in that case, you have to feed every table-question pair one by one to the model, such that the prev_labels token type ids can be overwritten by the predicted labels of the model to the previous question. See “Usage” section for more info.
TAPAS is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained with a causal language modeling (CLM) objective are better in that regard. Note that TAPAS can be used as an encoder in the EncoderDecoderModel framework, to combine it with an autoregressive text decoder such as GPT-2.
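For example, loading a checkpoint with absolute instead of relative position embeddings, as mentioned in the first tip, is a matter of passing the revision argument (a minimal sketch; the checkpoint name is only illustrative):
from transformers import TapasForQuestionAnswering
# default: relative position embeddings (reset_position_index_per_cell=True)
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")
# variant with absolute position embeddings, as described in the tip above
model_no_reset = TapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq", revision="no_reset")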
Usage: fine-tuning
Here we explain how you can fine-tune TapasForQuestionAnswering on your own dataset.
STEP 1: Choose one of the 3 ways in which you can use TAPAS - or experiment
Basically, there are 3 different ways in which one can fine-tune TapasForQuestionAnswering, corresponding to the different datasets on which TAPAS was fine-tuned:
SQA: if you’re interested in asking follow-up questions related to a table, in a conversational set-up. For example if you first ask “what’s the name of the first actor?” then you can ask a follow-up question such as “how old is he?“. Here, questions do not involve any aggregation (all questions are cell selection questions).
WTQ: if you’re not interested in asking questions in a conversational set-up, but rather just asking questions related to a table, which might involve aggregation, such as counting a number of rows, summing up cell values or averaging cell values. You can then for example ask “what’s the total number of goals Cristiano Ronaldo made in his career?“. This case is also called weak supervision, since the model itself must learn the appropriate aggregation operator (SUM/COUNT/AVERAGE/NONE) given only the answer to the question as supervision.
WikiSQL-supervised: this dataset is based on WikiSQL with the model being given the ground truth aggregation operator during training. This is also called strong supervision. Here, learning the appropriate aggregation operator is much easier.
To summarize:
| Task | Example dataset | Description |
|---|---|---|
| Conversational | SQA | Conversational, only cell selection questions |
| Weak supervision for aggregation | WTQ | Questions might involve aggregation, and the model must learn this given only the answer as supervision |
| Strong supervision for aggregation | WikiSQL-supervised | Questions might involve aggregation, and the model must learn this given the gold aggregation operator |
PyTorch
Initializing a model with a pre-trained base and randomly initialized classification heads from the hub can be done as shown below.
from transformers import TapasConfig, TapasForQuestionAnswering
# for example, the base sized model with default SQA configuration
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base")
# or, the base sized model with WTQ configuration
config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wtq")
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
# or, the base sized model with WikiSQL configuration
config = TapasConfig("google-base-finetuned-wikisql-supervised")
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
Of course, you don’t necessarily have to follow one of these three ways in which TAPAS was fine-tuned. You can also experiment by defining any hyperparameters you want when initializing TapasConfig, and then create a TapasForQuestionAnswering based on that configuration. For example, if you have a dataset that has both conversational questions and questions that might involve aggregation, then you can do it this way. Here’s an example:
from transformers import TapasConfig, TapasForQuestionAnswering
# you can initialize the classification heads any way you want (see docs of TapasConfig)
config = TapasConfig(num_aggregation_labels=3, average_logits_per_cell=True)
# initializing the pre-trained base sized model with our custom classification heads
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
TensorFlow
Initializing a model with a pre-trained base and randomly initialized classification heads from the hub can be done as shown below. Be sure to have installed the tensorflow_probability dependency:
from transformers import TapasConfig, TFTapasForQuestionAnswering
# for example, the base sized model with default SQA configuration
model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base")
# or, the base sized model with WTQ configuration
config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wtq")
model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
# or, the base sized model with WikiSQL configuration
config = TapasConfig("google-base-finetuned-wikisql-supervised")
model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
Of course, you don’t necessarily have to follow one of these three ways in which TAPAS was fine-tuned. You can also experiment by defining any hyperparameters you want when initializing TapasConfig, and then create a TFTapasForQuestionAnswering based on that configuration. For example, if you have a dataset that has both conversational questions and questions that might involve aggregation, then you can do it this way. Here’s an example:
from transformers import TapasConfig, TFTapasForQuestionAnswering
# you can initialize the classification heads any way you want (see docs of TapasConfig)
config = TapasConfig(num_aggregation_labels=3, average_logits_per_cell=True)
# initializing the pre-trained base sized model with our custom classification heads
model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
You can also start from an already fine-tuned checkpoint (see the example below). Note that the checkpoint already fine-tuned on WTQ has some issues due to the L2 loss, which is somewhat brittle. See here for more info.
For a list of all pre-trained and fine-tuned TAPAS checkpoints available on HuggingFace’s hub, see here.
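For instance, starting from the checkpoint already fine-tuned on WTQ (the same one used in the inference examples further down) is just a matter of pointing from_pretrained() at it:
from transformers import TapasForQuestionAnswering
# start from a checkpoint that was already fine-tuned on WTQ
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")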
STEP 2: Prepare your data in the SQA format
Second, no matter what you picked above, you should prepare your dataset in the SQA format. This format is a TSV/CSV file with the following columns:
id: optional, id of the table-question pair, for bookkeeping purposes.
annotator: optional, id of the person who annotated the table-question pair, for bookkeeping purposes.
position: integer indicating if the question is the first, second, third,… related to the table. Only required in case of conversational setup (SQA). You don’t need this column in case you’re going for WTQ/WikiSQL-supervised.
question: string
table_file: string, name of a csv file containing the tabular data
answer_coordinates: list of one or more tuples (each tuple being a cell coordinate, i.e. row, column pair that is part of the answer)
answer_text: list of one or more strings (each string being a cell value that is part of the answer)
aggregation_label: index of the aggregation operator. Only required in case of strong supervision for aggregation (the WikiSQL-supervised case)
float_answer: the float answer to the question, if there is one (np.nan if there isn’t). Only required in case of weak supervision for aggregation (such as WTQ and WikiSQL)
The tables themselves should be present in a folder, each table being a separate csv file. Note that the authors of the TAPAS algorithm used conversion scripts with some automated logic to convert the other datasets (WTQ, WikiSQL) into the SQA format. The author explains this here. A conversion of this script that works with HuggingFace’s implementation can be found here. Interestingly, these conversion scripts are not perfect (the answer_coordinates and float_answer fields are populated based on the answer_text), meaning that WTQ and WikiSQL results could actually be improved.
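To make the format concrete, here is a minimal sketch of a single training example in the SQA format for the weak supervision (WTQ-style) case; the id and file name are just placeholders:
import pandas as pd
# one table-question pair in the SQA format; the "position" and "aggregation_label" columns
# are not needed in the weak supervision case
example = {
    "id": "example-1",
    "annotator": 0,
    "question": "How many movies has George Clooney played in?",
    "table_file": "table_1.csv",
    "answer_coordinates": [(2, 1)],
    "answer_text": ["69"],
    "float_answer": 69.0,
}
data = pd.DataFrame([example])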
STEP 3: Convert your data into tensors using TapasTokenizer
PyTorch
Third, given that you’ve prepared your data in this TSV/CSV format (and corresponding CSV files containing the tabular data), you can then use TapasTokenizer to convert table-question pairs into input_ids, attention_mask, token_type_ids and so on. Again, based on which of the three cases you picked above, TapasForQuestionAnswering requires different inputs to be fine-tuned:
| Task | Required inputs |
|---|---|
| Conversational | input_ids, attention_mask, token_type_ids, labels |
| Weak supervision for aggregation | input_ids, attention_mask, token_type_ids, labels, numeric_values, numeric_values_scale, float_answer |
| Strong supervision for aggregation | input_ids, attention_mask, token_type_ids, labels, aggregation_labels |
TapasTokenizer creates the labels, numeric_values and numeric_values_scale based on the answer_coordinates and answer_text columns of the TSV file. The float_answer and aggregation_labels are already in the TSV file of step 2. Here’s an example:
from transformers import TapasTokenizer
import pandas as pd
model_name = "google/tapas-base"
tokenizer = TapasTokenizer.from_pretrained(model_name)
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
queries = [
... "What is the name of the first actor?",
... "How many movies has George Clooney played in?",
... "What is the total number of movies?",
... ]
answer_coordinates = [[(0, 0)], [(2, 1)], [(0, 1), (1, 1), (2, 1)]]
answer_text = [["Brad Pitt"], ["69"], ["209"]]
table = pd.DataFrame.from_dict(data)
inputs = tokenizer(
... table=table,
... queries=queries,
... answer_coordinates=answer_coordinates,
... answer_text=answer_text,
... padding="max_length",
... return_tensors="pt",
... )
inputs
{'input_ids': tensor([[ ... ]]), 'attention_mask': tensor([[...]]), 'token_type_ids': tensor([[[...]]]),
'numeric_values': tensor([[ ... ]]), 'numeric_values_scale': tensor([[ ... ]]), 'labels': tensor([[ ... ]])}
Note that TapasTokenizer expects the data of the table to be text-only. You can use .astype(str) on a dataframe to turn it into text-only data.
Of course, this only shows how to encode a single training example. It is advised to create a dataloader to iterate over batches:
import torch
import pandas as pd
tsv_path = "your_path_to_the_tsv_file"
table_csv_path = "your_path_to_a_directory_containing_all_csv_files"
class TableDataset(torch.utils.data.Dataset):
... def __init__(self, data, tokenizer):
... self.data = data
... self.tokenizer = tokenizer
... def __getitem__(self, idx):
... item = self.data.iloc[idx]
... table = pd.read_csv(table_csv_path + item.table_file).astype(
... str
... ) # be sure to make your table data text only
... encoding = self.tokenizer(
... table=table,
... queries=item.question,
... answer_coordinates=item.answer_coordinates,
... answer_text=item.answer_text,
... truncation=True,
... padding="max_length",
... return_tensors="pt",
... )
... # remove the batch dimension which the tokenizer adds by default
... encoding = {key: val.squeeze(0) for key, val in encoding.items()}
... # add the float_answer which is also required (weak supervision for aggregation case)
... encoding["float_answer"] = torch.tensor(item.float_answer)
... return encoding
... def __len__(self):
... return len(self.data)
data = pd.read_csv(tsv_path, sep="\t")
train_dataset = TableDataset(data, tokenizer)
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=32)
TensorFlow
Third, given that you’ve prepared your data in this TSV/CSV format (and corresponding CSV files containing the tabular data), you can then use TapasTokenizer to convert table-question pairs into input_ids, attention_mask, token_type_ids and so on. Again, based on which of the three cases you picked above, TFTapasForQuestionAnswering requires different inputs to be fine-tuned:
| Task | Required inputs |
|---|---|
| Conversational | input_ids, attention_mask, token_type_ids, labels |
| Weak supervision for aggregation | input_ids, attention_mask, token_type_ids, labels, numeric_values, numeric_values_scale, float_answer |
| Strong supervision for aggregation | input_ids, attention_mask, token_type_ids, labels, aggregation_labels |
TapasTokenizer creates the labels, numeric_values and numeric_values_scale based on the answer_coordinates and answer_text columns of the TSV file. The float_answer and aggregation_labels are already in the TSV file of step 2. Here’s an example:
from transformers import TapasTokenizer
import pandas as pd
model_name = "google/tapas-base"
tokenizer = TapasTokenizer.from_pretrained(model_name)
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
queries = [
... "What is the name of the first actor?",
... "How many movies has George Clooney played in?",
... "What is the total number of movies?",
... ]
answer_coordinates = [[(0, 0)], [(2, 1)], [(0, 1), (1, 1), (2, 1)]]
answer_text = [["Brad Pitt"], ["69"], ["209"]]
table = pd.DataFrame.from_dict(data)
inputs = tokenizer(
... table=table,
... queries=queries,
... answer_coordinates=answer_coordinates,
... answer_text=answer_text,
... padding="max_length",
... return_tensors="tf",
... )
inputs
{'input_ids': tensor([[ ... ]]), 'attention_mask': tensor([[...]]), 'token_type_ids': tensor([[[...]]]),
'numeric_values': tensor([[ ... ]]), 'numeric_values_scale': tensor([[ ... ]]), 'labels': tensor([[ ... ]])}
Note that TapasTokenizer expects the data of the table to be text-only. You can use .astype(str) on a dataframe to turn it into text-only data.
Of course, this only shows how to encode a single training example. It is advised to create a dataloader to iterate over batches:
import tensorflow as tf
import pandas as pd
tsv_path = "your_path_to_the_tsv_file"
table_csv_path = "your_path_to_a_directory_containing_all_csv_files"
class TableDataset:
... def __init__(self, data, tokenizer):
... self.data = data
... self.tokenizer = tokenizer
... def __iter__(self):
... for idx in range(self.__len__()):
... item = self.data.iloc[idx]
... table = pd.read_csv(table_csv_path + item.table_file).astype(
... str
... ) # be sure to make your table data text only
... encoding = self.tokenizer(
... table=table,
... queries=item.question,
... answer_coordinates=item.answer_coordinates,
... answer_text=item.answer_text,
... truncation=True,
... padding="max_length",
... return_tensors="tf",
... )
... # remove the batch dimension which the tokenizer adds by default
... encoding = {key: tf.squeeze(val, 0) for key, val in encoding.items()}
... # add the float_answer which is also required (weak supervision for aggregation case)
... encoding["float_answer"] = tf.convert_to_tensor(item.float_answer, dtype=tf.float32)
... yield encoding["input_ids"], encoding["attention_mask"], encoding["numeric_values"], encoding[
... "numeric_values_scale"
... ], encoding["token_type_ids"], encoding["labels"], encoding["float_answer"]
... def __len__(self):
... return len(self.data)
data = pd.read_csv(tsv_path, sep="\t")
train_dataset = TableDataset(data, tokenizer)
output_signature = (
... tf.TensorSpec(shape=(512,), dtype=tf.int32),
... tf.TensorSpec(shape=(512,), dtype=tf.int32),
... tf.TensorSpec(shape=(512,), dtype=tf.float32),
... tf.TensorSpec(shape=(512,), dtype=tf.float32),
... tf.TensorSpec(shape=(512, 7), dtype=tf.int32),
... tf.TensorSpec(shape=(512,), dtype=tf.int32),
... tf.TensorSpec(shape=(512,), dtype=tf.float32),
... )
# tf.data.Dataset.from_generator expects a callable, so wrap the dataset in a lambda
train_dataloader = tf.data.Dataset.from_generator(lambda: iter(train_dataset), output_signature=output_signature).batch(32)
Note that here, we encode each table-question pair independently. This is fine as long as your dataset is not conversational. If your dataset involves conversational questions (such as in SQA), you should first group together the queries, answer_coordinates and answer_text per table (in the order of their position index) and batch encode each table with its questions, as sketched below. This makes sure that the prev_labels token types (see docs of TapasTokenizer) are set correctly. See this notebook for more info on the PyTorch model and this notebook for more info on the TensorFlow model.
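A rough sketch of such per-table grouping (assuming your TSV contains the position column from step 2 and that answer_coordinates and answer_text have already been parsed into Python lists) could look like this:
import pandas as pd
from transformers import TapasTokenizer
tokenizer = TapasTokenizer.from_pretrained("google/tapas-base")
tsv_path = "your_path_to_the_tsv_file"
table_csv_path = "your_path_to_a_directory_containing_all_csv_files"
data = pd.read_csv(tsv_path, sep="\t")
for table_file, group in data.groupby("table_file"):
    # encode all questions of one table together, ordered by their position index,
    # so that TapasTokenizer can set the prev_labels token types correctly
    group = group.sort_values("position")
    table = pd.read_csv(table_csv_path + table_file).astype(str)
    encoding = tokenizer(
        table=table,
        queries=group["question"].tolist(),
        answer_coordinates=group["answer_coordinates"].tolist(),
        answer_text=group["answer_text"].tolist(),
        truncation=True,
        padding="max_length",
        return_tensors="pt",
    )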
STEP 4: Train (fine-tune) the model
PyTorch
You can then fine-tune TapasForQuestionAnswering as follows (shown here for the weak supervision for aggregation case):
from transformers import TapasConfig, TapasForQuestionAnswering, AdamW
# this is the default WTQ configuration
config = TapasConfig(
... num_aggregation_labels=4,
... use_answer_as_supervision=True,
... answer_loss_cutoff=0.664694,
... cell_selection_preference=0.207951,
... huber_loss_delta=0.121194,
... init_cell_selection_weights_to_zero=True,
... select_one_column=True,
... allow_empty_column_selection=False,
... temperature=0.0352513,
... )
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
optimizer = AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(2): # loop over the dataset multiple times
... for batch in train_dataloader:
... # get the inputs;
... input_ids = batch["input_ids"]
... attention_mask = batch["attention_mask"]
... token_type_ids = batch["token_type_ids"]
... labels = batch["labels"]
... numeric_values = batch["numeric_values"]
... numeric_values_scale = batch["numeric_values_scale"]
... float_answer = batch["float_answer"]
... # zero the parameter gradients
... optimizer.zero_grad()
... # forward + backward + optimize
... outputs = model(
... input_ids=input_ids,
... attention_mask=attention_mask,
... token_type_ids=token_type_ids,
... labels=labels,
... numeric_values=numeric_values,
... numeric_values_scale=numeric_values_scale,
... float_answer=float_answer,
... )
... loss = outputs.loss
... loss.backward()
... optimizer.step()
TensorFlow
You can then fine-tune TFTapasForQuestionAnswering as follows (shown here for the weak supervision for aggregation case):
import tensorflow as tf
from transformers import TapasConfig, TFTapasForQuestionAnswering
# this is the default WTQ configuration
config = TapasConfig(
... num_aggregation_labels=4,
... use_answer_as_supervision=True,
... answer_loss_cutoff=0.664694,
... cell_selection_preference=0.207951,
... huber_loss_delta=0.121194,
... init_cell_selection_weights_to_zero=True,
... select_one_column=True,
... allow_empty_column_selection=False,
... temperature=0.0352513,
... )
model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config)
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
for epoch in range(2): # loop over the dataset multiple times
... for batch in train_dataloader:
... # get the inputs;
... input_ids = batch[0]
... attention_mask = batch[1]
... token_type_ids = batch[4]
... labels = batch[5]  # labels are yielded at index 5; the last element (index 6) is float_answer
... numeric_values = batch[2]
... numeric_values_scale = batch[3]
... float_answer = batch[6]
... # forward + backward + optimize
... with tf.GradientTape() as tape:
... outputs = model(
... input_ids=input_ids,
... attention_mask=attention_mask,
... token_type_ids=token_type_ids,
... labels=labels,
... numeric_values=numeric_values,
... numeric_values_scale=numeric_values_scale,
... float_answer=float_answer,
... )
... grads = tape.gradient(outputs.loss, model.trainable_weights)
... optimizer.apply_gradients(zip(grads, model.trainable_weights))
Usage: inference
PyTorch
Here we explain how you can use TapasForQuestionAnswering or TFTapasForQuestionAnswering for inference (i.e. making predictions on new data). For inference, only input_ids, attention_mask and token_type_ids (which you can obtain using TapasTokenizer) have to be provided to the model to obtain the logits. Next, you can use the handy TapasTokenizer.convert_logits_to_predictions() method to convert these into predicted coordinates and optional aggregation indices.
However, note that inference is different depending on whether or not the setup is conversational. In a non-conversational set-up, inference can be done in parallel on all table-question pairs of a batch. Here’s an example of that:
from transformers import TapasTokenizer, TapasForQuestionAnswering
import pandas as pd
model_name = "google/tapas-base-finetuned-wtq"
model = TapasForQuestionAnswering.from_pretrained(model_name)
tokenizer = TapasTokenizer.from_pretrained(model_name)
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
queries = [
... "What is the name of the first actor?",
... "How many movies has George Clooney played in?",
... "What is the total number of movies?",
... ]
table = pd.DataFrame.from_dict(data)
inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)
predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions(
... inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
... )
# let's print out the results:
id2aggregation = {0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"}
aggregation_predictions_string = [id2aggregation[x] for x in predicted_aggregation_indices]
answers = []
for coordinates in predicted_answer_coordinates:
... if len(coordinates) == 1:
... # only a single cell:
... answers.append(table.iat[coordinates[0]])
... else:
... # multiple cells
... cell_values = []
... for coordinate in coordinates:
... cell_values.append(table.iat[coordinate])
... answers.append(", ".join(cell_values))
display(table)
print("")
for query, answer, predicted_agg in zip(queries, answers, aggregation_predictions_string):
... print(query)
... if predicted_agg == "NONE":
... print("Predicted answer: " + answer)
... else:
... print("Predicted answer: " + predicted_agg + " > " + answer)
What is the name of the first actor?
Predicted answer: Brad Pitt
How many movies has George Clooney played in?
Predicted answer: COUNT > 69
What is the total number of movies?
Predicted answer: SUM > 87, 53, 69
TensorFlow
Here we explain how you can use TFTapasForQuestionAnswering for inference (i.e. making predictions on new data). For inference, only input_ids, attention_mask and token_type_ids (which you can obtain using TapasTokenizer) have to be provided to the model to obtain the logits. Next, you can use the handy TapasTokenizer.convert_logits_to_predictions() method to convert these into predicted coordinates and optional aggregation indices.
However, note that inference is different depending on whether or not the setup is conversational. In a non-conversational set-up, inference can be done in parallel on all table-question pairs of a batch. Here’s an example of that:
from transformers import TapasTokenizer, TFTapasForQuestionAnswering
import pandas as pd
model_name = "google/tapas-base-finetuned-wtq"
model = TFTapasForQuestionAnswering.from_pretrained(model_name)
tokenizer = TapasTokenizer.from_pretrained(model_name)
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
queries = [
... "What is the name of the first actor?",
... "How many movies has George Clooney played in?",
... "What is the total number of movies?",
... ]
table = pd.DataFrame.from_dict(data)
inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="tf")
outputs = model(**inputs)
predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions(
... inputs, outputs.logits, outputs.logits_aggregation
... )
# let's print out the results:
id2aggregation = {0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"}
aggregation_predictions_string = [id2aggregation[x] for x in predicted_aggregation_indices]
answers = []
for coordinates in predicted_answer_coordinates:
... if len(coordinates) == 1:
... # only a single cell:
... answers.append(table.iat[coordinates[0]])
... else:
... # multiple cells
... cell_values = []
... for coordinate in coordinates:
... cell_values.append(table.iat[coordinate])
... answers.append(", ".join(cell_values))
display(table)
print("")
for query, answer, predicted_agg in zip(queries, answers, aggregation_predictions_string):
... print(query)
... if predicted_agg == "NONE":
... print("Predicted answer: " + answer)
... else:
... print("Predicted answer: " + predicted_agg + " > " + answer)
What is the name of the first actor?
Predicted answer: Brad Pitt
How many movies has George Clooney played in?
Predicted answer: COUNT > 69
What is the total number of movies?
Predicted answer: SUM > 87, 53, 69
In a conversational set-up, each table-question pair must be provided sequentially to the model, such that the prev_labels token types can be overwritten by the predicted labels of the previous table-question pair (a sketch of this loop follows below). Again, more info can be found in this notebook (for PyTorch) and this notebook (for TensorFlow).
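As a minimal PyTorch sketch of that sequential loop (the prev_labels channel is the fourth of the seven token type id channels created by TapasTokenizer, i.e. index 3; the checkpoint and questions below are just examples):
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering
model_name = "google/tapas-base-finetuned-sqa"
model = TapasForQuestionAnswering.from_pretrained(model_name)
tokenizer = TapasTokenizer.from_pretrained(model_name)
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
queries = ["What is the name of the first actor?", "How many movies has he played in?"]
prev_coordinates = None
for query in queries:
    inputs = tokenizer(table=table, queries=query, padding="max_length", return_tensors="pt")
    if prev_coordinates is not None:
        # mark the cells that answered the previous question in the prev_labels channel (index 3);
        # row and column token type ids start at 1 for table tokens, hence the +1 shift
        token_type_ids = inputs["token_type_ids"]
        column_ids, row_ids = token_type_ids[0, :, 1], token_type_ids[0, :, 2]
        for row, column in prev_coordinates:
            mask = (row_ids == row + 1) & (column_ids == column + 1)
            token_type_ids[0, mask, 3] = 1
    outputs = model(**inputs)
    predicted_coordinates = tokenizer.convert_logits_to_predictions(inputs, outputs.logits.detach())[0]
    prev_coordinates = predicted_coordinates[0]
    print(query, "->", [table.iat[coordinate] for coordinate in prev_coordinates])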
Documentation resources
Text classification task guide
Masked language modeling task guide
TAPAS specific outputs
class transformers.models.tapas.modeling_tapas.TableQuestionAnsweringOutput
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
logits_aggregation: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels (and possibly answer, aggregation_labels, numeric_values and numeric_values_scale) are provided) —
Total loss as the sum of the hierarchical cell selection log-likelihood loss and (optionally) the
semi-supervised regression loss and (optionally) supervised loss for aggregations.
logits (torch.FloatTensor of shape (batch_size, sequence_length)) —
Prediction scores of the cell selection head, for every token.
logits_aggregation (torch.FloatTensor, optional, of shape (batch_size, num_aggregation_labels)) —
Prediction scores of the aggregation head, for every aggregation operator.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
Output type of TapasForQuestionAnswering.
TapasConfig
class transformers.TapasConfig
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 1024
type_vocab_sizes = [3, 256, 256, 2, 256, 256, 10]
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 0
positive_label_weight = 10.0
num_aggregation_labels = 0
aggregation_loss_weight = 1.0
use_answer_as_supervision = None
answer_loss_importance = 1.0
use_normalized_answer_loss = False
huber_loss_delta = None
temperature = 1.0
aggregation_temperature = 1.0
use_gumbel_for_cells = False
use_gumbel_for_aggregation = False
average_approximation_function = 'ratio'
cell_selection_preference = None
answer_loss_cutoff = None
max_num_rows = 64
max_num_columns = 32
average_logits_per_cell = False
select_one_column = True
allow_empty_column_selection = False
init_cell_selection_weights_to_zero = False
reset_position_index_per_cell = True
disable_per_token_loss = False
aggregation_labels = None
no_aggregation_label_index = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the TAPAS model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling TapasModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "swish" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_sizes (List[int], optional, defaults to [3, 256, 256, 2, 256, 256, 10]) —
The vocabulary sizes of the token_type_ids passed when calling TapasModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
positive_label_weight (float, optional, defaults to 10.0) —
Weight for positive labels.
num_aggregation_labels (int, optional, defaults to 0) —
The number of aggregation operators to predict.
aggregation_loss_weight (float, optional, defaults to 1.0) —
Importance weight for the aggregation loss.
use_answer_as_supervision (bool, optional) —
Whether to use the answer as the only supervision for aggregation examples.
answer_loss_importance (float, optional, defaults to 1.0) —
Importance weight for the regression loss.
use_normalized_answer_loss (bool, optional, defaults to False) —
Whether to normalize the answer loss by the maximum of the predicted and expected value.
huber_loss_delta (float, optional) —
Delta parameter used to calculate the regression loss.
temperature (float, optional, defaults to 1.0) —
Value used to control (or change) the skewness of cell logit probabilities.
aggregation_temperature (float, optional, defaults to 1.0) —
Scales aggregation logits to control the skewness of probabilities.
use_gumbel_for_cells (bool, optional, defaults to False) —
Whether to apply Gumbel-Softmax to cell selection.
use_gumbel_for_aggregation (bool, optional, defaults to False) —
Whether to apply Gumbel-Softmax to aggregation selection.
average_approximation_function (string, optional, defaults to "ratio") —
Method to calculate the expected average of cells in the weak supervision case. One of "ratio",
"first_order" or "second_order".
cell_selection_preference (float, optional) —
Preference for cell selection in ambiguous cases. Only applicable in case of weak supervision for
aggregation (WTQ, WikiSQL). If the total mass of the aggregation probabilities (excluding the “NONE”
operator) is higher than this hyperparameter, then aggregation is predicted for an example.
answer_loss_cutoff (float, optional) —
Ignore examples with answer loss larger than cutoff.
max_num_rows (int, optional, defaults to 64) —
Maximum number of rows.
max_num_columns (int, optional, defaults to 32) —
Maximum number of columns.
average_logits_per_cell (bool, optional, defaults to False) —
Whether to average logits per cell.
select_one_column (bool, optional, defaults to True) —
Whether to constrain the model to only select cells from a single column.
allow_empty_column_selection (bool, optional, defaults to False) —
Whether to allow not to select any column.
init_cell_selection_weights_to_zero (bool, optional, defaults to False) —
Whether to initialize cell selection weights to 0 so that the initial probabilities are 50%.
reset_position_index_per_cell (bool, optional, defaults to True) —
Whether to restart position indexes at every cell (i.e. use relative position embeddings).
disable_per_token_loss (bool, optional, defaults to False) —
Whether to disable any (strong or weak) supervision on cells.
aggregation_labels (Dict[int, label], optional) —
The aggregation labels used to aggregate the results. For example, the WTQ models have the following
aggregation labels: {0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"}
no_aggregation_label_index (int, optional) —
If the aggregation labels are defined and one of these labels represents “No aggregation”, this should be
set to its index. For example, the WTQ models have the “NONE” aggregation label at index 0, so that value
should be set to 0 for these models.
This is the configuration class to store the configuration of a TapasModel. It is used to instantiate a TAPAS
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the TAPAS
google/tapas-base-finetuned-sqa architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Hyperparameters additional to BERT are taken from run_task_main.py and hparam_utils.py of the original
implementation. Original implementation available at https://github.com/google-research/tapas/tree/master.
Example:
from transformers import TapasModel, TapasConfig
# Initializing a default (SQA) Tapas configuration
configuration = TapasConfig()
# Initializing a model from the configuration
model = TapasModel(configuration)
# Accessing the model configuration
configuration = model.config
TapasTokenizer
class transformers.TapasTokenizer
(
vocab_file
do_lower_case = True
do_basic_tokenize = True
never_split = None
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
empty_token = '[EMPTY]'
tokenize_chinese_chars = True
strip_accents = None
cell_trim_length: int = -1
max_column_id: int = None
max_row_id: int = None
strip_column_names: bool = False
update_answer_coordinates: bool = False
min_question_length = None
max_question_length = None
model_max_length: int = 512
additional_special_tokens: typing.Optional[typing.List[str]] = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (bool, optional, defaults to True) —
Whether or not to do basic tokenization before WordPiece.
never_split (Iterable, optional) —
Collection of tokens which will never be split during tokenization. Only has an effect when
do_basic_tokenize=True
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
empty_token (str, optional, defaults to "[EMPTY]") —
The token used for empty cell values in a table. Empty cell values include "", "n/a", "nan" and "?".
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original BERT).
cell_trim_length (int, optional, defaults to -1) —
If > 0: Trim cells so that the length is <= this value. Also disables further cell trimming, should thus be
used with truncation set to True.
max_column_id (int, optional) —
Max column id to extract.
max_row_id (int, optional) —
Max row id to extract.
strip_column_names (bool, optional, defaults to False) —
Whether to add empty strings instead of column names.
update_answer_coordinates (bool, optional, defaults to False) —
Whether to recompute the answer coordinates from the answer text.
min_question_length (int, optional) —
Minimum length of each question in terms of tokens (will be skipped otherwise).
max_question_length (int, optional) —
Maximum length of each question in terms of tokens (will be skipped otherwise).
Construct a TAPAS tokenizer. Based on WordPiece. Flattens a table and one or more related sentences to be used by
TAPAS models.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods. TapasTokenizer creates several token type ids to
encode tabular structure. To be more precise, it adds 7 token type ids, in the following order: segment_ids,
column_ids, row_ids, prev_labels, column_ranks, inv_column_ranks and numeric_relations:
segment_ids: indicate whether a token belongs to the question (0) or the table (1). 0 for special tokens and
padding.
column_ids: indicate to which column of the table a token belongs (starting from 1). Is 0 for all question
tokens, special tokens and padding.
row_ids: indicate to which row of the table a token belongs (starting from 1). Is 0 for all question tokens,
special tokens and padding. Tokens of column headers are also 0.
prev_labels: indicate whether a token was (part of) an answer to the previous question (1) or not (0). Useful in
a conversational setup (such as SQA).
column_ranks: indicate the rank of a table token relative to a column, if applicable. For example, if you have a
column “number of movies” with values 87, 53 and 69, then the column ranks of these tokens are 3, 1 and 2
respectively. 0 for all question tokens, special tokens and padding.
inv_column_ranks: indicate the inverse rank of a table token relative to a column, if applicable. For example, if
you have a column “number of movies” with values 87, 53 and 69, then the inverse column ranks of these tokens are
1, 3 and 2 respectively. 0 for all question tokens, special tokens and padding.
numeric_relations: indicate numeric relations between the question and the tokens of the table. 0 for all
question tokens, special tokens and padding.
TapasTokenizer runs end-to-end tokenization on a table and associated sentences: punctuation splitting and
wordpiece.
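As a quick sketch of what these channels look like in practice (only the shape check matters here):
import pandas as pd
from transformers import TapasTokenizer
tokenizer = TapasTokenizer.from_pretrained("google/tapas-base")
table = pd.DataFrame({"Actors": ["Brad Pitt", "George Clooney"], "Number of movies": ["87", "69"]}).astype(str)
inputs = tokenizer(table=table, queries="How many movies has George Clooney played in?", return_tensors="pt")
# one value per token for each of the 7 token type id channels listed above
print(inputs["token_type_ids"].shape)  # torch.Size([1, sequence_length, 7])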
__call__
(
table: pd.DataFrame
queries: typing.Union[str, typing.List[str], typing.List[int], typing.List[typing.List[str]], typing.List[typing.List[int]], NoneType] = None
answer_coordinates: typing.Union[typing.List[typing.Tuple], typing.List[typing.List[typing.Tuple]], NoneType] = None
answer_text: typing.Union[typing.List[str], typing.List[typing.List[str]], NoneType] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.models.tapas.tokenization_tapas.TapasTruncationStrategy] = False
max_length: typing.Optional[int] = None
pad_to_multiple_of: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
**kwargs
)
Parameters
table (pd.DataFrame) —
Table containing tabular data. Note that all cell values must be text. Use .astype(str) on a Pandas
dataframe to convert it to string.
queries (str or List[str]) —
Question or batch of questions related to a table to be encoded. Note that in case of a batch, all
questions must refer to the same table.
answer_coordinates (List[Tuple] or List[List[Tuple]], optional) —
Answer coordinates of each table-question pair in the batch. In case only a single table-question pair
is provided, then the answer_coordinates must be a single list of one or more tuples. Each tuple must
be a (row_index, column_index) pair. The first data row (not the column header row) has index 0. The
first column has index 0. In case a batch of table-question pairs is provided, then the
answer_coordinates must be a list of lists of tuples (each list corresponding to a single
table-question pair).
answer_text (List[str] or List[List[str]], optional) —
Answer text of each table-question pair in the batch. In case only a single table-question pair is
provided, then the answer_text must be a single list of one or more strings. Each string must be the
answer text of a corresponding answer coordinate. In case a batch of table-question pairs is provided,
then the answer_coordinates must be a list of lists of strings (each list corresponding to a single
table-question pair).
add_special_tokens (bool, optional, defaults to True) —
Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str or TapasTruncationStrategy, optional, defaults to False) —
Activates and controls truncation. Accepts the following values:
True or 'drop_rows_to_fit': Truncate to a maximum length specified with the argument max_length
or to the maximum acceptable input length for the model if that argument is not provided. This will
truncate row by row, removing rows from the table.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional) —
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length
is required by one of the truncation/padding parameters. If the model has no specific maximum input
length (like XLNet) truncation/padding to a maximum length will be deactivated.
is_split_into_words (bool, optional, defaults to False) —
Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the
tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace)
which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
Main method to tokenize and prepare for the model one or several sequence(s) related to a table.
convert_logits_to_predictions
(
data
logits
logits_agg = None
cell_classification_threshold = 0.5
)
→ tuple comprising various elements depending on the inputs
Parameters
data (dict) —
Dictionary mapping features to actual values. Should be created using TapasTokenizer.
logits (torch.Tensor or tf.Tensor of shape (batch_size, sequence_length)) —
Tensor containing the logits at the token level.
logits_agg (torch.Tensor or tf.Tensor of shape (batch_size, num_aggregation_labels), optional) —
Tensor containing the aggregation logits.
cell_classification_threshold (float, optional, defaults to 0.5) —
Threshold to be used for cell selection. All table cells for which their probability is larger than
this threshold will be selected.
Returns
tuple comprising various elements depending on the inputs
predicted_answer_coordinates (List[List[tuple]] of length batch_size): Predicted answer coordinates
as a list of lists of tuples. Each element in the list contains the predicted answer coordinates of a
single example in the batch, as a list of tuples. Each tuple is a cell, i.e. (row index, column index).
predicted_aggregation_indices (List[int] of length batch_size, optional, returned when
logits_aggregation is provided): Predicted aggregation operator indices of the aggregation head.
Converts logits of TapasForQuestionAnswering to actual predicted answer coordinates and optional
aggregation indices.
The original implementation, on which this function is based, can be found
here.
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
TapasModel
class transformers.TapasModel
(
config
add_pooling_layer = True
)
Parameters
config (TapasConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Tapas Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
This class is a small change compared to BertModel, taking into account the additional token type ids.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention is
all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→ transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length, 7), optional) —
Token indices that encode tabular structure. Indices can be obtained using AutoTokenizer. See this
class for more info.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. If
reset_position_index_per_cell of TapasConfig is set to True, relative position embeddings will be
used. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: - 1
indicates the head is not masked, - 0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (TapasConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TapasModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, TapasModel
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("google/tapas-base")
model = TapasModel.from_pretrained("google/tapas-base")
data = {
... "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
... "Age": ["56", "45", "59"],
... "Number of movies": ["87", "53", "69"],
... }
table = pd.DataFrame.from_dict(data)
queries = ["How many movies has George Clooney played in?", "How old is Brad Pitt?"]
inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
TapasForMaskedLM
class transformers.TapasForMaskedLM
(
config
)
Parameters
config (TapasConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Tapas Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→ transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length, 7), optional) —
Token indices that encode tabular structure. Indices can be obtained using AutoTokenizer. See this
class for more info.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. If
reset_position_index_per_cell of TapasConfig is set to True, relative position embeddings will be
used. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: - 1
indicates the head is not masked, - 0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (TapasConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TapasForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, TapasForMaskedLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("google/tapas-base")
model = TapasForMaskedLM.from_pretrained("google/tapas-base")
data = {
... "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
... "Age": ["56", "45", "59"],
... "Number of movies": ["87", "53", "69"],
... }
table = pd.DataFrame.from_dict(data)
inputs = tokenizer(
... table=table, queries="How many [MASK] has George [MASK] played in?", return_tensors="pt"
... )
labels = tokenizer(
... table=table, queries="How many movies has George Clooney played in?", return_tensors="pt"
... )["input_ids"]
outputs = model(**inputs, labels=labels)
logits = outputs.logits
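To turn these logits into predicted tokens, you can take the argmax over the vocabulary at the positions of the [MASK] tokens. The following is a minimal sketch continuing the example above (torch is an added import; everything else reuses names from the snippet):
import torch
# locate the [MASK] positions in the tokenized query
mask_positions = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)
# pick the most likely vocabulary token at each masked position and decode it
predicted_token_ids = logits[mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_token_ids))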
TapasForSequenceClassification
class transformers.TapasForSequenceClassification
(
config
)
Parameters
config (TapasConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Tapas Model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for table
entailment tasks, such as TabFact (Chen et al., 2020).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→ transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length, 7), optional) —
Token indices that encode tabular structure. Indices can be obtained using AutoTokenizer. See this
class for more info.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. If
reset_position_index_per_cell of TapasConfig is set to True, relative position embeddings will be
used. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: - 1
indicates the head is not masked, - 0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy). Note: this is called
“classification_class_index” in the original implementation.
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (TapasConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TapasForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, TapasForSequenceClassification
import torch
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("google/tapas-base-finetuned-tabfact")
model = TapasForSequenceClassification.from_pretrained("google/tapas-base-finetuned-tabfact")
data = {
... "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
... "Age": ["56", "45", "59"],
... "Number of movies": ["87", "53", "69"],
... }
table = pd.DataFrame.from_dict(data)
queries = [
... "There is only one actor who is 45 years old",
... "There are 3 actors which played in more than 60 movies",
... ]
inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
labels = torch.tensor([1, 0]) # 1 means entailed, 0 means refuted
outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
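The entailment decision for each query is the argmax over the logits; mapping it through model.config.id2label gives a readable label (the label names depend on the checkpoint's config and may be generic). A minimal sketch continuing the example above:
# 1 means entailed, 0 means refuted, as in the labels above
predicted_class_ids = logits.argmax(dim=-1).tolist()
for query, class_id in zip(queries, predicted_class_ids):
...     print(query, "->", model.config.id2label[class_id])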
TapasForQuestionAnswering
class transformers.TapasForQuestionAnswering
(
config: TapasConfig
)
Parameters
config (TapasConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Tapas Model with a cell selection head and optional aggregation head on top for question-answering tasks on tables
(linear layers on top of the hidden-states output to compute logits and optional logits_aggregation), e.g. for
SQA, WTQ or WikiSQL-supervised tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
table_mask: typing.Optional[torch.LongTensor] = None
labels: typing.Optional[torch.LongTensor] = None
aggregation_labels: typing.Optional[torch.LongTensor] = None
float_answer: typing.Optional[torch.FloatTensor] = None
numeric_values: typing.Optional[torch.FloatTensor] = None
numeric_values_scale: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→ transformers.models.tapas.modeling_tapas.TableQuestionAnsweringOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length, 7), optional) —
Token indices that encode tabular structure. Indices can be obtained using AutoTokenizer. See this
class for more info.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. If
reset_position_index_per_cell of TapasConfig is set to True, relative position embeddings will be
used. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: - 1
indicates the head is not masked, - 0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
table_mask (torch.LongTensor of shape (batch_size, seq_length), optional) —
Mask for the table. Indicates which tokens belong to the table (1). Question tokens, table headers and
padding are 0.
labels (torch.LongTensor of shape (batch_size, seq_length), optional) —
Labels per token for computing the hierarchical cell selection loss. This encodes the positions of the
answer appearing in the table. Can be obtained using AutoTokenizer.
1 for tokens that are part of the answer,
0 for tokens that are not part of the answer.
aggregation_labels (torch.LongTensor of shape (batch_size, ), optional) —
Aggregation function index for every example in the batch for computing the aggregation loss. Indices
should be in [0, ..., config.num_aggregation_labels - 1]. Only required in case of strong supervision for
aggregation (WikiSQL-supervised).
float_answer (torch.FloatTensor of shape (batch_size, ), optional) —
Float answer for every example in the batch. Set to float(‘nan’) for cell selection questions. Only
required in case of weak supervision (WTQ) to calculate the aggregate mask and regression loss.
numeric_values (torch.FloatTensor of shape (batch_size, seq_length), optional) —
Numeric values of every token, NaN for tokens which are not numeric values. Can be obtained using
AutoTokenizer. Only required in case of weak supervision for aggregation (WTQ) to calculate the
regression loss.
numeric_values_scale (torch.FloatTensor of shape (batch_size, seq_length), optional) —
Scale of the numeric values of every token. Can be obtained using AutoTokenizer. Only required in case
of weak supervision for aggregation (WTQ) to calculate the regression loss.
Returns
transformers.models.tapas.modeling_tapas.TableQuestionAnsweringOutput or tuple(torch.FloatTensor)
A transformers.models.tapas.modeling_tapas.TableQuestionAnsweringOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (TapasConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels (and possibly answer, aggregation_labels, numeric_values and numeric_values_scale) are provided) — Total loss as the sum of the hierarchical cell selection log-likelihood loss and (optionally) the
semi-supervised regression loss and (optionally) supervised loss for aggregations.
logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Prediction scores of the cell selection head, for every token.
logits_aggregation (torch.FloatTensor, optional, of shape (batch_size, num_aggregation_labels)) — Prediction scores of the aggregation head, for every aggregation operator.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The TapasForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, TapasForQuestionAnswering
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("google/tapas-base-finetuned-wtq")
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")
data = {
... "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
... "Age": ["56", "45", "59"],
... "Number of movies": ["87", "53", "69"],
... }
table = pd.DataFrame.from_dict(data)
queries = ["How many movies has George Clooney played in?", "How old is Brad Pitt?"]
inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
logits_aggregation = outputs.logits_aggregation
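The token-level logits and the aggregation logits can be post-processed into cell coordinates and aggregation indices with TapasTokenizer.convert_logits_to_predictions. A minimal sketch continuing the example above:
# convert the logits into predicted cell coordinates (and aggregation indices)
predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions(
...     inputs, logits.detach(), logits_aggregation.detach()
... )
# look up the selected cells in the original table
for coordinates in predicted_answer_coordinates:
...     print([table.iat[coordinate] for coordinate in coordinates])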
TFTapasModel
class transformers.TFTapasModel
(
*args
**kwargs
)
Parameters
config (TapasConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Tapas Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→ transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray]; each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, 7), optional) —
Token indices that encode tabular structure. Indices can be obtained using AutoTokenizer. See this
class for more info.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. If
reset_position_index_per_cell of TapasConfig is set to True, relative position embeddings will be
used. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (TapasConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input; you're often better off
averaging or pooling the sequence of hidden-states for the whole input sequence.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFTapasModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, TFTapasModel
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("google/tapas-base")
model = TFTapasModel.from_pretrained("google/tapas-base")
data = {
... "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
... "Age": ["56", "45", "59"],
... "Number of movies": ["87", "53", "69"],
... }
table = pd.DataFrame.from_dict(data)
queries = ["How many movies has George Clooney played in?", "How old is Brad Pitt?"]
inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="tf")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
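As the returns section notes, pooler_output is usually not a good summary of the input; a masked mean over the token hidden states often works better. A minimal sketch continuing the example above (tensorflow as tf is an added import):
import tensorflow as tf
# masked mean over the sequence dimension as an alternative sequence embedding
mask = tf.cast(inputs["attention_mask"], last_hidden_states.dtype)[:, :, tf.newaxis]
mean_pooled = tf.reduce_sum(last_hidden_states * mask, axis=1) / tf.reduce_sum(mask, axis=1)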
TFTapasForMaskedLM
class transformers.TFTapasForMaskedLM
(
*args
**kwargs
)
Parameters
config (TapasConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Tapas Model with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→ transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray]; each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, 7), optional) —
Token indices that encode tabular structure. Indices can be obtained using AutoTokenizer. See this
class for more info.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. If
reset_position_index_per_cell of TapasConfig is set to True, relative position embeddings will be
used. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (TapasConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFTapasForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, TFTapasForMaskedLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("google/tapas-base")
model = TFTapasForMaskedLM.from_pretrained("google/tapas-base")
data = {
... "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
... "Age": ["56", "45", "59"],
... "Number of movies": ["87", "53", "69"],
... }
table = pd.DataFrame.from_dict(data)
inputs = tokenizer(
... table=table, queries="How many [MASK] has George [MASK] played in?", return_tensors="tf"
... )
labels = tokenizer(
... table=table, queries="How many movies has George Clooney played in?", return_tensors="tf"
... )["input_ids"]
outputs = model(**inputs, labels=labels)
logits = outputs.logits
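As in the PyTorch example above, the predicted tokens at the [MASK] positions can be recovered with an argmax over the logits. A minimal sketch continuing the example (tensorflow as tf is an added import):
import tensorflow as tf
# locate the [MASK] positions and pick the most likely token at each one
mask_positions = tf.where(inputs["input_ids"] == tokenizer.mask_token_id)
predicted_token_ids = tf.argmax(tf.gather_nd(logits, mask_positions), axis=-1)
print(tokenizer.decode(predicted_token_ids))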
TFTapasForSequenceClassification
class transformers.TFTapasForSequenceClassification
(
*args
**kwargs
)
Parameters
config (TapasConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Tapas Model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for table
entailment tasks, such as TabFact (Chen et al., 2020).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→ transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray]; each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, 7), optional) —
Token indices that encode tabular structure. Indices can be obtained using AutoTokenizer. See this
class for more info.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. If
reset_position_index_per_cell of TapasConfig is set to True, relative position embeddings will be
used. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (np.ndarray or tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy). Note: this is called
“classification_class_index” in the original implementation.
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (TapasConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFTapasForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, TFTapasForSequenceClassification
import tensorflow as tf
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("google/tapas-base-finetuned-tabfact")
model = TFTapasForSequenceClassification.from_pretrained("google/tapas-base-finetuned-tabfact")
data = {
... "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
... "Age": ["56", "45", "59"],
... "Number of movies": ["87", "53", "69"],
... }
table = pd.DataFrame.from_dict(data)
queries = [
... "There is only one actor who is 45 years old",
... "There are 3 actors which played in more than 60 movies",
... ]
inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="tf")
labels = tf.convert_to_tensor([1, 0]) # 1 means entailed, 0 means refuted
outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
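The predicted class per query is again the argmax over the logits; the label names come from model.config.id2label and may be generic depending on the checkpoint. A minimal sketch continuing the example above:
# 1 means entailed, 0 means refuted, as in the labels above
predicted_class_ids = tf.argmax(logits, axis=-1).numpy()
for query, class_id in zip(queries, predicted_class_ids):
...     print(query, "->", model.config.id2label[int(class_id)])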
TFTapasForQuestionAnswering
class transformers.TFTapasForQuestionAnswering
(
*args
**kwargs
)
Parameters
config (TapasConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Tapas Model with a cell selection head and optional aggregation head on top for question-answering tasks on tables
(linear layers on top of the hidden-states output to compute logits and optional logits_aggregation), e.g. for
SQA, WTQ or WikiSQL-supervised tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
table_mask: np.ndarray | tf.Tensor | None = None
aggregation_labels: np.ndarray | tf.Tensor | None = None
float_answer: np.ndarray | tf.Tensor | None = None
numeric_values: np.ndarray | tf.Tensor | None = None
numeric_values_scale: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→ transformers.models.tapas.modeling_tf_tapas.TFTableQuestionAnsweringOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray]; each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, 7), optional) —
Token indices that encode tabular structure. Indices can be obtained using AutoTokenizer. See this
class for more info.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. If
reset_position_index_per_cell of TapasConfig is set to True, relative position embeddings will be
used. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
table_mask (tf.Tensor of shape (batch_size, seq_length), optional) —
Mask for the table. Indicates which tokens belong to the table (1). Question tokens, table headers and
padding are 0.
labels (tf.Tensor of shape (batch_size, seq_length), optional) —
Labels per token for computing the hierarchical cell selection loss. This encodes the positions of the
answer appearing in the table. Can be obtained using AutoTokenizer.
1 for tokens that are part of the answer,
0 for tokens that are not part of the answer.
aggregation_labels (tf.Tensor of shape (batch_size, ), optional) —
Aggregation function index for every example in the batch for computing the aggregation loss. Indices
should be in [0, ..., config.num_aggregation_labels - 1]. Only required in case of strong supervision for
aggregation (WikiSQL-supervised).
float_answer (tf.Tensor of shape (batch_size, ), optional) —
Float answer for every example in the batch. Set to float(‘nan’) for cell selection questions. Only
required in case of weak supervision (WTQ) to calculate the aggregate mask and regression loss.
numeric_values (tf.Tensor of shape (batch_size, seq_length), optional) —
Numeric values of every token, NaN for tokens which are not numeric values. Can be obtained using
AutoTokenizer. Only required in case of weak supervision for aggregation (WTQ) to calculate the
regression loss.
numeric_values_scale (tf.Tensor of shape (batch_size, seq_length), optional) —
Scale of the numeric values of every token. Can be obtained using AutoTokenizer. Only required in case
of weak supervision for aggregation (WTQ) to calculate the regression loss.
Returns
transformers.models.tapas.modeling_tf_tapas.TFTableQuestionAnsweringOutput or tuple(tf.Tensor)
A transformers.models.tapas.modeling_tf_tapas.TFTableQuestionAnsweringOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (TapasConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels (and possibly answer, aggregation_labels, numeric_values and numeric_values_scale) are provided) — Total loss as the sum of the hierarchical cell selection log-likelihood loss and (optionally) the
semi-supervised regression loss and (optionally) supervised loss for aggregations.
logits (tf.Tensor of shape (batch_size, sequence_length)) — Prediction scores of the cell selection head, for every token.
logits_aggregation (tf.Tensor, optional, of shape (batch_size, num_aggregation_labels)) — Prediction scores of the aggregation head, for every aggregation operator.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus
the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The TFTapasForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, TFTapasForQuestionAnswering
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("google/tapas-base-finetuned-wtq")
model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")
data = {
... "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
... "Age": ["56", "45", "59"],
... "Number of movies": ["87", "53", "69"],
... }
table = pd.DataFrame.from_dict(data)
queries = ["How many movies has George Clooney played in?", "How old is Brad Pitt?"]
inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="tf")
outputs = model(**inputs)
logits = outputs.logits
logits_aggregation = outputs.logits_aggregation
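Here too, the logits can be post-processed into cell coordinates and aggregation indices with TapasTokenizer.convert_logits_to_predictions. A minimal sketch continuing the example above (depending on your version, you may need to convert the TF tensors to NumPy first):
# convert the logits into predicted cell coordinates (and aggregation indices)
predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions(
...     inputs, logits, logits_aggregation
... )
# look up the selected cells in the original table
for coordinates in predicted_answer_coordinates:
...     print([table.iat[coordinate] for coordinate in coordinates])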
Llama2
Overview
The Llama2 model was proposed in Llama 2: Open Foundation and Fine-Tuned Chat Models by Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom. It is a collection of foundation language models ranging from 7B to 70B parameters, with checkpoints fine-tuned for chat applications.
The abstract from the paper is the following:
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.
Check out all Llama2 models here
Tips:
Weights for the Llama2 models can be obtained by filling out this form
The architecture is very similar to the first Llama, with the addition of Grouped Query Attention (GQA) following this paper
Setting config.pretraining_tp to a value other than 1 will activate the more accurate but slower computation of the linear layers, which should better match the original logits.
The original model uses pad_id = -1, which means that there is no padding token. We can't use the same logic here; make sure to add a padding token using tokenizer.add_special_tokens({"pad_token":"<pad>"}) and resize the token embeddings accordingly. You should also set model.config.pad_token_id. The embed_tokens layer of the model is initialized with self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.config.padding_idx), which makes sure that encoding the padding token will output zeros, so passing it when initializing is recommended (see the sketch after the loading example below).
After filling out the form and gaining access to the model checkpoints, you should be able to use the already converted checkpoints. Otherwise, if you are converting your own model, feel free to use the conversion script. The script can be called with the following (example) command:
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
After conversion, the model and tokenizer can be loaded via:
from transformers import LlamaForCausalLM, LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("/output/path")
model = LlamaForCausalLM.from_pretrained("/output/path")
Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even if the biggest versions
come in several checkpoints, each of them contains a part of each weight of the model, so we need to load them all in RAM). For the 70B model, that means 145GB of RAM is needed.
The LLaMA tokenizer is a BPE model based on sentencepiece. One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. “Banana”), the tokenizer does not prepend the prefix space to the string.
This model was contributed by Arthur Zucker with contributions from Lysandre Debut. The code of the implementation in Hugging Face is based on GPT-NeoX here. The original code of the authors can be found here.
LlamaConfig
class transformers.LlamaConfig
<
source
>
(
vocab_size = 32000
hidden_size = 4096
intermediate_size = 11008
num_hidden_layers = 32
num_attention_heads = 32
num_key_value_heads = None
hidden_act = 'silu'
max_position_embeddings = 2048
initializer_range = 0.02
rms_norm_eps = 1e-06
use_cache = True
pad_token_id = 0
bos_token_id = 1
eos_token_id = 2
pretraining_tp = 1
tie_word_embeddings = False
rope_scaling = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 32000) —
Vocabulary size of the LLaMA model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling LlamaModel
hidden_size (int, optional, defaults to 4096) —
Dimension of the hidden representations.
intermediate_size (int, optional, defaults to 11008) —
Dimension of the MLP representations.
num_hidden_layers (int, optional, defaults to 32) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 32) —
Number of attention heads for each attention layer in the Transformer encoder.
num_key_value_heads (int, optional) —
This is the number of key/value heads that should be used to implement Grouped Query Attention. If
num_key_value_heads=num_attention_heads, the model will use Multi Head Attention (MHA); if
num_key_value_heads=1, the model will use Multi Query Attention (MQA); otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by mean-pooling all the original heads within that group. For more details, check out this paper (https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to num_attention_heads.
pretraining_tp (int, optional, defaults to 1) —
Experimental feature. Tensor parallelism rank used during pretraining. Please refer to this
document to understand more about it. This value is
necessary to ensure exact reproducibility of the pretraining results. Please refer to this
issue.
hidden_act (str or function, optional, defaults to "silu") —
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (int, optional, defaults to 2048) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (float, optional, defaults to 1e-06) —
The epsilon used by the RMS normalization layers.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
tie_word_embeddings (bool, optional, defaults to False) —
Whether to tie weight embeddings.
rope_scaling (Dict, optional) —
Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format
is {"type": strategy name, "factor": scaling factor}. When using this flag, don’t update
max_position_embeddings to the expected new maximum. See the following thread for more information on how
these scaling strategies behave:
https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an
experimental feature, subject to breaking API changes in future versions.
This is the configuration class to store the configuration of a LlamaModel. It is used to instantiate an LLaMA
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the LLaMA-7B.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
from transformers import LlamaModel, LlamaConfig
# Initializing a LLaMA llama-7b style configuration
configuration = LlamaConfig()
# Initializing a model from the llama-7b style configuration
model = LlamaModel(configuration)
# Accessing the model configuration
configuration = model.config
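As a sketch of the num_key_value_heads and rope_scaling options documented above, with purely illustrative values:
from transformers import LlamaConfig
# Grouped Query Attention: 32 query heads sharing 8 key/value heads,
# combined with linear RoPE scaling by an illustrative factor of 2.0.
configuration = LlamaConfig(
    num_attention_heads=32,
    num_key_value_heads=8,
    rope_scaling={"type": "linear", "factor": 2.0},
)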
LlamaTokenizer
class transformers.LlamaTokenizer
<
source
>
(
vocab_file
unk_token = '<unk>'
bos_token = '<s>'
eos_token = '</s>'
pad_token = None
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
add_bos_token = True
add_eos_token = False
clean_up_tokenization_spaces = False
legacy = True
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
legacy (bool, optional, defaults to True) —
Whether or not the legacy behaviour of the tokenizer should be used. Legacy is before the merge of #24622,
which includes fixes to properly handle tokens that appear after special tokens.
Construct a Llama tokenizer. Based on byte-level Byte-Pair-Encoding. The default padding token is unset as there is
no padding token in the original model.
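A minimal usage sketch, assuming the public test tokenizer hf-internal-testing/llama-tokenizer (also used further down this page) is sufficient for your purposes:
from transformers import LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")
# Encoding prepends a BOS token by default (add_bos_token=True).
ids = tokenizer("Hello this is a test").input_ids
# Decode back to text, dropping the special tokens.
text = tokenizer.decode(ids, skip_special_tokens=True)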
build_inputs_with_special_tokens
<
source
>
(
token_ids_0
token_ids_1 = None
)
get_special_tokens_mask
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
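For instance, with the public test tokenizer used elsewhere on this page (the output shown is illustrative of the format: a 1 marks where the BOS token would be added):
from transformers import LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")
# Token ids without any special tokens added.
ids = tokenizer.encode("Hello this is a test", add_special_tokens=False)
# 1 where a special token would sit once special tokens are added, 0 for sequence tokens.
tokenizer.get_special_tokens_mask(ids)
# [1, 0, 0, 0, 0, 0]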
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of ids.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Creates a mask from the two sequences passed, to be used in a sequence-pair classification task. A
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, only returns the first portion of the mask (0s).
save_vocabulary
<
source
>
(
save_directory
filename_prefix: typing.Optional[str] = None
)
→
Tuple(str)
Parameters
save_directory (str) —
The directory in which to save the vocabulary.
Returns
Tuple(str)
Paths to the files saved.
Save the vocabulary and special tokens file to a directory.
LlamaTokenizerFast
class transformers.LlamaTokenizerFast
<
source
>
(
vocab_file = None
tokenizer_file = None
clean_up_tokenization_spaces = False
unk_token = '<unk>'
bos_token = '<s>'
eos_token = '</s>'
add_bos_token = True
add_eos_token = False
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .model extension) that
contains the vocabulary necessary to instantiate a tokenizer.
tokenizer_file (str) —
tokenizers file (generally has a .json extension) that
contains everything needed to load the tokenizer.
clean_up_tokenization_spaces (bool, optional, defaults to False) —
Whether to clean up spaces after decoding; cleanup consists of removing potential artifacts like extra
spaces.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
Construct a Llama tokenizer. Based on byte-level Byte-Pair-Encoding.
This uses notably ByteFallback and no normalization.
from transformers import LlamaTokenizerFast
tokenizer = LlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")
tokenizer.encode("Hello this is a test")
[1, 15043, 445, 338, 263, 1243]
If you want to change the bos_token or the eos_token, make sure to specify them when initializing the model, or
call tokenizer.update_post_processor() to make sure that the post-processing is correctly done (otherwise the
values of the first token and final token of an encoded sequence will not be correct). For more details, check out
the post-processors documentation (https://huggingface.co/docs/tokenizers/api/post-processors).
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
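A sketch of the bos/eos update flow described above; the token values below are only for illustration:
from transformers import LlamaTokenizerFast
tokenizer = LlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")
# Change the special tokens ...
tokenizer.bos_token = "<s>"
tokenizer.eos_token = "</s>"
# ... then rebuild the post-processor so encoded sequences pick up the new values.
tokenizer.update_post_processor()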
build_inputs_with_special_tokens
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) — The first tokenized sequence.
token_ids_1 (List[int], optional) — The second tokenized sequence.
Returns
List[int]
The model input with special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens.
This implementation does not add special tokens and this method should be overridden in a subclass.
get_special_tokens_mask
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
A list of integers in the range [0, 1]
Parameters
token_ids_0 (List[int]) —
List of ids of the first sequence.
token_ids_1 (List[int], optional) —
List of ids of the second sequence.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
A list of integers in the range [0, 1]
1 for a special token, 0 for a sequence token.
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model or encode_plus methods.
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) — The first tokenized sequence.
token_ids_1 (List[int], optional) — The second tokenized sequence.
Returns
List[int]
The token type ids.
Create the token type IDs corresponding to the sequences passed. What are token type
IDs?
Should be overridden in a subclass if the model has a special way of building those.
update_post_processor
<
source
>
(
)
Updates the underlying post processor with the current bos_token and eos_token.
save_vocabulary
<
source
>
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
LlamaModel
class transformers.LlamaModel
<
source
>
(
config: LlamaConfig
)
Parameters
config (LlamaConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare LLaMA Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Transformer decoder consisting of config.num_hidden_layers layers. Each layer is a LlamaDecoderLayer
forward
<
source
>
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
If past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
If you want to change padding behavior, you should read modeling_opt._prepare_decoder_attention_mask
and modify to your needs. See diagram 1 in the paper for more
information on the default strategy.
1 indicates the head is not masked,
0 indicates the head is masked.
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
The LlamaModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
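A hedged usage sketch for the bare model; PATH_TO_CONVERTED_WEIGHTS and PATH_TO_CONVERTED_TOKENIZER are placeholders for a converted checkpoint, as in the LlamaForCausalLM example below:
import torch
from transformers import AutoTokenizer, LlamaModel
tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
model = LlamaModel.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
inputs = tokenizer("Hey, how are you?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Raw hidden states of the last decoder layer: (batch_size, sequence_length, hidden_size)
last_hidden_state = outputs.last_hidden_state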
LlamaForCausalLM
class transformers.LlamaForCausalLM
<
source
>
(
config
)
forward
<
source
>
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
If past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
If you want to change padding behavior, you should read modeling_opt._prepare_decoder_attention_mask
and modify to your needs. See diagram 1 in the paper for more
information on the default strategy.
1 indicates the head is not masked,
0 indicates the head is masked.
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LlamaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head))
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LlamaForCausalLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, LlamaForCausalLM
model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt")
# Generate
generate_ids = model.generate(inputs.input_ids, max_length=30)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
"Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
LlamaForSequenceClassification
class transformers.LlamaForSequenceClassification
<
source
>
(
config
)
Parameters
config (LlamaConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The LLaMA Model transformer with a sequence classification head on top (linear layer).
LlamaForSequenceClassification uses the last token in order to do the classification, as other causal models
(e.g. GPT-2) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If
no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when inputs_embeds are passed instead of input_ids, it does the same (takes the last value in
each row of the batch).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
If past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
If you want to change padding behavior, you should read modeling_opt._prepare_decoder_attention_mask
and modify to your needs. See diagram 1 in the paper for more
information on the default strategy.
1 indicates the head is not masked,
0 indicates the head is masked.
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
The LlamaForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
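A hedged usage sketch; the checkpoint placeholders follow the convention used above, and the classification head is randomly initialized unless you load a checkpoint fine-tuned for classification:
import torch
from transformers import AutoTokenizer, LlamaForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
model = LlamaForSequenceClassification.from_pretrained(PATH_TO_CONVERTED_WEIGHTS, num_labels=2)
# If you added a padding token to the tokenizer (see the tips above), propagate its id
# so the model can locate the last non-padding token of each row.
model.config.pad_token_id = tokenizer.pad_token_id
inputs = tokenizer("I really enjoyed this movie!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax(-1).item()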
TimeSformer
Overview
The TimeSformer model was proposed in TimeSformer: Is Space-Time Attention All You Need for Video Understanding? by Facebook Research.
This work is a milestone in the action-recognition field, being the first video transformer. It inspired many transformer-based video understanding and classification papers.
The abstract from the paper is the following:
We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named “TimeSformer,” adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental study compares different self-attention schemes and suggests that “divided attention,” where temporal attention and spatial attention are separately applied within each block, leads to the best video classification accuracy among the design choices considered. Despite the radically new design, TimeSformer achieves state-of-the-art results on several action recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. Finally, compared to 3D convolutional networks, our model is faster to train, it can achieve dramatically higher test efficiency (at a small drop in accuracy), and it can also be applied to much longer video clips (over one minute long). Code and models are available at: this https URL.
Tips:
There are many pretrained variants. Select your pretrained model based on the dataset it is trained on. Moreover, the number of input frames per clip changes based on the model size so you should consider this parameter while selecting your pretrained model.
This model was contributed by fcakyon.
The original code can be found here.
Documentation resources
Video classification task guide
TimesformerConfig
class transformers.TimesformerConfig
<
source
>
(
image_size = 224
patch_size = 16
num_channels = 3
num_frames = 8
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
initializer_range = 0.02
layer_norm_eps = 1e-06
qkv_bias = True
attention_type = 'divided_space_time'
drop_path_rate = 0
**kwargs
)
Parameters
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 16) —
The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) —
The number of input channels.
num_frames (int, optional, defaults to 8) —
The number of frames in each video.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-6) —
The epsilon used by the layer normalization layers.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries, keys and values.
attention_type (str, optional, defaults to "divided_space_time") —
The attention type to use. Must be one of "divided_space_time", "space_only", "joint_space_time".
drop_path_rate (float, optional, defaults to 0) —
The dropout ratio for stochastic depth.
This is the configuration class to store the configuration of a TimesformerModel. It is used to instantiate a
TimeSformer model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the TimeSformer
facebook/timesformer-base-finetuned-k600
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import TimesformerConfig, TimesformerModel
# Initializing a TimeSformer timesformer-base style configuration
configuration = TimesformerConfig()
# Initializing a model from the configuration
model = TimesformerModel(configuration)
# Accessing the model configuration
configuration = model.config
TimesformerModel
class transformers.TimesformerModel
<
source
>
(
config
)
Parameters
config (TimesformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare TimeSformer Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
pixel_values
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
VideoMAEImageProcessor.preprocess() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (TimesformerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TimesformerModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
import av
import numpy as np
from transformers import AutoImageProcessor, TimesformerModel
from huggingface_hub import hf_hub_download
np.random.seed(0)
def read_video_pyav(container, indices):
... '''
... Decode the video with PyAV decoder.
... Args:
... container (`av.container.input.InputContainer`): PyAV container.
... indices (`List[int]`): List of frame indices to decode.
... Returns:
... result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
... '''
... frames = []
... container.seek(0)
... start_index = indices[0]
... end_index = indices[-1]
... for i, frame in enumerate(container.decode(video=0)):
... if i > end_index:
... break
... if i >= start_index and i in indices:
... frames.append(frame)
... return np.stack([x.to_ndarray(format="rgb24") for x in frames])
def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
... converted_len = int(clip_len * frame_sample_rate)
... end_idx = np.random.randint(converted_len, seg_len)
... start_idx = end_idx - converted_len
... indices = np.linspace(start_idx, end_idx, num=clip_len)
... indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
... return indices
# video clip consists of 300 frames (10 seconds at 30 FPS)
file_path = hf_hub_download(
... repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
... )
container = av.open(file_path)
# sample 8 frames
indices = sample_frame_indices(clip_len=8, frame_sample_rate=4, seg_len=container.streams.video[0].frames)
video = read_video_pyav(container, indices)
image_processor = AutoImageProcessor.from_pretrained("MCG-NJU/videomae-base")
model = TimesformerModel.from_pretrained("facebook/timesformer-base-finetuned-k400")
# prepare video for the model
inputs = image_processor(list(video), return_tensors="pt")
# forward pass
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 1569, 768]
TimesformerForVideoClassification
class transformers.TimesformerForVideoClassification
<
source
>
(
config
)
Parameters
config (TimesformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
TimeSformer Model transformer with a video classification head on top (a linear layer on top of the final hidden state
of the [CLS] token), e.g. for Kinetics-400.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
VideoMAEImageProcessor.preprocess() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (TimesformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states
(also called feature maps) of the model at the output of each stage.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TimesformerForVideoClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
import av
import torch
import numpy as np
from transformers import AutoImageProcessor, TimesformerForVideoClassification
from huggingface_hub import hf_hub_download
np.random.seed(0)
def read_video_pyav(container, indices):
... '''
... Decode the video with PyAV decoder.
... Args:
... container (`av.container.input.InputContainer`): PyAV container.
... indices (`List[int]`): List of frame indices to decode.
... Returns:
... result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
... '''
... frames = []
... container.seek(0)
... start_index = indices[0]
... end_index = indices[-1]
... for i, frame in enumerate(container.decode(video=0)):
... if i > end_index:
... break
... if i >= start_index and i in indices:
... frames.append(frame)
... return np.stack([x.to_ndarray(format="rgb24") for x in frames])
def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
... converted_len = int(clip_len * frame_sample_rate)
... end_idx = np.random.randint(converted_len, seg_len)
... start_idx = end_idx - converted_len
... indices = np.linspace(start_idx, end_idx, num=clip_len)
... indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
... return indices
# video clip consists of 300 frames (10 seconds at 30 FPS)
file_path = hf_hub_download(
... repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
... )
container = av.open(file_path)
# sample 8 frames
indices = sample_frame_indices(clip_len=8, frame_sample_rate=1, seg_len=container.streams.video[0].frames)
video = read_video_pyav(container, indices)
image_processor = AutoImageProcessor.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")
model = TimesformerForVideoClassification.from_pretrained("facebook/timesformer-base-finetuned-k400")
inputs = image_processor(list(video), return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
... logits = outputs.logits
# model predicts one of the 400 Kinetics-400 classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
eating spaghetti
OWL-ViT
Overview
The OWL-ViT (short for Vision Transformer for Open-World Localization) was proposed in Simple Open-Vocabulary Object Detection with Vision Transformers by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. OWL-ViT is an open-vocabulary object detection network trained on a variety of (image, text) pairs. It can be used to query an image with one or multiple text queries to search for and detect target objects described in text.
The abstract from the paper is the following:
Combining simple architectures with large-scale pre-training has led to massive improvements in image classification. For object detection, pre-training and scaling approaches are less well established, especially in the long-tailed and open-vocabulary setting, where training data is relatively scarce. In this paper, we propose a strong recipe for transferring image-text models to open-vocabulary object detection. We use a standard Vision Transformer architecture with minimal modifications, contrastive image-text pre-training, and end-to-end detection fine-tuning. Our analysis of the scaling properties of this setup shows that increasing image-level pre-training and model size yield consistent improvements on the downstream detection task. We provide the adaptation strategies and regularizations needed to attain very strong performance on zero-shot text-conditioned and one-shot image-conditioned object detection. Code and models are available on GitHub.
Usage
OWL-ViT is a zero-shot text-conditioned object detection model. OWL-ViT uses CLIP as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. To use CLIP for detection, OWL-ViT removes the final token pooling layer of the vision model and attaches a lightweight classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch and fine-tune it end-to-end with the classification and box heads on standard detection datasets using a bipartite matching loss. One or multiple text queries per image can be used to perform zero-shot text-conditioned object detection.
OwlViTImageProcessor can be used to resize (or rescale) and normalize images for the model and CLIPTokenizer is used to encode the text. OwlViTProcessor wraps OwlViTImageProcessor and CLIPTokenizer into a single instance to both encode the text and prepare the images. The following example shows how to perform object detection using OwlViTProcessor and OwlViTForObjectDetection.
import requests
from PIL import Image
import torch
from transformers import OwlViTProcessor, OwlViTForObjectDetection
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
outputs = model(**inputs)
# Target image sizes (height, width) to rescale box predictions [batch_size, 2]
target_sizes = torch.Tensor([image.size[::-1]])
# Convert outputs (bounding boxes and class logits) to COCO API
results = processor.post_process_object_detection(outputs=outputs, target_sizes=target_sizes, threshold=0.1)
i = 0 # Retrieve predictions for the first image for the corresponding text queries
text = texts[i]
boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"]
for box, score, label in zip(boxes, scores, labels):
... box = [round(i, 2) for i in box.tolist()]
... print(f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}")
Detected a photo of a cat with confidence 0.707 at location [324.97, 20.44, 640.58, 373.29]
Detected a photo of a cat with confidence 0.717 at location [1.46, 55.26, 315.55, 472.17]
This model was contributed by adirik. The original code can be found here.
OwlViTConfig
class transformers.OwlViTConfig
<
source
>
(
text_config = None
vision_config = None
projection_dim = 512
logit_scale_init_value = 2.6592
return_dict = True
**kwargs
)
Parameters
text_config (dict, optional) —
Dictionary of configuration options used to initialize OwlViTTextConfig.
vision_config (dict, optional) —
Dictionary of configuration options used to initialize OwlViTVisionConfig.
projection_dim (int, optional, defaults to 512) —
Dimensionality of text and vision projection layers.
logit_scale_init_value (float, optional, defaults to 2.6592) —
The initial value of the logit_scale parameter. Default is used as per the original OWL-ViT
implementation.
kwargs (optional) —
Dictionary of keyword arguments.
OwlViTConfig is the configuration class to store the configuration of an OwlViTModel. It is used to
instantiate an OWL-ViT model according to the specified arguments, defining the text model and vision model
configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the OWL-ViT
google/owlvit-base-patch32 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
from_text_vision_configs
<
source
>
(
text_config: typing.Dict
vision_config: typing.Dict
**kwargs
)
→
OwlViTConfig
Returns
OwlViTConfig
An instance of a configuration object
Instantiate an OwlViTConfig (or a derived class) from an OWL-ViT text model configuration and an OWL-ViT vision
model configuration.
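A minimal sketch of combining default sub-configurations into a full config; from_text_vision_configs expects plain dictionaries, so the sub-configs are converted with to_dict():
from transformers import OwlViTConfig, OwlViTTextConfig, OwlViTVisionConfig
# Build default text and vision configurations, then merge them into one OwlViTConfig.
text_config = OwlViTTextConfig()
vision_config = OwlViTVisionConfig()
config = OwlViTConfig.from_text_vision_configs(
    text_config=text_config.to_dict(), vision_config=vision_config.to_dict()
)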
OwlViTTextConfig
class transformers.OwlViTTextConfig
<
source
>
(
vocab_size = 49408
hidden_size = 512
intermediate_size = 2048
num_hidden_layers = 12
num_attention_heads = 8
max_position_embeddings = 16
hidden_act = 'quick_gelu'
layer_norm_eps = 1e-05
attention_dropout = 0.0
initializer_range = 0.02
initializer_factor = 1.0
pad_token_id = 0
bos_token_id = 49406
eos_token_id = 49407
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 49408) —
Vocabulary size of the OWL-ViT text model. Defines the number of different tokens that can be represented
by the inputs_ids passed when calling OwlViTTextModel.
hidden_size (int, optional, defaults to 512) —
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 2048) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer encoder.
max_position_embeddings (int, optional, defaults to 16) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
hidden_act (str or function, optional, defaults to "quick_gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu", "gelu_new" and "quick_gelu" are supported.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (float, optional, defaults to 1) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
This is the configuration class to store the configuration of an OwlViTTextModel. It is used to instantiate an
OwlViT text encoder according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the OwlViT
google/owlvit-base-patch32 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import OwlViTTextConfig, OwlViTTextModel
# Initializing a OwlViTTextModel with google/owlvit-base-patch32 style configuration
configuration = OwlViTTextConfig()
# Initializing a OwlViTTextConfig from the google/owlvit-base-patch32 style configuration
model = OwlViTTextModel(configuration)
# Accessing the model configuration
configuration = model.config
OwlViTVisionConfig
class transformers.OwlViTVisionConfig
<
source
>
(
hidden_size = 768
intermediate_size = 3072
num_hidden_layers = 12
num_attention_heads = 12
num_channels = 3
image_size = 768
patch_size = 32
hidden_act = 'quick_gelu'
layer_norm_eps = 1e-05
attention_dropout = 0.0
initializer_range = 0.02
initializer_factor = 1.0
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
num_channels (int, optional, defaults to 3) —
Number of channels in the input images.
image_size (int, optional, defaults to 768) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 32) —
The size (resolution) of each patch.
hidden_act (str or function, optional, defaults to "quick_gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu", "gelu_new" and "quick_gelu" are supported.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (float, optional, defaults to 1) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
This is the configuration class to store the configuration of an OwlViTVisionModel. It is used to instantiate
an OWL-ViT image encoder according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the OWL-ViT
google/owlvit-base-patch32 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import OwlViTVisionConfig, OwlViTVisionModel
# Initializing an OwlViTVisionConfig with the google/owlvit-base-patch32 style configuration
configuration = OwlViTVisionConfig()
# Initializing an OwlViTVisionModel (with random weights) from the google/owlvit-base-patch32 style configuration
model = OwlViTVisionModel(configuration)
# Accessing the model configuration
configuration = model.config
OwlViTImageProcessor
class transformers.OwlViTImageProcessor
(
do_resize = True
size = None
resample = <Resampling.BICUBIC: 3>
do_center_crop = False
crop_size = None
do_rescale = True
rescale_factor = 0.00392156862745098
do_normalize = True
image_mean = None
image_std = None
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the shorter edge of the input to a certain size.
size (Dict[str, int], optional, defaults to {"height": 768, "width": 768}) —
The size to use for resizing the image. Only has an effect if do_resize is set to True. If size is a
sequence like (h, w), output size will be matched to this. If size is an int, then image will be resized
to (size, size).
resample (int, optional, defaults to PIL.Image.Resampling.BICUBIC) —
An optional resampling filter. This can be one of PIL.Image.Resampling.NEAREST,
PIL.Image.Resampling.BOX, PIL.Image.Resampling.BILINEAR, PIL.Image.Resampling.HAMMING,
PIL.Image.Resampling.BICUBIC or PIL.Image.Resampling.LANCZOS. Only has an effect if do_resize is set
to True.
do_center_crop (bool, optional, defaults to False) —
Whether to crop the input at the center. If the input size is smaller than crop_size along any edge, the
image is padded with 0’s and then center cropped.
crop_size (Dict[str, int], optional, defaults to {"height": 768, "width": 768}) —
The size to use for center cropping the image. Only has an effect if do_center_crop is set to True.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the input by a certain factor.
rescale_factor (float, optional, defaults to 1/255) —
The factor to use for rescaling the image. Only has an effect if do_rescale is set to True.
do_normalize (bool, optional, defaults to True) —
Whether or not to normalize the input with image_mean and image_std.
image_mean (List[float], optional, defaults to [0.48145466, 0.4578275, 0.40821073]) —
The sequence of means for each channel, to be used when normalizing images.
image_std (List[float], optional, defaults to [0.26862954, 0.26130258, 0.27577711]) —
The sequence of standard deviations for each channel, to be used when normalizing images.
Constructs an OWL-ViT image processor.
This image processor inherits from ImageProcessingMixin which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: typing.Optional[bool] = None
size: typing.Union[typing.Dict[str, int], NoneType] = None
resample: Resampling = None
do_center_crop: typing.Optional[bool] = None
crop_size: typing.Union[typing.Dict[str, int], NoneType] = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Optional[float] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
The image or batch of images to be prepared.
do_resize (bool, optional, defaults to self.do_resize) —
Whether or not to resize the input. If True, will resize the input to the size specified by size.
size (Dict[str, int], optional, defaults to self.size) —
The size to resize the input to. Only has an effect if do_resize is set to True.
resample (PILImageResampling, optional, defaults to self.resample) —
The resampling filter to use when resizing the input. Only has an effect if do_resize is set to
True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether or not to center crop the input. If True, will center crop the input to the size specified by
crop_size.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
The size to center crop the input to. Only has an effect if do_center_crop is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether or not to rescale the input. If True, will rescale the input by dividing it by
rescale_factor.
rescale_factor (float, optional, defaults to self.rescale_factor) —
The factor to rescale the input by. Only has an effect if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether or not to normalize the input. If True, will normalize the input by subtracting image_mean
and dividing by image_std.
image_mean (Union[float, List[float]], optional, defaults to self.image_mean) —
The mean to subtract from the input when normalizing. Only has an effect if do_normalize is set to
True.
image_std (Union[float, List[float]], optional, defaults to self.image_std) —
The standard deviation to divide the input by when normalizing. Only has an effect if do_normalize is
set to True.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Unset: defaults to the channel dimension format of the input image.
Prepares an image or batch of images for the model.
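For illustration, a minimal usage sketch of preprocess (not part of the original reference; it assumes the google/owlvit-base-patch32 checkpoint is reachable, and the printed shape is only the expected value for the default size):
from PIL import Image
import requests
from transformers import OwlViTImageProcessor

# Load the image processor settings shipped with the checkpoint
image_processor = OwlViTImageProcessor.from_pretrained("google/owlvit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Resize, rescale and normalize the image and return PyTorch tensors
inputs = image_processor.preprocess(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # expected: torch.Size([1, 3, 768, 768]) with the default size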
post_process_object_detection
(
outputs
threshold: float = 0.1
target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None
)
→
List[Dict]
Parameters
outputs (OwlViTObjectDetectionOutput) —
Raw outputs of the model.
threshold (float, optional) —
Score threshold to keep object detection predictions.
target_sizes (torch.Tensor or List[Tuple[int, int]], optional) —
Tensor of shape (batch_size, 2) or list of tuples (Tuple[int, int]) containing the target size
(height, width) of each image in the batch. If unset, predictions will not be resized.
Returns
List[Dict]
A list of dictionaries, each dictionary containing the scores, labels and boxes for an image
in the batch as predicted by the model.
Converts the raw output of OwlViTForObjectDetection into final bounding boxes in (top_left_x, top_left_y,
bottom_right_x, bottom_right_y) format.
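As a rough sketch (not from the original reference), assuming image_processor is an OwlViTImageProcessor instance and that outputs and image come from an earlier OwlViTForObjectDetection forward pass on a single PIL image:
import torch

# Hypothetical inputs: `outputs` from OwlViTForObjectDetection, `image` is the PIL image it was run on
target_sizes = torch.tensor([image.size[::-1]])  # (height, width) for each image in the batch
results = image_processor.post_process_object_detection(
    outputs=outputs, threshold=0.1, target_sizes=target_sizes
)
scores = results[0]["scores"]  # confidence score per kept box
labels = results[0]["labels"]  # index of the matching text query
boxes = results[0]["boxes"]    # (top_left_x, top_left_y, bottom_right_x, bottom_right_y) in pixels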
post_process_image_guided_detection
(
outputs
threshold = 0.6
nms_threshold = 0.3
target_sizes = None
)
→
List[Dict]
Parameters
outputs (OwlViTImageGuidedObjectDetectionOutput) —
Raw outputs of the model.
threshold (float, optional, defaults to 0.6) —
Minimum confidence threshold to use to filter out predicted boxes.
nms_threshold (float, optional, defaults to 0.3) —
IoU threshold for non-maximum suppression of overlapping boxes.
target_sizes (torch.Tensor, optional) —
Tensor of shape (batch_size, 2) where each entry is the (height, width) of the corresponding image in
the batch. If set, predicted normalized bounding boxes are rescaled to the target sizes. If left to
None, predictions will not be unnormalized.
Returns
List[Dict]
A list of dictionaries, each dictionary containing the scores, labels and boxes for an image
in the batch as predicted by the model. All labels are set to None as
OwlViTForObjectDetection.image_guided_detection performs one-shot object detection.
Converts the output of OwlViTForObjectDetection.image_guided_detection() into the format expected by the COCO
API.
OwlViTFeatureExtractor
class transformers.OwlViTFeatureExtractor
(
*args
**kwargs
)
__call__
(
images
**kwargs
)
Preprocess an image or a batch of images.
post_process
(
outputs
target_sizes
)
→
List[Dict]
Parameters
outputs (OwlViTObjectDetectionOutput) —
Raw outputs of the model.
target_sizes (torch.Tensor of shape (batch_size, 2)) —
Tensor containing the size (h, w) of each image of the batch. For evaluation, this must be the original
image size (before any data augmentation). For visualization, this should be the image size after data
augmentation, but before padding.
Returns
List[Dict]
A list of dictionaries, each dictionary containing the scores, labels and boxes for an image
in the batch as predicted by the model.
Converts the raw output of OwlViTForObjectDetection into final bounding boxes in (top_left_x, top_left_y,
bottom_right_x, bottom_right_y) format.
post_process_image_guided_detection
(
outputs
threshold = 0.6
nms_threshold = 0.3
target_sizes = None
)
→
List[Dict]
Parameters
outputs (OwlViTImageGuidedObjectDetectionOutput) —
Raw outputs of the model.
threshold (float, optional, defaults to 0.6) —
Minimum confidence threshold to use to filter out predicted boxes.
nms_threshold (float, optional, defaults to 0.3) —
IoU threshold for non-maximum suppression of overlapping boxes.
target_sizes (torch.Tensor, optional) —
Tensor of shape (batch_size, 2) where each entry is the (height, width) of the corresponding image in
the batch. If set, predicted normalized bounding boxes are rescaled to the target sizes. If left to
None, predictions will not be unnormalized.
Returns
List[Dict]
A list of dictionaries, each dictionary containing the scores, labels and boxes for an image
in the batch as predicted by the model. All labels are set to None as
OwlViTForObjectDetection.image_guided_detection performs one-shot object detection.
Converts the output of OwlViTForObjectDetection.image_guided_detection() into the format expected by the COCO
API.
OwlViTProcessor
class transformers.OwlViTProcessor
(
image_processor = None
tokenizer = None
**kwargs
)
Parameters
image_processor (OwlViTImageProcessor) —
The image processor is a required input.
tokenizer (CLIPTokenizer or CLIPTokenizerFast) —
The tokenizer is a required input.
Constructs an OWL-ViT processor which wraps OwlViTImageProcessor and CLIPTokenizer/CLIPTokenizerFast
into a single processor that inherits both the image processor and tokenizer functionalities. See
__call__() and decode() for more information.
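A minimal usage sketch (not part of the original reference; it assumes the google/owlvit-base-patch32 checkpoint is available, and the key list printed at the end is the expected output):
from PIL import Image
import requests
from transformers import OwlViTProcessor

# The processor bundles the image processor and the CLIP tokenizer
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Tokenize the text queries and preprocess the image in a single call
inputs = processor(text=[["a photo of a cat", "a photo of a dog"]], images=image, return_tensors="pt")
print(sorted(inputs.keys()))  # expected: ['attention_mask', 'input_ids', 'pixel_values']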
batch_decode
(
*args
**kwargs
)
This method forwards all its arguments to CLIPTokenizerFast’s batch_decode(). Please
refer to the docstring of this method for more information.
decode
(
*args
**kwargs
)
This method forwards all its arguments to CLIPTokenizerFast’s decode(). Please refer to
the docstring of this method for more information.
post_process
(
*args
**kwargs
)
This method forwards all its arguments to OwlViTImageProcessor.post_process(). Please refer to the docstring
of this method for more information.
post_process_image_guided_detection
(
*args
**kwargs
)
This method forwards all its arguments to OwlViTImageProcessor.post_process_image_guided_detection().
Please refer to the docstring of this method for more information.
post_process_object_detection
(
*args
**kwargs
)
This method forwards all its arguments to OwlViTImageProcessor.post_process_object_detection(). Please refer
to the docstring of this method for more information.
OwlViTModel
class transformers.OwlViTModel
(
config: OwlViTConfig
)
Parameters
config (OwlViTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
This model is a PyTorch torch.nn.Module (https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
return_loss: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_base_image_embeds: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.owlvit.modeling_owlvit.OwlViTOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input
IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values.
return_loss (bool, optional) —
Whether or not to return the contrastive loss.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.owlvit.modeling_owlvit.OwlViTOutput or tuple(torch.FloatTensor)
A transformers.models.owlvit.modeling_owlvit.OwlViTOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.owlvit.configuration_owlvit.OwlViTConfig'>) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity.
logits_per_image (torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text
similarity scores.
logits_per_text (torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image
similarity scores.
text_embeds (torch.FloatTensor of shape (batch_size * num_max_text_queries, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of OwlViTTextModel.
image_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of
OwlViTVisionModel.
text_model_output (BaseModelOutputWithPooling) — The output of the OwlViTTextModel.
vision_model_output (BaseModelOutputWithPooling) — The output of the OwlViTVisionModel.
The OwlViTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, OwlViTModel
model = OwlViTModel.from_pretrained("google/owlvit-base-patch32")
processor = AutoProcessor.from_pretrained("google/owlvit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=[["a photo of a cat", "a photo of a dog"]], images=image, return_tensors="pt")
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
get_text_features
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
text_features (torch.FloatTensor of shape (batch_size, output_dim))
Parameters
input_ids (torch.LongTensor of shape (batch_size * num_max_text_queries, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input
IDs?
attention_mask (torch.Tensor of shape (batch_size, num_max_text_queries, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
text_features (torch.FloatTensor of shape (batch_size, output_dim))
The text embeddings obtained by
applying the projection layer to the pooled output of OwlViTTextModel.
The OwlViTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoProcessor, OwlViTModel
model = OwlViTModel.from_pretrained("google/owlvit-base-patch32")
processor = AutoProcessor.from_pretrained("google/owlvit-base-patch32")
inputs = processor(
... text=[["a photo of a cat", "a photo of a dog"], ["photo of a astranaut"]], return_tensors="pt"
... )
text_features = model.get_text_features(**inputs)
get_image_features
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
image_features (torch.FloatTensor of shape (batch_size, output_dim))
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
image_features (torch.FloatTensor of shape (batch_size, output_dim))
The image embeddings obtained by
applying the projection layer to the pooled output of OwlViTVisionModel.
The OwlViTModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, OwlViTModel
model = OwlViTModel.from_pretrained("google/owlvit-base-patch32")
processor = AutoProcessor.from_pretrained("google/owlvit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
image_features = model.get_image_features(**inputs)
OwlViTTextModel
class transformers.OwlViTTextModel
(
config: OwlViTTextConfig
)
forward
(
input_ids: Tensor
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size * num_max_text_queries, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input
IDs?
attention_mask (torch.Tensor of shape (batch_size, num_max_text_queries, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.owlvit.configuration_owlvit.OwlViTTextConfig'>) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The OwlViTTextModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoProcessor, OwlViTTextModel
model = OwlViTTextModel.from_pretrained("google/owlvit-base-patch32")
processor = AutoProcessor.from_pretrained("google/owlvit-base-patch32")
inputs = processor(
... text=[["a photo of a cat", "a photo of a dog"], ["photo of a astranaut"]], return_tensors="pt"
... )
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output # pooled (EOS token) states
OwlViTVisionModel
class transformers.OwlViTVisionModel
(
config: OwlViTVisionConfig
)
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.owlvit.configuration_owlvit.OwlViTVisionConfig'>) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The OwlViTVisionModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, OwlViTVisionModel
model = OwlViTVisionModel.from_pretrained("google/owlvit-base-patch32")
processor = AutoProcessor.from_pretrained("google/owlvit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output # pooled CLS states
OwlViTForObjectDetection
class transformers.OwlViTForObjectDetection
(
config: OwlViTConfig
)
forward
(
input_ids: Tensor
pixel_values: FloatTensor
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.owlvit.modeling_owlvit.OwlViTObjectDetectionOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values.
input_ids (torch.LongTensor of shape (batch_size * num_max_text_queries, sequence_length), optional) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input
IDs?.
attention_mask (torch.Tensor of shape (batch_size, num_max_text_queries, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_hidden_states (bool, optional) —
Whether or not to return the last hidden state. See text_model_last_hidden_state and
vision_model_last_hidden_state under returned tensors for more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.owlvit.modeling_owlvit.OwlViTObjectDetectionOutput or tuple(torch.FloatTensor)
A transformers.models.owlvit.modeling_owlvit.OwlViTObjectDetectionOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.owlvit.configuration_owlvit.OwlViTConfig'>) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided) — Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a
bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized
scale-invariant IoU loss.
loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging.
logits (torch.FloatTensor of shape (batch_size, num_patches, num_queries)) — Classification logits (including no-object) for all queries.
pred_boxes (torch.FloatTensor of shape (batch_size, num_patches, 4)) — Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These
values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding
possible padding). You can use post_process_object_detection() to retrieve the
unnormalized bounding boxes.
text_embeds (torch.FloatTensor of shape (batch_size, num_max_text_queries, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of OwlViTTextModel.
image_embeds (torch.FloatTensor of shape (batch_size, patch_size, patch_size, output_dim)) — Pooled output of OwlViTVisionModel. OWL-ViT represents images as a set of image patches and computes
image embeddings for each patch.
class_embeds (torch.FloatTensor of shape (batch_size, num_patches, hidden_size)) — Class embeddings of all image patches. OWL-ViT represents images as a set of image patches where the total
number of patches is (image_size / patch_size)**2.
text_model_output (BaseModelOutputWithPooling) — The output of the OwlViTTextModel.
vision_model_output (BaseModelOutputWithPooling) — The output of the OwlViTVisionModel.
The OwlViTForObjectDetection forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
import requests
from PIL import Image
import torch
from transformers import AutoProcessor, OwlViTForObjectDetection
processor = AutoProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
outputs = model(**inputs)
# Target image sizes (height, width) to rescale box predictions [batch_size, 2]
target_sizes = torch.Tensor([image.size[::-1]])
# Convert outputs (bounding boxes and class logits) to final bounding boxes and scores
results = processor.post_process_object_detection(
... outputs=outputs, threshold=0.1, target_sizes=target_sizes
... )
i = 0 # Retrieve predictions for the first image for the corresponding text queries
text = texts[i]
boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"]
for box, score, label in zip(boxes, scores, labels):
... box = [round(i, 2) for i in box.tolist()]
... print(f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}")
Detected a photo of a cat with confidence 0.707 at location [324.97, 20.44, 640.58, 373.29]
Detected a photo of a cat with confidence 0.717 at location [1.46, 55.26, 315.55, 472.17]
image_guided_detection
(
pixel_values: FloatTensor
query_pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.owlvit.modeling_owlvit.OwlViTImageGuidedObjectDetectionOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values.
query_pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values of query image(s) to be detected. Pass in one query image per target image.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.owlvit.modeling_owlvit.OwlViTImageGuidedObjectDetectionOutput or tuple(torch.FloatTensor)
A transformers.models.owlvit.modeling_owlvit.OwlViTImageGuidedObjectDetectionOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.owlvit.configuration_owlvit.OwlViTConfig'>) and inputs.
logits (torch.FloatTensor of shape (batch_size, num_patches, num_queries)) — Classification logits (including no-object) for all queries.
target_pred_boxes (torch.FloatTensor of shape (batch_size, num_patches, 4)) — Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These
values are normalized in [0, 1], relative to the size of each individual target image in the batch
(disregarding possible padding). You can use post_process_object_detection() to
retrieve the unnormalized bounding boxes.
query_pred_boxes (torch.FloatTensor of shape (batch_size, num_patches, 4)) — Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These
values are normalized in [0, 1], relative to the size of each individual query image in the batch
(disregarding possible padding). You can use post_process_object_detection() to
retrieve the unnormalized bounding boxes.
image_embeds (torch.FloatTensor of shape (batch_size, patch_size, patch_size, output_dim)) — Pooled output of OwlViTVisionModel. OWL-ViT represents images as a set of image patches and computes
image embeddings for each patch.
query_image_embeds (torch.FloatTensor of shape (batch_size, patch_size, patch_size, output_dim)) — Pooled output of OwlViTVisionModel. OWL-ViT represents images as a set of image patches and computes
image embeddings for each patch.
class_embeds (torch.FloatTensor of shape (batch_size, num_patches, hidden_size)) — Class embeddings of all image patches. OWL-ViT represents images as a set of image patches where the total
number of patches is (image_size / patch_size)**2.
text_model_output (BaseModelOutputWithPooling) — The output of the OwlViTTextModel.
vision_model_output (BaseModelOutputWithPooling) — The output of the OwlViTVisionModel.
The OwlViTForObjectDetection forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
import requests
from PIL import Image
import torch
from transformers import AutoProcessor, OwlViTForObjectDetection
processor = AutoProcessor.from_pretrained("google/owlvit-base-patch16")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch16")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
query_url = "http://images.cocodataset.org/val2017/000000001675.jpg"
query_image = Image.open(requests.get(query_url, stream=True).raw)
inputs = processor(images=image, query_images=query_image, return_tensors="pt")
with torch.no_grad():
... outputs = model.image_guided_detection(**inputs)
# Target image sizes (height, width) to rescale box predictions [batch_size, 2]
target_sizes = torch.Tensor([image.size[::-1]])
# Convert outputs (bounding boxes and class logits) to COCO API
results = processor.post_process_image_guided_detection(
... outputs=outputs, threshold=0.6, nms_threshold=0.3, target_sizes=target_sizes
... )
i = 0 # Retrieve predictions for the first image
boxes, scores = results[i]["boxes"], results[i]["scores"]
for box, score in zip(boxes, scores):
... box = [round(i, 2) for i in box.tolist()]
... print(f"Detected similar object with confidence {round(score.item(), 3)} at location {box}")
Detected similar object with confidence 0.856 at location [10.94, 50.4, 315.8, 471.39]
Detected similar object with confidence 1.0 at location [334.84, 25.33, 636.16, 374.71]
BigBird
Overview
The BigBird model was proposed in Big Bird: Transformers for Longer Sequences by
Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon,
Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and others. BigBird is a sparse-attention
based transformer which extends Transformer-based models, such as BERT, to much longer sequences. In addition to sparse
attention, BigBird also applies global attention as well as random attention to the input sequence. Theoretically, it
has been shown that applying sparse, global, and random attention approximates full attention, while being
computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context,
BigBird has shown improved performance on various long document NLP tasks, such as question answering and
summarization, compared to BERT or RoBERTa.
The abstract from the paper is the following:
Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP.
Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence
length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that
reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and
is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our
theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire
sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to
8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context,
BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also
propose novel applications to genomics data.
Tips:
For an in-detail explanation on how BigBird’s attention works, see this blog post.
BigBird comes with 2 implementations: original_full & block_sparse. For sequence lengths shorter than 1024, using
original_full is advised as there is no benefit in using block_sparse attention (see the configuration sketch after these tips).
The code currently uses a window size of 3 blocks and 2 global blocks.
Sequence length must be divisible by the block size.
The current implementation supports only ITC.
The current implementation doesn't support num_random_blocks = 0.
BigBird is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than
the left.
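A minimal configuration sketch of the tips above (not part of the original reference; the models are randomly initialized and shown only to illustrate the attention_type, block_size and num_random_blocks settings):
from transformers import BigBirdConfig, BigBirdModel

# Block-sparse attention: the input sequence length must be a multiple of block_size,
# and num_random_blocks must be greater than 0
sparse_config = BigBirdConfig(attention_type="block_sparse", block_size=64, num_random_blocks=3)
sparse_model = BigBirdModel(sparse_config)

# For sequences shorter than roughly 1024 tokens, full attention is usually the better choice
full_config = BigBirdConfig(attention_type="original_full")
full_model = BigBirdModel(full_config)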
This model was contributed by vasudevgupta. The original code can be found
here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
BigBirdConfig
class transformers.BigBirdConfig
(
vocab_size = 50358
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu_new'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 4096
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
use_cache = True
pad_token_id = 0
bos_token_id = 1
eos_token_id = 2
sep_token_id = 66
attention_type = 'block_sparse'
use_bias = True
rescale_embeddings = False
block_size = 64
num_random_blocks = 3
classifier_dropout = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50358) —
Vocabulary size of the BigBird model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling BigBirdModel.
hidden_size (int, optional, defaults to 768) —
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu_new") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 4096) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 1024 or 2048 or 4096).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling BigBirdModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
attention_type (str, optional, defaults to "block_sparse") —
Whether to use block sparse attention (with O(n) complexity) as introduced in the paper or the original attention
layer (with O(n^2) complexity). Possible values are "original_full" and "block_sparse".
use_bias (bool, optional, defaults to True) —
Whether to use bias in query, key, value.
rescale_embeddings (bool, optional, defaults to False) —
Whether to rescale embeddings with (hidden_size ** 0.5).
block_size (int, optional, defaults to 64) —
Size of each block. Useful only when attention_type == "block_sparse".
num_random_blocks (int, optional, defaults to 3) —
Each query will attend to this many random blocks. Useful only when attention_type == "block_sparse".
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
This is the configuration class to store the configuration of a BigBirdModel. It is used to instantiate a
BigBird model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the BigBird
google/bigbird-roberta-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import BigBirdConfig, BigBirdModel
# Initializing a BigBird google/bigbird-roberta-base style configuration
configuration = BigBirdConfig()
# Initializing a model (with random weights) from the google/bigbird-roberta-base style configuration
model = BigBirdModel(configuration)
# Accessing the model configuration
configuration = model.config
BigBirdTokenizer
class transformers.BigBirdTokenizer
(
vocab_file
unk_token = '<unk>'
bos_token = '<s>'
eos_token = '</s>'
pad_token = '<pad>'
sep_token = '[SEP]'
mask_token = '[MASK]'
cls_token = '[CLS]'
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from the all hypothesis (lattice)
using forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Construct a BigBird tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
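A minimal usage sketch (not part of the original reference; it assumes the google/bigbird-roberta-base checkpoint and its SentencePiece vocabulary are reachable):
from transformers import BigBirdTokenizer

tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")

# Encode a sentence; special tokens ([CLS]/[SEP]) are added automatically
encoding = tokenizer("BigBird handles long documents.", return_tensors="pt")
print(encoding["input_ids"].shape)  # (1, sequence_length)
print(tokenizer.decode(encoding["input_ids"][0]))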
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens (see the sketch below). A Big Bird sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
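A small sketch of that format, assuming tokenizer is the BigBirdTokenizer constructed above and ids_a/ids_b are illustrative token ID lists:
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("first sequence"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("second sequence"))

# Single sequence: [CLS] X [SEP]
single = tokenizer.build_inputs_with_special_tokens(ids_a)
# Pair of sequences: [CLS] A [SEP] B [SEP]
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
print(tokenizer.convert_ids_to_tokens(pair))  # starts with '[CLS]' and ends with '[SEP]'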
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence
pair mask has the following format (see the sketch below):
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
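A small sketch of the mask, reusing the hypothetical ids_a/ids_b from the example above:
# 0s cover [CLS] A [SEP]; 1s cover B [SEP]
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
print(token_type_ids)

# With a single sequence, only the 0s portion is returned
print(tokenizer.create_token_type_ids_from_sequences(ids_a))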
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
BigBirdTokenizerFast
class transformers.BigBirdTokenizerFast
(
vocab_file = None
tokenizer_file = None
unk_token = '<unk>'
bos_token = '<s>'
eos_token = '</s>'
pad_token = '<pad>'
sep_token = '[SEP]'
mask_token = '[MASK]'
cls_token = '[CLS]'
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token. .. note:: When building a sequence using special tokens, this is not the token
that is used for the end of sequence. The token used is the sep_token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
Construct a “fast” BigBird tokenizer (backed by HuggingFace’s tokenizers library). Based on
Unigram. This
tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
list of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A BigBird sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of ids.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Creates a mask from the two sequences passed to be used in a sequence-pair classification task. An ALBERT
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
if token_ids_1 is None, only returns the first portion of the mask (0s).
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of ids.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Set to True if the token list is already formatted with special tokens for the model
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
BigBird specific outputs
class transformers.models.big_bird.modeling_big_bird.BigBirdForPreTrainingOutput
(
loss: typing.Optional[torch.FloatTensor] = None
prediction_logits: FloatTensor = None
seq_relationship_logits: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) —
Total loss as the sum of the masked language modeling loss and the next sequence prediction
(classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) —
Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Output type of BigBirdForPreTraining.
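A minimal sketch of inspecting these fields after a forward pass, explicitly requesting the optional hidden states and attentions (the checkpoint name mirrors the examples below):
import torch
from transformers import AutoTokenizer, BigBirdForPreTraining
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdForPreTraining.from_pretrained("google/bigbird-roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True, output_attentions=True)
print(outputs.prediction_logits.shape)  # (batch_size, sequence_length, config.vocab_size)
print(outputs.seq_relationship_logits.shape)  # (batch_size, 2)
print(len(outputs.hidden_states), len(outputs.attentions))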
BigBirdModel
class transformers.BigBirdModel
(
config
add_pooling_layer = True
)
Parameters
config (BigBirdConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare BigBird Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention is
all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder the model needs to be initialized with the is_decoder argument of the configuration set
to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and
add_cross_attention set to True; encoder_hidden_states is then expected as an input to the forward pass.
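As a rough sketch of that setup (the checkpoint name is only an example, and the added cross-attention weights would still need to be trained):
from transformers import BigBirdConfig, BigBirdModel
config = BigBirdConfig.from_pretrained("google/bigbird-roberta-base")
config.is_decoder = True  # causal self-attention
config.add_cross_attention = True  # cross-attention layers for Seq2Seq use
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", config=config)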
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The BigBirdModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BigBirdModel
import torch
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
BigBirdForPreTraining
class transformers.BigBirdForPreTraining
(
config
)
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.FloatTensor] = None
next_sentence_label: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.big_bird.modeling_big_bird.BigBirdForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
next_sentence_label (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the next sequence prediction (classification) loss. If specified, the NSP loss is
added to the masked-LM loss. The input should be a sequence pair (see the input_ids docstring). Indices should be in
[0, 1]:
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
kwargs (Dict[str, any], optional, defaults to {}) —
Used to hide legacy arguments that have been deprecated.
Returns
transformers.models.big_bird.modeling_big_bird.BigBirdForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.big_bird.modeling_big_bird.BigBirdForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdConfig) and inputs.
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction
(classification) loss.
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BigBirdForPreTraining forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BigBirdForPreTraining
import torch
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdForPreTraining.from_pretrained("google/bigbird-roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.prediction_logits
seq_relationship_logits = outputs.seq_relationship_logits
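Continuing the snippet above, a rough sketch (not an official pretraining recipe) of how the documented labels and next_sentence_label arguments feed into the combined loss:
# Using the unmasked ids as masked-LM targets purely for illustration.
labels = inputs["input_ids"].clone()
next_sentence_label = torch.tensor([0])  # 0 = sequence B is a continuation of sequence A
outputs = model(**inputs, labels=labels, next_sentence_label=next_sentence_label)
loss = outputs.loss  # masked-LM loss + next-sentence prediction loss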
BigBirdForCausalLM
class transformers.BigBirdForCausalLM
(
config
)
Parameters
config (BigBirdConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BigBird Model with a language modeling head on top for CLM fine-tuning.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are
ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross-attention weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The BigBirdForCausalLM forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, BigBirdForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdForCausalLM.from_pretrained("google/bigbird-roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
logits = outputs.logits
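The snippet above runs the checkpoint as-is; for standalone causal-LM use the configuration should set is_decoder=True (see BigBirdModel above). A rough sketch of incremental decoding with the documented past_key_values/use_cache arguments follows; the checkpoint and the attention_type override are assumptions rather than a tuned setup:
import torch
from transformers import AutoTokenizer, BigBirdConfig, BigBirdForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
# Assumption: full attention keeps the cached-decoding sketch simple for short inputs.
config = BigBirdConfig.from_pretrained(
    "google/bigbird-roberta-base", is_decoder=True, attention_type="original_full"
)
model = BigBirdForCausalLM.from_pretrained("google/bigbird-roberta-base", config=config)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, use_cache=True)
# Reuse the cached key/value states and feed only the newly predicted token.
next_token = out.logits[:, -1:].argmax(-1)
with torch.no_grad():
    out = model(input_ids=next_token, past_key_values=out.past_key_values, use_cache=True)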
BigBirdForMaskedLM
class transformers.BigBirdForMaskedLM
(
config
)
Parameters
config (BigBirdConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BigBird Model with a language modeling head on top.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BigBirdForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, BigBirdForMaskedLM
from datasets import load_dataset
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdForMaskedLM.from_pretrained("google/bigbird-roberta-base")
squad_ds = load_dataset("squad_v2", split="train")
# select random long article
LONG_ARTICLE_TARGET = squad_ds[81514]["context"]
# select random sentence
LONG_ARTICLE_TARGET[332:398]
'the highest values are very close to the theoretical maximum value'
# add mask_token
LONG_ARTICLE_TO_MASK = LONG_ARTICLE_TARGET.replace("maximum", "[MASK]")
inputs = tokenizer(LONG_ARTICLE_TO_MASK, return_tensors="pt")
# long article input
list(inputs["input_ids"].shape)
[1, 919]
with torch.no_grad():
... logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
tokenizer.decode(predicted_token_id)
'maximum'
labels = tokenizer(LONG_ARTICLE_TARGET, return_tensors="pt")["input_ids"]
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(outputs.loss.item(), 2)
1.99
BigBirdForSequenceClassification
class transformers.BigBirdForSequenceClassification
(
config
)
Parameters
config (BigBirdConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BigBird Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BigBirdForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, BigBirdForSequenceClassification
from datasets import load_dataset
tokenizer = AutoTokenizer.from_pretrained("l-yohai/bigbird-roberta-base-mnli")
model = BigBirdForSequenceClassification.from_pretrained("l-yohai/bigbird-roberta-base-mnli")
squad_ds = load_dataset("squad_v2", split="train")
LONG_ARTICLE = squad_ds[81514]["context"]
inputs = tokenizer(LONG_ARTICLE, return_tensors="pt")
# long input article
list(inputs["input_ids"].shape)
[1, 919]
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
'LABEL_0'
num_labels = len(model.config.id2label)
model = BigBirdForSequenceClassification.from_pretrained(
... "l-yohai/bigbird-roberta-base-mnli", num_labels=num_labels
... )
labels = torch.tensor(1)
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
1.13
BigBirdForMultipleChoice
class transformers.BigBirdForMultipleChoice
(
config
)
Parameters
config (BigBirdConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BigBird Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BigBirdForMultipleChoice forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BigBirdForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdForMultipleChoice.from_pretrained("google/bigbird-roberta-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
BigBirdForTokenClassification
class transformers.BigBirdForTokenClassification
(
config
)
Parameters
config (BigBirdConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BigBird Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BigBirdForTokenClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BigBirdForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdForTokenClassification.from_pretrained("google/bigbird-roberta-base")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
BigBirdForQuestionAnswering
class transformers.BigBirdForQuestionAnswering
(
config
add_pooling_layer = False
)
Parameters
config (BigBirdConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BigBird Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.FloatTensor] = None
question_lengths = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.big_bird.modeling_big_bird.BigBirdForQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.models.big_bird.modeling_big_bird.BigBirdForQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.models.big_bird.modeling_big_bird.BigBirdForQuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
pooler_output (torch.FloatTensor of shape (batch_size, 1)) — Pooler output from BigBirdModel.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The BigBirdForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, BigBirdForQuestionAnswering
from datasets import load_dataset
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-roberta-base")
squad_ds = load_dataset("squad_v2", split="train")
# select random article and question
LONG_ARTICLE = squad_ds[81514]["context"]
QUESTION = squad_ds[81514]["question"]
QUESTION
'During daytime how high can the temperatures reach?'
inputs = tokenizer(QUESTION, LONG_ARTICLE, return_tensors="pt")
# long article and question input
list(inputs["input_ids"].shape)
[1, 929]
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_token_ids = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
predict_answer_token = tokenizer.decode(predict_answer_token_ids)
target_start_index, target_end_index = torch.tensor([130]), torch.tensor([132])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
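For reference, the labelled target span above can also be decoded back to text and compared with the predicted answer; this is only an illustrative sketch reusing the tokenizer, inputs, and target indices from the example.
# illustrative only: decode the labelled target span used to compute the loss above
target_answer = tokenizer.decode(inputs.input_ids[0, int(target_start_index) : int(target_end_index) + 1])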
FlaxBigBirdModel
class transformers.FlaxBigBirdModel
(
config: BigBirdConfig
input_shape: typing.Optional[tuple] = None
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (BigBirdConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
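As a minimal sketch (not part of the reference documentation), loading the checkpoint with a bfloat16 computation dtype and optionally casting the parameters with to_bf16() could look as follows; the checkpoint name is the same one used in the examples below.
import jax.numpy as jnp
from transformers import FlaxBigBirdModel

# run the computation in bfloat16 (e.g. on TPUs); the parameters stay in float32
model = FlaxBigBirdModel.from_pretrained("google/bigbird-roberta-base", dtype=jnp.bfloat16)
# optionally cast the parameters themselves to bfloat16 as well
model.params = model.to_bf16(model.params)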
The bare BigBird Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: typing.Optional[PRNGKey] = None
indices_rng: typing.Optional[PRNGKey] = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxBigBirdPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBigBirdModel
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = FlaxBigBirdModel.from_pretrained("google/bigbird-roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
FlaxBigBirdForPreTraining
class transformers.FlaxBigBirdForPreTraining
(
config: BigBirdConfig
input_shape: typing.Optional[tuple] = None
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (BigBirdConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
BigBird Model with two heads on top as done during the pretraining: a masked language modeling head and a next sentence prediction (classification) head.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: typing.Optional[PRNGKey] = None
indices_rng: typing.Optional[PRNGKey] = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.models.big_bird.modeling_flax_big_bird.FlaxBigBirdForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.big_bird.modeling_flax_big_bird.FlaxBigBirdForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.big_bird.modeling_flax_big_bird.FlaxBigBirdForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdConfig) and inputs.
prediction_logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
seq_relationship_logits (jnp.ndarray of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation
before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxBigBirdPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBigBirdForPreTraining
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = FlaxBigBirdForPreTraining.from_pretrained("google/bigbird-roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
prediction_logits = outputs.prediction_logits
seq_relationship_logits = outputs.seq_relationship_logits
FlaxBigBirdForCausalLM
class transformers.FlaxBigBirdForCausalLM
(
config: BigBirdConfig
input_shape: typing.Optional[tuple] = None
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (BigBirdConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
BigBird Model with a language modeling head on top (a linear layer on top of the hidden-states output), e.g. for
autoregressive tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: typing.Optional[PRNGKey] = None
indices_rng: typing.Optional[PRNGKey] = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value
states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting.
Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The FlaxBigBirdPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBigBirdForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = FlaxBigBirdForCausalLM.from_pretrained("google/bigbird-roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
# retrieve logits for the next token
next_token_logits = outputs.logits[:, -1]
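As an illustrative continuation of the example (a sketch only; the base checkpoint is not fine-tuned for generation, so the prediction is not expected to be meaningful), the most likely next token can be picked greedily and decoded:
import jax.numpy as jnp

# greedy choice of the next token, decoded with the tokenizer loaded above
next_token_id = int(jnp.argmax(next_token_logits, axis=-1)[0])
print(tokenizer.decode([next_token_id]))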
FlaxBigBirdForMaskedLM
class transformers.FlaxBigBirdForMaskedLM
(
config: BigBirdConfig
input_shape: typing.Optional[tuple] = None
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (BigBirdConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
BigBird Model with a language modeling head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: typing.Optional[PRNGKey] = None
indices_rng: typing.Optional[PRNGKey] = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxBigBirdPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBigBirdForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = FlaxBigBirdForMaskedLM.from_pretrained("google/bigbird-roberta-base")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
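As an illustrative follow-up to the example (a sketch, not part of the reference documentation), the highest-scoring token at the [MASK] position can be retrieved and decoded:
import jax.numpy as jnp

# find the [MASK] position and decode the model's top prediction for it
mask_index = int(jnp.argmax(inputs["input_ids"][0] == tokenizer.mask_token_id))
predicted_id = int(jnp.argmax(logits[0, mask_index]))
print(tokenizer.decode([predicted_id]))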
FlaxBigBirdForSequenceClassification
class transformers.FlaxBigBirdForSequenceClassification
(
config: BigBirdConfig
input_shape: typing.Optional[tuple] = None
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (BigBirdConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
BigBird Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: typing.Optional[PRNGKey] = None
indices_rng: typing.Optional[PRNGKey] = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxBigBirdPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBigBirdForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = FlaxBigBirdForSequenceClassification.from_pretrained("google/bigbird-roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
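As a sketch continuing the example (the base checkpoint has no fine-tuned classification head, so the labels are only the generic placeholders from the config), the highest logit can be mapped to a label name:
# map the highest logit to a label name via the config's id2label mapping
predicted_class_id = int(logits.argmax(axis=-1)[0])
print(model.config.id2label[predicted_class_id])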
FlaxBigBirdForMultipleChoice
class transformers.FlaxBigBirdForMultipleChoice
(
config: BigBirdConfig
input_shape: typing.Optional[tuple] = None
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (BigBirdConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
BigBird Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: typing.Optional[PRNGKey] = None
indices_rng: typing.Optional[PRNGKey] = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxBigBirdPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBigBirdForMultipleChoice
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = FlaxBigBirdForMultipleChoice.from_pretrained("google/bigbird-roberta-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="jax", padding=True)
outputs = model(**{k: v[None, :] for k, v in encoding.items()})
logits = outputs.logits
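Continuing the example with an illustrative sketch, the index of the highest-scoring choice identifies the selected answer:
# index of the highest-scoring choice: 0 -> choice0, 1 -> choice1
predicted_choice = int(logits.argmax(axis=-1)[0])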
FlaxBigBirdForTokenClassification
class transformers.FlaxBigBirdForTokenClassification
(
config: BigBirdConfig
input_shape: typing.Optional[tuple] = None
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (BigBirdConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
BigBird Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
params: dict = None
dropout_rng: typing.Optional[PRNGKey] = None
indices_rng: typing.Optional[PRNGKey] = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
past_key_values: dict = None
)
→
transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxBigBirdPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBigBirdForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = FlaxBigBirdForTokenClassification.from_pretrained("google/bigbird-roberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
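As an illustrative sketch (the base checkpoint is not fine-tuned for NER, so the labels are generic placeholders), the per-token predictions can be mapped to label names:
# argmax over the label dimension, then map ids to names via the config
predicted_ids = logits.argmax(axis=-1)[0]
predicted_labels = [model.config.id2label[int(i)] for i in predicted_ids]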
FlaxBigBirdForQuestionAnswering
class transformers.FlaxBigBirdForQuestionAnswering
(
config: BigBirdConfig
input_shape: typing.Optional[tuple] = None
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
Parameters
config (BigBirdConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
BigBird Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids
attention_mask = None
token_type_ids = None
position_ids = None
head_mask = None
question_lengths = None
params: dict = None
dropout_rng: typing.Optional[PRNGKey] = None
indices_rng: typing.Optional[PRNGKey] = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.big_bird.modeling_flax_big_bird.FlaxBigBirdForQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.big_bird.modeling_flax_big_bird.FlaxBigBirdForQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.models.big_bird.modeling_flax_big_bird.FlaxBigBirdForQuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BigBirdConfig) and inputs.
start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
pooled_output (jnp.ndarray of shape (batch_size, hidden_size)) — pooled_output returned by FlaxBigBirdModel.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxBigBirdForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBigBirdForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = FlaxBigBirdForQuestionAnswering.from_pretrained("google/bigbird-roberta-base")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="jax")
outputs = model(**inputs)
start_scores = outputs.start_logits
end_scores = outputs.end_logits
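As an illustrative sketch continuing the example (the base checkpoint is not fine-tuned on SQuAD, so the span is not expected to be meaningful), the most likely answer span can be decoded as follows:
# take the argmax start/end positions and decode the corresponding tokens
answer_start = int(start_scores.argmax(axis=-1)[0])
answer_end = int(end_scores.argmax(axis=-1)[0])
answer_ids = inputs["input_ids"][0, answer_start : answer_end + 1]
print(tokenizer.decode(answer_ids.tolist()))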
LayoutLM
Overview
The LayoutLM model was proposed in the paper LayoutLM: Pre-training of Text and Layout for Document Image
Understanding by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and
Ming Zhou. It’s a simple but effective pretraining method of text and layout for document image understanding and
information extraction tasks, such as form understanding and receipt understanding. It obtains state-of-the-art results
on several downstream tasks:
form understanding: the FUNSD dataset (a collection of 199 annotated
forms comprising more than 30,000 words).
receipt understanding: the SROIE dataset (a collection of 626 receipts for
training and 347 receipts for testing).
document image classification: the RVL-CDIP dataset (a collection of
400,000 images belonging to one of 16 classes).
The abstract from the paper is the following:
Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the
widespread use of pretraining models for NLP applications, they almost exclusively focus on text-level manipulation,
while neglecting layout and style information that is vital for document image understanding. In this paper, we propose
the LayoutLM to jointly model interactions between text and layout information across scanned document images, which is
beneficial for a great number of real-world document image understanding tasks such as information extraction from
scanned documents. Furthermore, we also leverage image features to incorporate words’ visual information into LayoutLM.
To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for
document-level pretraining. It achieves new state-of-the-art results in several downstream tasks, including form
understanding (from 70.72 to 79.27), receipt understanding (from 94.02 to 95.24) and document image classification
(from 93.07 to 94.42).
Tips:
In addition to input_ids, forward() also expects the input bbox, which are
the bounding boxes (i.e. 2D-positions) of the input tokens. These can be obtained using an external OCR engine such
as Google’s Tesseract (there’s a Python wrapper available). Each bounding box should be in (x0, y0, x1, y1) format, where
(x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1, y1) represents the
position of the lower right corner. Note that one first needs to normalize the bounding boxes to be on a 0-1000
scale. To normalize, you can use the following function:
def normalize_bbox(bbox, width, height):
    # scale an (x0, y0, x1, y1) box from pixel coordinates to the 0-1000 range
    return [
        int(1000 * (bbox[0] / width)),
        int(1000 * (bbox[1] / height)),
        int(1000 * (bbox[2] / width)),
        int(1000 * (bbox[3] / height)),
    ]
Here, width and height correspond to the width and height of the original document in which the token
occurs. Those can be obtained using the Python Image Library (PIL) library for example, as follows:
from PIL import Image
# Document can be a png, jpg, etc. PDFs must be converted to images.
image = Image.open(name_of_your_document).convert("RGB")
width, height = image.size
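For illustration, given a hypothetical list of word boxes from an OCR engine (the pixel coordinates below are made up), the boxes can then be normalized with the function defined above:
# hypothetical OCR word boxes in pixel coordinates (values made up for illustration)
word_boxes = [(82, 41, 166, 72), (170, 41, 225, 72)]
normalized_boxes = [normalize_bbox(box, width, height) for box in word_boxes]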
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLM. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Document Question Answering
A blog post on fine-tuning
LayoutLM for document-understanding using Keras & Hugging Face
Transformers.
A blog post on how to fine-tune LayoutLM for document-understanding using only Hugging Face Transformers.
A notebook on how to fine-tune LayoutLM on the FUNSD dataset with image embeddings.
See also: Document question answering task guide
Text Classification
A notebook on how to fine-tune LayoutLM for sequence classification on the RVL-CDIP dataset.
Text classification task guide
Token Classification
A notebook on how to fine-tune LayoutLM for token classification on the FUNSD dataset.
Token classification task guide
Other resources
Masked language modeling task guide
🚀 Deploy
A blog post on how to Deploy LayoutLM with Hugging Face Inference Endpoints.
LayoutLMConfig
class transformers.LayoutLMConfig
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 0
position_embedding_type = 'absolute'
use_cache = True
max_2d_position_embeddings = 1024
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the LayoutLM model. Defines the number of different tokens that can be represented by the
input_ids passed to the forward method of LayoutLMModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed into LayoutLMModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
pad_token_id (int, optional, defaults to 0) —
The value used to pad input_ids.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
max_2d_position_embeddings (int, optional, defaults to 1024) —
The maximum value that the 2D position embedding might ever be used with. Typically set this to something large
just in case (e.g., 1024).
This is the configuration class to store the configuration of a LayoutLMModel. It is used to instantiate a
LayoutLM model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the LayoutLM
microsoft/layoutlm-base-uncased architecture.
Configuration objects inherit from BertConfig and can be used to control the model outputs. Read the
documentation from BertConfig for more information.
Examples:
from transformers import LayoutLMConfig, LayoutLMModel
# Initializing a LayoutLM configuration
configuration = LayoutLMConfig()
# Initializing a model (with random weights) from the configuration
model = LayoutLMModel(configuration)
# Accessing the model configuration
configuration = model.config
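The defaults can be overridden when building the configuration. Below is a minimal sketch (the sizes are purely illustrative and do not correspond to any released checkpoint) of instantiating a smaller LayoutLM variant:
from transformers import LayoutLMConfig, LayoutLMModel
# Illustrative hyperparameters for a deliberately small model
small_config = LayoutLMConfig(
    hidden_size=256,
    num_hidden_layers=4,
    num_attention_heads=4,
    intermediate_size=1024,
)
small_model = LayoutLMModel(small_config)  # randomly initialized weights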
LayoutLMTokenizer
class transformers.LayoutLMTokenizer
(
vocab_file
do_lower_case = True
do_basic_tokenize = True
never_split = None
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (bool, optional, defaults to True) —
Whether or not to do basic tokenization before WordPiece.
never_split (Iterable, optional) —
Collection of tokens which will never be split during tokenization. Only has an effect when
do_basic_tokenize=True
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original LayoutLM).
Construct a LayoutLM tokenizer. Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A LayoutLM sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
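For illustration, here is a minimal sketch (assuming the microsoft/layoutlm-base-uncased vocabulary) of calling the method directly on lists of token IDs:
from transformers import LayoutLMTokenizer
tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("total amount"))
single = tokenizer.build_inputs_with_special_tokens(ids_a)  # [CLS] A [SEP]
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)  # [CLS] A [SEP] B [SEP]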
convert_tokens_to_string
(
tokens
)
Converts a sequence of tokens (strings) into a single string.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A LayoutLM
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
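As a quick sketch (same illustrative checkpoint as above), the returned mask can be inspected directly:
from transformers import LayoutLMTokenizer
tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("total amount"))
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
# 0s cover [CLS] + sequence A + [SEP]; 1s cover sequence B + the final [SEP]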
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
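A minimal sketch (illustrative checkpoint as above) of the mask returned for a single sequence without special tokens:
from transformers import LayoutLMTokenizer
tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))
mask = tokenizer.get_special_tokens_mask(ids)
# e.g. [1, 0, 0, 1] here: the 1s mark the positions of the [CLS] and [SEP] that would be added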
LayoutLMTokenizerFast
class transformers.LayoutLMTokenizerFast
(
vocab_file = None
tokenizer_file = None
do_lower_case = True
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
clean_text (bool, optional, defaults to True) —
Whether or not to clean the text before tokenization by removing any control characters and replacing all
whitespaces by the classic one.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original LayoutLM).
wordpieces_prefix (str, optional, defaults to "##") —
The prefix for subwords.
Construct a “fast” LayoutLM tokenizer (backed by HuggingFace’s tokenizers library). Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0
token_ids_1 = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A LayoutLM sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A LayoutLM
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
LayoutLMModel
class transformers.LayoutLMModel
(
config
)
Parameters
config (LayoutLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare LayoutLM Model transformer outputting raw hidden-states without any specific head on top.
The LayoutLM model was proposed in LayoutLM: Pre-training of Text and Layout for Document Image
Understanding by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei and
Ming Zhou.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch
documentation for all matters related to general usage and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
bbox: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
bbox (torch.LongTensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence tokens. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1,
y1) represents the position of the lower right corner. See Overview for normalization.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for
tokens that are NOT MASKED, 0 for MASKED tokens.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1
indicates the head is not masked, 0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
If set to True, the attentions tensors of all attention layers are returned. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
If set to True, the hidden states of all layers are returned. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
If set to True, the model will return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LayoutLMConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The LayoutLMModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, LayoutLMModel
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")
words = ["Hello", "world"]
normalized_word_boxes = [637, 773, 693, 782], [698, 773, 733, 782]
token_boxes = []
for word, box in zip(words, normalized_word_boxes):
... word_tokens = tokenizer.tokenize(word)
... token_boxes.extend([box] * len(word_tokens))
# add bounding boxes of cls + sep tokens
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]
encoding = tokenizer(" ".join(words), return_tensors="pt")
input_ids = encoding["input_ids"]
attention_mask = encoding["attention_mask"]
token_type_ids = encoding["token_type_ids"]
bbox = torch.tensor([token_boxes])
outputs = model(
... input_ids=input_ids, bbox=bbox, attention_mask=attention_mask, token_type_ids=token_type_ids
... )
last_hidden_states = outputs.last_hidden_state
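The example above assumes word boxes that are already normalized to the 0-1000 range LayoutLM expects. If you start from pixel coordinates (for instance from an OCR engine), they can be scaled with a small helper such as the sketch below (the function name is just for illustration):
def normalize_bbox(bbox, width, height):
    # Scale pixel coordinates (x0, y0, x1, y1) to LayoutLM's 0-1000 grid
    return [
        int(1000 * (bbox[0] / width)),
        int(1000 * (bbox[1] / height)),
        int(1000 * (bbox[2] / width)),
        int(1000 * (bbox[3] / height)),
    ]
normalize_bbox([160, 120, 400, 180], width=800, height=600)  # [200, 200, 500, 300]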
LayoutLMForMaskedLM
class transformers.LayoutLMForMaskedLM
(
config
)
Parameters
config (LayoutLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LayoutLM Model with a language modeling head on top.
The LayoutLM model was proposed in LayoutLM: Pre-training of Text and Layout for Document Image
Understanding by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei and
Ming Zhou.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch
documentation for all matters related to general usage and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
bbox: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
encoder_hidden_states = None
encoder_attention_mask = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
bbox (torch.LongTensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence tokens. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1,
y1) represents the position of the lower right corner. See Overview for normalization.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for
tokens that are NOT MASKED, 0 for MASKED tokens.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1
indicates the head is not masked, 0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
If set to True, the attentions tensors of all attention layers are returned. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
If set to True, the hidden states of all layers are returned. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
If set to True, the model will return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LayoutLMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LayoutLMForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, LayoutLMForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForMaskedLM.from_pretrained("microsoft/layoutlm-base-uncased")
words = ["Hello", "[MASK]"]
normalized_word_boxes = [637, 773, 693, 782], [698, 773, 733, 782]
token_boxes = []
for word, box in zip(words, normalized_word_boxes):
... word_tokens = tokenizer.tokenize(word)
... token_boxes.extend([box] * len(word_tokens))
# add bounding boxes of cls + sep tokens
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]
encoding = tokenizer(" ".join(words), return_tensors="pt")
input_ids = encoding["input_ids"]
attention_mask = encoding["attention_mask"]
token_type_ids = encoding["token_type_ids"]
bbox = torch.tensor([token_boxes])
labels = tokenizer("Hello world", return_tensors="pt")["input_ids"]
outputs = model(
... input_ids=input_ids,
... bbox=bbox,
... attention_mask=attention_mask,
... token_type_ids=token_type_ids,
... labels=labels,
... )
loss = outputs.loss
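To inspect the prediction rather than the loss, you can continue the snippet above and decode the highest-scoring token at the masked position (an illustrative follow-up, not part of the original example):
masked_index = (input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token = tokenizer.decode(outputs.logits[0, masked_index].argmax(-1))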
LayoutLMForSequenceClassification
class transformers.LayoutLMForSequenceClassification
(
config
)
Parameters
config (LayoutLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LayoutLM Model with a sequence classification head on top (a linear layer on top of the pooled output) e.g. for
document image classification tasks such as the RVL-CDIP dataset.
The LayoutLM model was proposed in LayoutLM: Pre-training of Text and Layout for Document Image
Understanding by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei and
Ming Zhou.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch
documentation for all matters related to general usage and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
bbox: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
bbox (torch.LongTensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence tokens. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1,
y1) represents the position of the lower right corner. See Overview for normalization.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for
tokens that are NOT MASKED, 0 for MASKED tokens.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1
indicates the head is not masked, 0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
If set to True, the attentions tensors of all attention layers are returned. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
If set to True, the hidden states of all layers are returned. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
If set to True, the model will return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LayoutLMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LayoutLMForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, LayoutLMForSequenceClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForSequenceClassification.from_pretrained("microsoft/layoutlm-base-uncased")
words = ["Hello", "world"]
normalized_word_boxes = [637, 773, 693, 782], [698, 773, 733, 782]
token_boxes = []
for word, box in zip(words, normalized_word_boxes):
... word_tokens = tokenizer.tokenize(word)
... token_boxes.extend([box] * len(word_tokens))
# add bounding boxes of cls + sep tokens
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]
encoding = tokenizer(" ".join(words), return_tensors="pt")
input_ids = encoding["input_ids"]
attention_mask = encoding["attention_mask"]
token_type_ids = encoding["token_type_ids"]
bbox = torch.tensor([token_boxes])
sequence_label = torch.tensor([1])
outputs = model(
... input_ids=input_ids,
... bbox=bbox,
... attention_mask=attention_mask,
... token_type_ids=token_type_ids,
... labels=sequence_label,
... )
loss = outputs.loss
logits = outputs.logits
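The predicted class is the argmax over the logits. Continuing the snippet above (an illustrative follow-up; the label names depend on the id2label mapping of your fine-tuned checkpoint):
predicted_class_id = logits.argmax(-1).item()
predicted_label = model.config.id2label[predicted_class_id]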
LayoutLMForTokenClassification
class transformers.LayoutLMForTokenClassification
(
config
)
Parameters
config (LayoutLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LayoutLM Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
sequence labeling (information extraction) tasks such as the FUNSD
dataset and the SROIE dataset.
The LayoutLM model was proposed in LayoutLM: Pre-training of Text and Layout for Document Image
Understanding by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei and
Ming Zhou.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch
documentation for all matters related to general usage and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
bbox: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
bbox (torch.LongTensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence tokens. Selected in the range [0, config.max_2d_position_embeddings-1]. Each bounding box should be a normalized version in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1,
y1) represents the position of the lower right corner. See Overview for normalization.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for
tokens that are NOT MASKED, 0 for MASKED tokens.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1
indicates the head is not masked, 0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
If set to True, the attentions tensors of all attention layers are returned. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
If set to True, the hidden states of all layers are returned. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
If set to True, the model will return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LayoutLMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LayoutLMForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, LayoutLMForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained("microsoft/layoutlm-base-uncased")
words = ["Hello", "world"]
normalized_word_boxes = [637, 773, 693, 782], [698, 773, 733, 782]
token_boxes = []
for word, box in zip(words, normalized_word_boxes):
... word_tokens = tokenizer.tokenize(word)
... token_boxes.extend([box] * len(word_tokens))
# add bounding boxes of cls + sep tokens
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]
encoding = tokenizer(" ".join(words), return_tensors="pt")
input_ids = encoding["input_ids"]
attention_mask = encoding["attention_mask"]
token_type_ids = encoding["token_type_ids"]
bbox = torch.tensor([token_boxes])
token_labels = torch.tensor([1, 1, 0, 0]).unsqueeze(0) # batch size of 1
outputs = model(
... input_ids=input_ids,
... bbox=bbox,
... attention_mask=attention_mask,
... token_type_ids=token_type_ids,
... labels=token_labels,
... )
loss = outputs.loss
logits = outputs.logits
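Per-token predictions can be read off the logits in the same way. Continuing the snippet above (illustrative; meaningful label names require a fine-tuned checkpoint with its own id2label mapping):
predictions = logits.argmax(-1)  # shape (batch_size, sequence_length)
predicted_labels = [model.config.id2label[p.item()] for p in predictions[0]]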
LayoutLMForQuestionAnswering
class transformers.LayoutLMForQuestionAnswering
(
config
has_visual_segment_embedding = True
)
Parameters
config (LayoutLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LayoutLM Model with a span classification head on top for extractive question-answering tasks such as
DocVQA (a linear layer on top of the final hidden-states output to compute span start logits and span end logits).
The LayoutLM model was proposed in LayoutLM: Pre-training of Text and Layout for Document Image
Understanding by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei and
Ming Zhou.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
bbox: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LayoutLMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
start_positions (torch.LongTensor of shape (batch_size,), optional):
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional):
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Example:
In the example below, we prepare a question + context pair for the LayoutLM model. It will give us a prediction
of what it thinks the answer is (the span of the answer within the texts parsed from the image).
from transformers import AutoTokenizer, LayoutLMForQuestionAnswering
from datasets import load_dataset
import torch
tokenizer = AutoTokenizer.from_pretrained("impira/layoutlm-document-qa", add_prefix_space=True)
model = LayoutLMForQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="1e3ebac")
dataset = load_dataset("nielsr/funsd", split="train")
example = dataset[0]
question = "what's his name?"
words = example["words"]
boxes = example["bboxes"]
encoding = tokenizer(
... question.split(), words, is_split_into_words=True, return_token_type_ids=True, return_tensors="pt"
... )
bbox = []
for i, s, w in zip(encoding.input_ids[0], encoding.sequence_ids(0), encoding.word_ids(0)):
... if s == 1:
... bbox.append(boxes[w])
... elif i == tokenizer.sep_token_id:
... bbox.append([1000] * 4)
... else:
... bbox.append([0] * 4)
encoding["bbox"] = torch.tensor([bbox])
word_ids = encoding.word_ids(0)
outputs = model(**encoding)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
start, end = word_ids[start_scores.argmax(-1)], word_ids[end_scores.argmax(-1)]
print(" ".join(words[start : end + 1]))
M. Hamann P. Harper, P. Martinez
TFLayoutLMModel
class transformers.TFLayoutLMModel
(
*args
**kwargs
)
Parameters
config (LayoutLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare LayoutLM Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0
documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
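For instance, a minimal sketch (assuming the base uncased checkpoint and dummy all-zero boxes, just to illustrate the calling convention) of the dictionary format:
import tensorflow as tf
from transformers import AutoTokenizer, TFLayoutLMModel
tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = TFLayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")
encoding = tokenizer("Hello world", return_tensors="tf")
seq_len = encoding["input_ids"].shape[1]
inputs = {
    "input_ids": encoding["input_ids"],
    "attention_mask": encoding["attention_mask"],
    "token_type_ids": encoding["token_type_ids"],
    # dummy boxes only to make the call well-formed; real boxes come from OCR
    "bbox": tf.zeros((1, seq_len, 4), dtype=tf.int32),
}
outputs = model(inputs)  # all input tensors gathered in the first positional argument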
call
(
input_ids: TFModelInputType | None = None
bbox: np.ndarray | tf.Tensor | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
encoder_hidden_states: np.ndarray | tf.Tensor | None = None
encoder_attention_mask: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
bbox (Numpy array or tf.Tensor of shape (batch_size, sequence_length, 4), optional) —
Bounding Boxes of each input sequence tokens. Selected in the range [0, config.max_2d_position_embeddings- 1].
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (LayoutLMConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input, you’re often better with
averaging or pooling the sequence of hidden-states for the whole input sequence.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The TFLayoutLMModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, TFLayoutLMModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = TFLayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")
words = ["Hello", "world"]
normalized_word_boxes = [637, 773, 693, 782], [698, 773, 733, 782]
token_boxes = []
for word, box in zip(words, normalized_word_boxes):
... word_tokens = tokenizer.tokenize(word)
... token_boxes.extend([box] * len(word_tokens))
# add bounding boxes of cls + sep tokens
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]
encoding = tokenizer(" ".join(words), return_tensors="tf")
input_ids = encoding["input_ids"]
attention_mask = encoding["attention_mask"]
token_type_ids = encoding["token_type_ids"]
bbox = tf.convert_to_tensor([token_boxes])
outputs = model(
... input_ids=input_ids, bbox=bbox, attention_mask=attention_mask, token_type_ids=token_type_ids
... )
last_hidden_states = outputs.last_hidden_state
TFLayoutLMForMaskedLM
class transformers.TFLayoutLMForMaskedLM
(
*args
**kwargs
)
Parameters
config (LayoutLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LayoutLM Model with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0
documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
bbox: np.ndarray | tf.Tensor | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
bbox (Numpy array or tf.Tensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence token. Selected in the range [0, config.max_2d_position_embeddings - 1].
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (LayoutLMConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFLayoutLMForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, TFLayoutLMForMaskedLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = TFLayoutLMForMaskedLM.from_pretrained("microsoft/layoutlm-base-uncased")
words = ["Hello", "[MASK]"]
normalized_word_boxes = [637, 773, 693, 782], [698, 773, 733, 782]
token_boxes = []
for word, box in zip(words, normalized_word_boxes):
    word_tokens = tokenizer.tokenize(word)
    token_boxes.extend([box] * len(word_tokens))
# add bounding boxes of cls + sep tokens
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]
encoding = tokenizer(" ".join(words), return_tensors="tf")
input_ids = encoding["input_ids"]
attention_mask = encoding["attention_mask"]
token_type_ids = encoding["token_type_ids"]
bbox = tf.convert_to_tensor([token_boxes])
labels = tokenizer("Hello world", return_tensors="tf")["input_ids"]
outputs = model(
    input_ids=input_ids,
    bbox=bbox,
    attention_mask=attention_mask,
    token_type_ids=token_type_ids,
    labels=labels,
)
loss = outputs.loss
TFLayoutLMForSequenceClassification
class transformers.TFLayoutLMForSequenceClassification
(
*args
**kwargs
)
Parameters
config (LayoutLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LayoutLM Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
bbox: np.ndarray | tf.Tensor | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
bbox (Numpy array or tf.Tensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence token. Selected in the range [0, config.max_2d_position_embeddings - 1].
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (mean-square loss); if
config.num_labels > 1, a classification loss is computed (cross-entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (LayoutLMConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFLayoutLMForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, TFLayoutLMForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = TFLayoutLMForSequenceClassification.from_pretrained("microsoft/layoutlm-base-uncased")
words = ["Hello", "world"]
normalized_word_boxes = [637, 773, 693, 782], [698, 773, 733, 782]
token_boxes = []
for word, box in zip(words, normalized_word_boxes):
    word_tokens = tokenizer.tokenize(word)
    token_boxes.extend([box] * len(word_tokens))
# add bounding boxes of cls + sep tokens
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]
encoding = tokenizer(" ".join(words), return_tensors="tf")
input_ids = encoding["input_ids"]
attention_mask = encoding["attention_mask"]
token_type_ids = encoding["token_type_ids"]
bbox = tf.convert_to_tensor([token_boxes])
sequence_label = tf.convert_to_tensor([1])
outputs = model(
    input_ids=input_ids,
    bbox=bbox,
    attention_mask=attention_mask,
    token_type_ids=token_type_ids,
    labels=sequence_label,
)
loss = outputs.loss
logits = outputs.logits
TFLayoutLMForTokenClassification
class transformers.TFLayoutLMForTokenClassification
(
*args
**kwargs
)
Parameters
config (LayoutLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LayoutLM Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
bbox: np.ndarray | tf.Tensor | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
bbox (Numpy array or tf.Tensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence token. Selected in the range [0, config.max_2d_position_embeddings - 1].
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (LayoutLMConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFLayoutLMForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
import tensorflow as tf
from transformers import AutoTokenizer, TFLayoutLMForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = TFLayoutLMForTokenClassification.from_pretrained("microsoft/layoutlm-base-uncased")
words = ["Hello", "world"]
normalized_word_boxes = [637, 773, 693, 782], [698, 773, 733, 782]
token_boxes = []
for word, box in zip(words, normalized_word_boxes):
    word_tokens = tokenizer.tokenize(word)
    token_boxes.extend([box] * len(word_tokens))
# add bounding boxes of cls + sep tokens
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]
encoding = tokenizer(" ".join(words), return_tensors="tf")
input_ids = encoding["input_ids"]
attention_mask = encoding["attention_mask"]
token_type_ids = encoding["token_type_ids"]
bbox = tf.convert_to_tensor([token_boxes])
token_labels = tf.convert_to_tensor([1, 1, 0, 0])
outputs = model(
    input_ids=input_ids,
    bbox=bbox,
    attention_mask=attention_mask,
    token_type_ids=token_type_ids,
    labels=token_labels,
)
loss = outputs.loss
logits = outputs.logits
TFLayoutLMForQuestionAnswering
class transformers.TFLayoutLMForQuestionAnswering
(
*args
**kwargs
)
Parameters
config (LayoutLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LayoutLM Model with a span classification head on top for extractive question-answering tasks such as
DocVQA (a linear layer on top of the final hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
bbox: np.ndarray | tf.Tensor | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
start_positions: np.ndarray | tf.Tensor | None = None
end_positions: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
bbox (Numpy array or tf.Tensor of shape (batch_size, sequence_length, 4), optional) —
Bounding boxes of each input sequence token. Selected in the range [0, config.max_2d_position_embeddings - 1].
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for the position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length); positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for the position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length); positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (LayoutLMConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFLayoutLMForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
import tensorflow as tf
from transformers import AutoTokenizer, TFLayoutLMForQuestionAnswering
from datasets import load_dataset
tokenizer = AutoTokenizer.from_pretrained("impira/layoutlm-document-qa", add_prefix_space=True)
model = TFLayoutLMForQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="1e3ebac")
dataset = load_dataset("nielsr/funsd", split="train")
example = dataset[0]
question = "what's his name?"
words = example["words"]
boxes = example["bboxes"]
encoding = tokenizer(
    question.split(), words, is_split_into_words=True, return_token_type_ids=True, return_tensors="tf"
)
bbox = []
for i, s, w in zip(encoding.input_ids[0], encoding.sequence_ids(0), encoding.word_ids(0)):
    if s == 1:
        bbox.append(boxes[w])
    elif i == tokenizer.sep_token_id:
        bbox.append([1000] * 4)
    else:
        bbox.append([0] * 4)
encoding["bbox"] = tf.convert_to_tensor([bbox])
word_ids = encoding.word_ids(0)
outputs = model(**encoding)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
start, end = word_ids[tf.math.argmax(start_scores, -1)[0]], word_ids[tf.math.argmax(end_scores, -1)[0]]
print(" ".join(words[start : end + 1]))
M. Hamann P. Harper, P. Martinez
CLIP
Overview
The CLIP model was proposed in Learning Transferable Visual Models From Natural Language Supervision by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh,
Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. CLIP
(Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be
instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing
for the task, similarly to the zero-shot capabilities of GPT-2 and 3.
The abstract from the paper is the following:
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This
restricted form of supervision limits their generality and usability since additional labeled data is needed to specify
any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a
much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes
with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400
million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference
learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study
the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks
such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The
model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need
for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot
without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained
model weights at this https URL.
Usage
CLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image
classification. CLIP uses a ViT-like transformer to get visual features and a causal language model to get the text
features. Both the text and visual features are then projected to a latent space with identical dimension. The dot
product between the projected image and text features is then used as a similarity score.
To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches,
which are then linearly embedded. A [CLS] token is added to serve as the representation of an entire image. The authors
also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder.
The CLIPImageProcessor can be used to resize (or rescale) and normalize images for the model.
The CLIPTokenizer is used to encode the text. The CLIPProcessor wraps
CLIPImageProcessor and CLIPTokenizer into a single instance to both
encode the text and prepare the images. The following example shows how to get the image-text similarity scores using
CLIPProcessor and CLIPModel.
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
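The projection and dot-product step described above can also be reproduced by hand with get_text_features and get_image_features. The following is a minimal sketch, not part of the original documentation, that reuses the model, processor, prompts and image from the example above:
import torch

text_inputs = processor(text=["a photo of a cat", "a photo of a dog"], return_tensors="pt", padding=True)
image_inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    text_embeds = model.get_text_features(**text_inputs)  # projected text features, shape (2, projection_dim)
    image_embeds = model.get_image_features(**image_inputs)  # projected image features, shape (1, projection_dim)

# normalize and take the dot product, as described above
text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
similarity = image_embeds @ text_embeds.T  # cosine similarity between the image and each prompt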
This model was contributed by valhalla. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIP.
A blog post on How to fine-tune CLIP on 10,000 image-text pairs.
CLIP is supported by this example script.
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.
CLIPConfig
class transformers.CLIPConfig
(
text_config = None
vision_config = None
projection_dim = 512
logit_scale_init_value = 2.6592
**kwargs
)
Parameters
text_config (dict, optional) —
Dictionary of configuration options used to initialize CLIPTextConfig.
vision_config (dict, optional) —
Dictionary of configuration options used to initialize CLIPVisionConfig.
projection_dim (int, optional, defaults to 512) —
Dimensionality of the text and vision projection layers.
logit_scale_init_value (float, optional, defaults to 2.6592) —
The initial value of the logit_scale parameter. The default is used as per the original CLIP implementation.
kwargs (optional) —
Dictionary of keyword arguments.
CLIPConfig is the configuration class to store the configuration of a CLIPModel. It is used to instantiate
a CLIP model according to the specified arguments, defining the text model and vision model configs. Instantiating
a configuration with the defaults will yield a similar configuration to that of the CLIP
openai/clip-vit-base-patch32 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import CLIPConfig, CLIPModel
# Initializing a CLIPConfig with openai/clip-vit-base-patch32 style configuration
configuration = CLIPConfig()
# Initializing a CLIPModel (with random weights) from the openai/clip-vit-base-patch32 style configuration
model = CLIPModel(configuration)
# Accessing the model configuration
configuration = model.config
# We can also initialize a CLIPConfig from a CLIPTextConfig and a CLIPVisionConfig
from transformers import CLIPTextConfig, CLIPVisionConfig
# Initializing a CLIPText and CLIPVision configuration
config_text = CLIPTextConfig()
config_vision = CLIPVisionConfig()
config = CLIPConfig.from_text_vision_configs(config_text, config_vision)
from_text_vision_configs
(
text_config: CLIPTextConfig
vision_config: CLIPVisionConfig
**kwargs
)
→
CLIPConfig
Returns
CLIPConfig
An instance of a configuration object
Instantiate a CLIPConfig (or a derived class) from clip text model configuration and clip vision model
configuration.
CLIPTextConfig
class transformers.CLIPTextConfig
(
vocab_size = 49408
hidden_size = 512
intermediate_size = 2048
projection_dim = 512
num_hidden_layers = 12
num_attention_heads = 8
max_position_embeddings = 77
hidden_act = 'quick_gelu'
layer_norm_eps = 1e-05
attention_dropout = 0.0
initializer_range = 0.02
initializer_factor = 1.0
pad_token_id = 1
bos_token_id = 49406
eos_token_id = 49407
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 49408) —
Vocabulary size of the CLIP text model. Defines the number of different tokens that can be represented by
the inputs_ids passed when calling CLIPModel.
hidden_size (int, optional, defaults to 512) —
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 2048) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer encoder.
max_position_embeddings (int, optional, defaults to 77) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
hidden_act (str or function, optional, defaults to "quick_gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu", "gelu_new" and "quick_gelu" are supported.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (float, optional, defaults to 1) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
This is the configuration class to store the configuration of a CLIPTextModel. It is used to instantiate a CLIP
text encoder according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the text encoder of the CLIP
openai/clip-vit-base-patch32 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import CLIPTextConfig, CLIPTextModel
# Initializing a CLIPTextConfig with openai/clip-vit-base-patch32 style configuration
configuration = CLIPTextConfig()
# Initializing a CLIPTextModel (with random weights) from the openai/clip-vit-base-patch32 style configuration
model = CLIPTextModel(configuration)
# Accessing the model configuration
configuration = model.config
CLIPVisionConfig
class transformers.CLIPVisionConfig
(
hidden_size = 768
intermediate_size = 3072
projection_dim = 512
num_hidden_layers = 12
num_attention_heads = 12
num_channels = 3
image_size = 224
patch_size = 32
hidden_act = 'quick_gelu'
layer_norm_eps = 1e-05
attention_dropout = 0.0
initializer_range = 0.02
initializer_factor = 1.0
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 32) —
The size (resolution) of each patch.
hidden_act (str or function, optional, defaults to "quick_gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu", "gelu_new" and "quick_gelu" are supported.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (float, optional, defaults to 1) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
This is the configuration class to store the configuration of a CLIPVisionModel. It is used to instantiate a
CLIP vision encoder according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the vision encoder of the CLIP
openai/clip-vit-base-patch32 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import CLIPVisionConfig, CLIPVisionModel
# Initializing a CLIPVisionConfig with openai/clip-vit-base-patch32 style configuration
configuration = CLIPVisionConfig()
# Initializing a CLIPVisionModel (with random weights) from the openai/clip-vit-base-patch32 style configuration
model = CLIPVisionModel(configuration)
# Accessing the model configuration
configuration = model.config
CLIPTokenizer
class transformers.CLIPTokenizer
(
vocab_file
merges_file
errors = 'replace'
unk_token = '<|endoftext|>'
bos_token = '<|startoftext|>'
eos_token = '<|endoftext|>'
pad_token = '<|endoftext|>'
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
unk_token (str, optional, defaults to <|endoftext|>) —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (str, optional, defaults to <|startoftext|>) —
The beginning of sequence token.
eos_token (str, optional, defaults to <|endoftext|>) —
The end of sequence token.
Construct a CLIP tokenizer. Based on byte-level Byte-Pair-Encoding.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A CLIP sequence has the following format:
single sequence: <|startoftext|> X <|endoftext|>
Pairs of sequences are not the expected use case, but they will be handled without a separator.
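As a small illustration of this format (a sketch, not from the original documentation), decoding the encoded IDs shows the added special tokens:
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
ids = tokenizer("a photo of a cat")["input_ids"]
print(tokenizer.decode(ids))  # expected to start with <|startoftext|> and end with <|endoftext|>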
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
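For example (a sketch, not from the original documentation, reusing the tokenizer from the snippet above), the mask can also be recovered from already-encoded IDs by passing already_has_special_tokens=True:
ids = tokenizer("a photo of a cat")["input_ids"]
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
# mask marks the <|startoftext|> and <|endoftext|> positions with 1 and the regular tokens with 0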
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed. CLIP does not make use of token type ids, therefore a list of
zeros is returned.
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
CLIPTokenizerFast
class transformers.CLIPTokenizerFast
(
vocab_file = None
merges_file = None
tokenizer_file = None
unk_token = '<|endoftext|>'
bos_token = '<|startoftext|>'
eos_token = '<|endoftext|>'
pad_token = '<|endoftext|>'
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
unk_token (str, optional, defaults to <|endoftext|>) —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
bos_token (str, optional, defaults to <|startoftext|>) —
The beginning of sequence token.
eos_token (str, optional, defaults to <|endoftext|>) —
The end of sequence token.
Construct a “fast” CLIP tokenizer (backed by HuggingFace’s tokenizers library). Based on byte-level
Byte-Pair-Encoding.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
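As a quick sketch (not part of the original documentation), the fast tokenizer can be loaded from the same checkpoint and, unlike the slow tokenizer, can return character offsets for each token:
from transformers import CLIPTokenizerFast

tokenizer = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32")
encoding = tokenizer("a photo of a cat", return_offsets_mapping=True)
print(encoding["input_ids"])
print(encoding["offset_mapping"])  # (start, end) character spans, only available with fast tokenizers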
build_inputs_with_special_tokens
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A CLIP sequence has the following format:
single sequence: <|startoftext|> X <|endoftext|>
Pairs of sequences are not the expected use case, but they will be handled without a separator.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed. CLIP does not make use of token type ids, therefore a list of
zeros is returned.
CLIPImageProcessor
class transformers.CLIPImageProcessor
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BICUBIC: 3>
do_center_crop: bool = True
crop_size: typing.Dict[str, int] = None
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_convert_rgb: bool = True
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by
do_resize in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 224}) —
Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with
the longest edge resized to keep the input aspect ratio. Can be overridden by size in the preprocess
method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) —
Resampling filter to use if resizing the image. Can be overridden by resample in the preprocess method.
do_center_crop (bool, optional, defaults to True) —
Whether to center crop the image to the specified crop_size. Can be overridden by do_center_crop in the
preprocess method.
crop_size (Dict[str, int] optional, defaults to 224) —
Size of the output image after applying center_crop. Can be overridden by crop_size in the preprocess
method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by do_rescale in
the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by rescale_factor in the preprocess
method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by do_normalize in the preprocess method.
image_mean (float or List[float], optional, defaults to [0.48145466, 0.4578275, 0.40821073]) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to [0.26862954, 0.26130258, 0.27577711]) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
do_convert_rgb (bool, optional, defaults to True) —
Whether to convert the image to RGB.
Constructs a CLIP image processor.
preprocess
<
source
>
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: bool = None
size: typing.Dict[str, int] = None
resample: Resampling = None
do_center_crop: bool = None
crop_size: int = None
do_rescale: bool = None
rescale_factor: float = None
do_normalize: bool = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_convert_rgb: bool = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Optional[transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resizing. Shortest edge of the image is resized to size[“shortest_edge”], with
the longest edge resized to keep the input aspect ratio.
resample (int, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only
has an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the center crop. Only has an effect if do_center_crop is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image.
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean to use for normalization. Only has an effect if do_normalize is set to True.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation to use for normalization. Only has an effect if do_normalize is set to
True.
do_convert_rgb (bool, optional, defaults to self.do_convert_rgb) —
Whether to convert the image to RGB.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Unset: defaults to the channel dimension format of the input image.
Preprocess an image or batch of images.
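A minimal usage sketch (not from the original documentation), assuming a PIL image loaded as in the usage example earlier on this page:
from transformers import CLIPImageProcessor

image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
inputs = image_processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # expected: (1, 3, 224, 224) with the default size and crop_size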
CLIPFeatureExtractor
class transformers.CLIPFeatureExtractor
(
*args
**kwargs
)
CLIPProcessor
class transformers.CLIPProcessor
(
image_processor = None
tokenizer = None
**kwargs
)
Parameters
image_processor (CLIPImageProcessor) —
The image processor is a required input.
tokenizer (CLIPTokenizerFast) —
The tokenizer is a required input.
Constructs a CLIP processor which wraps a CLIP image processor and a CLIP tokenizer into a single processor.
CLIPProcessor offers all the functionalities of CLIPImageProcessor and CLIPTokenizerFast. See the
__call__() and decode() for more information.
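As an illustrative sketch (checkpoint name assumed), one processor call prepares both modalities: the wrapped tokenizer produces input_ids and attention_mask, and the wrapped image processor produces pixel_values:
from PIL import Image
import requests
from transformers import CLIPProcessor
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
print(sorted(inputs.keys()))  # e.g. ['attention_mask', 'input_ids', 'pixel_values']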
batch_decode
(
*args
**kwargs
)
This method forwards all its arguments to CLIPTokenizerFast’s batch_decode(). Please
refer to the docstring of this method for more information.
decode
(
*args
**kwargs
)
This method forwards all its arguments to CLIPTokenizerFast’s decode(). Please refer to
the docstring of this method for more information.
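A small sketch of this forwarding behaviour (checkpoint name assumed): token ids produced by the processor can be turned back into text with these helpers, which simply delegate to the wrapped tokenizer:
from transformers import CLIPProcessor
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
enc = processor(text=["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
print(processor.batch_decode(enc.input_ids, skip_special_tokens=True))  # e.g. ['a photo of a cat', 'a photo of a dog']
print(processor.decode(enc.input_ids[0], skip_special_tokens=True))  # e.g. 'a photo of a cat'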
CLIPModel
class transformers.CLIPModel
(
config: CLIPConfig
)
Parameters
config (CLIPConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
return_loss: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.clip.modeling_clip.CLIPOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.__call__() for details.
return_loss (bool, optional) —
Whether or not to return the contrastive loss.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.clip.modeling_clip.CLIPOutput or tuple(torch.FloatTensor)
A transformers.models.clip.modeling_clip.CLIPOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPConfig'>) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity.
logits_per_image (torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text
similarity scores.
logits_per_text (torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image
similarity scores.
text_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of CLIPTextModel.
image_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of CLIPVisionModel.
text_model_output (BaseModelOutputWithPooling) — The output of the CLIPTextModel.
vision_model_output (BaseModelOutputWithPooling) — The output of the CLIPVisionModel.
The CLIPModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, CLIPModel
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
... )
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
get_text_features
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
text_features (torch.FloatTensor of shape (batch_size, output_dim)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
text_features (torch.FloatTensor of shape (batch_size, output_dim)
The text embeddings obtained by
applying the projection layer to the pooled output of CLIPTextModel.
The CLIPModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, CLIPModel
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
text_features = model.get_text_features(**inputs)
get_image_features
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
image_features (torch.FloatTensor of shape (batch_size, output_dim)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
image_features (torch.FloatTensor of shape (batch_size, output_dim)
The image embeddings obtained by
applying the projection layer to the pooled output of CLIPVisionModel.
The CLIPModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, CLIPModel
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
image_features = model.get_image_features(**inputs)
CLIPTextModel
class transformers.CLIPTextModel
(
config: CLIPTextConfig
)
Parameters
config (CLIPTextConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The text model from CLIP without any head or projection on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPTextConfig'>) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CLIPTextModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, CLIPTextModel
model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output # pooled (EOS token) states
CLIPTextModelWithProjection
class transformers.CLIPTextModelWithProjection
(
config: CLIPTextConfig
)
Parameters
config (CLIPTextConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CLIP Text Model with a projection layer on top (a linear layer on top of the pooled output).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.clip.modeling_clip.CLIPTextModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.clip.modeling_clip.CLIPTextModelOutput or tuple(torch.FloatTensor)
A transformers.models.clip.modeling_clip.CLIPTextModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPTextConfig'>) and inputs.
text_embeds (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when model is initialized with with_projection=True) — The text embeddings obtained by applying the projection layer to the pooler_output.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CLIPTextModelWithProjection forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, CLIPTextModelWithProjection
model = CLIPTextModelWithProjection.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
outputs = model(**inputs)
text_embeds = outputs.text_embeds
CLIPVisionModelWithProjection
class transformers.CLIPVisionModelWithProjection
(
config: CLIPVisionConfig
)
Parameters
config (CLIPVisionConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
CLIP Vision Model with a projection layer on top (a linear layer on top of the pooled output).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.clip.modeling_clip.CLIPVisionModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.clip.modeling_clip.CLIPVisionModelOutput or tuple(torch.FloatTensor)
A transformers.models.clip.modeling_clip.CLIPVisionModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPVisionConfig'>) and inputs.
image_embeds (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when model is initialized with with_projection=True) — The image embeddings obtained by applying the projection layer to the pooler_output.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CLIPVisionModelWithProjection forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, CLIPVisionModelWithProjection
model = CLIPVisionModelWithProjection.from_pretrained("openai/clip-vit-base-patch32")
processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
image_embeds = outputs.image_embeds
CLIPVisionModel
class transformers.CLIPVisionModel
(
config: CLIPVisionConfig
)
Parameters
config (CLIPVisionConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The vision model from CLIP without any head or projection on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPVisionConfig'>) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CLIPVisionModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, CLIPVisionModel
model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output # pooled CLS states
TFCLIPModel
class transformers.TFCLIPModel
(
*args
**kwargs
)
Parameters
config (CLIPConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
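For illustration, a hedged sketch of the first two input formats with TFCLIPModel (checkpoint name assumed; both calls are expected to produce the same outputs):
from PIL import Image
import requests
from transformers import AutoProcessor, TFCLIPModel
model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat"], images=image, return_tensors="tf", padding=True)
# format 1: all inputs as keyword arguments
outputs = model(input_ids=inputs["input_ids"], pixel_values=inputs["pixel_values"], attention_mask=inputs["attention_mask"])
# format 2: all inputs as a dict in the first positional argument
outputs = model({"input_ids": inputs["input_ids"], "pixel_values": inputs["pixel_values"], "attention_mask": inputs["attention_mask"]})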
call
(
input_ids: TFModelInputType | None = None
pixel_values: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
return_loss: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.models.clip.modeling_tf_clip.TFCLIPOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
CLIPImageProcessor.__call__() for details.
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
return_loss (bool, optional) —
Whether or not to return the contrastive loss.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.models.clip.modeling_tf_clip.TFCLIPOutput or tuple(tf.Tensor)
A transformers.models.clip.modeling_tf_clip.TFCLIPOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (<class 'transformers.models.clip.configuration_clip.CLIPConfig'>) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity.
logits_per_image (tf.Tensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text
similarity scores.
logits_per_text (tf.Tensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image
similarity scores.
text_embeds (tf.Tensor of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of TFCLIPTextModel.
image_embeds (tf.Tensor of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of
TFCLIPVisionModel.
text_model_output (TFBaseModelOutputWithPooling) — The output of the TFCLIPTextModel.
vision_model_output (TFBaseModelOutputWithPooling) — The output of the TFCLIPVisionModel.
The TFCLIPModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
import tensorflow as tf
from PIL import Image
import requests
from transformers import AutoProcessor, TFCLIPModel
model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="tf", padding=True
... )
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = tf.nn.softmax(logits_per_image, axis=1) # we can take the softmax to get the label probabilities
get_text_features
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
text_features (tf.Tensor of shape (batch_size, output_dim)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
text_features (tf.Tensor of shape (batch_size, output_dim)
The text embeddings obtained by applying
the projection layer to the pooled output of TFCLIPTextModel.
The TFCLIPModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, TFCLIPModel
model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="tf")
text_features = model.get_text_features(**inputs)
get_image_features
(
pixel_values: TFModelInputType | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
image_features (tf.Tensor of shape (batch_size, output_dim)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
CLIPImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
image_features (tf.Tensor of shape (batch_size, output_dim)
The image embeddings obtained by applying
the projection layer to the pooled output of TFCLIPVisionModel.
The TFCLIPModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, TFCLIPModel
model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="tf")
image_features = model.get_image_features(**inputs)
TFCLIPTextModel
class transformers.TFCLIPTextModel
(
*args
**kwargs
)
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (<class 'transformers.models.clip.configuration_clip.CLIPTextConfig'>) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input, you’re often better with
averaging or pooling the sequence of hidden-states for the whole input sequence.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFCLIPTextModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, TFCLIPTextModel
model = TFCLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="tf")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output # pooled (EOS token) states
TFCLIPVisionModel
class transformers.TFCLIPVisionModel
(
*args
**kwargs
)
call
(
pixel_values: TFModelInputType | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
CLIPImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (<class 'transformers.models.clip.configuration_clip.CLIPVisionConfig'>) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input, you’re often better with
averaging or pooling the sequence of hidden-states for the whole input sequence.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFCLIPVisionModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, TFCLIPVisionModel
model = TFCLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="tf")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output # pooled CLS states
FlaxCLIPModel
class transformers.FlaxCLIPModel
(
config: CLIPConfig
input_shape: typing.Optional[typing.Tuple] = None
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (CLIPConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16() (a short sketch follows the class description below).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
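As a short sketch of the dtype argument described above (checkpoint name assumed): dtype only changes the precision of the computation, while to_fp16() casts the parameters themselves:
import jax.numpy as jnp
from transformers import FlaxCLIPModel
model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32", dtype=jnp.float16)  # forward pass runs in float16
model.params = model.to_fp16(model.params)  # optionally also store the weights in half precision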
__call__
(
input_ids
pixel_values
attention_mask = None
position_ids = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
pixel_values (numpy.ndarray of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput or tuple(torch.FloatTensor)
A transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPConfig'>) and inputs.
logits_per_image (jnp.ndarray of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text
similarity scores.
logits_per_text (jnp.ndarray of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image
similarity scores.
text_embeds (jnp.ndarray of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of
FlaxCLIPTextModel.
image_embeds (jnp.ndarray of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of
FlaxCLIPVisionModel.
text_model_output (FlaxBaseModelOutputWithPooling) — The output of the FlaxCLIPTextModel.
vision_model_output (FlaxBaseModelOutputWithPooling) — The output of the FlaxCLIPVisionModel.
The FlaxCLIPPreTrainedModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import jax
from PIL import Image
import requests
from transformers import AutoProcessor, FlaxCLIPModel
model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="np", padding=True
... )
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = jax.nn.softmax(logits_per_image, axis=1) # we can take the softmax to get the label probabilities
get_text_features
(
input_ids
attention_mask = None
position_ids = None
params: dict = None
dropout_rng: PRNGKey = None
train = False
)
→
text_features (jnp.ndarray of shape (batch_size, output_dim))
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
Returns
text_features (jnp.ndarray of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of FlaxCLIPTextModel.
Examples:
from transformers import AutoTokenizer, FlaxCLIPModel
model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="np")
text_features = model.get_text_features(**inputs)
get_image_features
(
pixel_values
params: dict = None
dropout_rng: PRNGKey = None
train = False
)
→
image_features (jnp.ndarray of shape (batch_size, output_dim))
Parameters
pixel_values (numpy.ndarray of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained
using AutoImageProcessor. See CLIPImageProcessor.__call__() for details.
Returns
image_features (jnp.ndarray of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of FlaxCLIPVisionModel.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, FlaxCLIPModel
model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="np")
image_features = model.get_image_features(**inputs)
FlaxCLIPTextModel
class transformers.FlaxCLIPTextModel
(
config: CLIPTextConfig
input_shape = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
__call__
(
input_ids
attention_mask = None
position_ids = None
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPTextConfig'>) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxCLIPTextPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxCLIPTextModel
model = FlaxCLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="np")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooler_output = outputs.pooler_output # pooled (EOS token) states
FlaxCLIPVisionModel
class transformers.FlaxCLIPVisionModel
(
config: CLIPVisionConfig
input_shape: typing.Optional[typing.Tuple] = None
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
__call__
(
pixel_values
params: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (numpy.ndarray of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPVisionConfig'>) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxCLIPVisionPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from PIL import Image
import requests
from transformers import AutoProcessor, FlaxCLIPVisionModel
model = FlaxCLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="np")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooler_output = outputs.pooler_output # pooled CLS states
DiT
Overview
DiT was proposed in DiT: Self-supervised Pre-training for Document Image Transformer by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
DiT applies the self-supervised objective of BEiT (BERT pre-training of Image Transformers) to 42 million document images, allowing for state-of-the-art results on tasks including:
document image classification: the RVL-CDIP dataset (a collection of
400,000 images belonging to one of 16 classes).
document layout analysis: the PubLayNet dataset (a collection of more
than 360,000 document images constructed by automatically parsing PubMed XML files).
table detection: the ICDAR 2019 cTDaR dataset (a collection of
600 training images and 240 testing images).
The abstract from the paper is the following:
Image Transformer has recently achieved significant progress for natural image understanding, either using supervised (ViT, DeiT, etc.) or self-supervised (BEiT, MAE, etc.) pre-training techniques. In this paper, we propose DiT, a self-supervised pre-trained Document Image Transformer model using large-scale unlabeled text images for Document AI tasks, which is essential since no supervised counterparts ever exist due to the lack of human labeled document images. We leverage DiT as the backbone network in a variety of vision-based Document AI tasks, including document image classification, document layout analysis, as well as table detection. Experiment results have illustrated that the self-supervised pre-trained DiT model achieves new state-of-the-art results on these downstream tasks, e.g. document image classification (91.11 → 92.69), document layout analysis (91.0 → 94.9) and table detection (94.23 → 96.55).
Summary of the approach. Taken from the [original paper](https://arxiv.org/abs/2203.02378).
One can directly use the weights of DiT with the AutoModel API:
from transformers import AutoModel
model = AutoModel.from_pretrained("microsoft/dit-base")
This will load the model pre-trained on masked image modeling. Note that this won’t include the language modeling head on top, used to predict visual tokens.
To include the head, you can load the weights into a BeitForMaskedImageModeling model, like so:
from transformers import BeitForMaskedImageModeling
model = BeitForMaskedImageModeling.from_pretrained("microsoft/dit-base")
You can also load a fine-tuned model from the hub, like so:
from transformers import AutoModelForImageClassification
model = AutoModelForImageClassification.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")
This particular checkpoint was fine-tuned on RVL-CDIP, an important benchmark for document image classification.
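To actually run document image classification with this checkpoint, a full pass could look roughly like the following sketch. The image path is a placeholder, and AutoImageProcessor is assumed to resolve to the checkpoint's preprocessor:
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")
model = AutoModelForImageClassification.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")
image = Image.open("document.png").convert("RGB")  # placeholder path to a document image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
predicted_label = model.config.id2label[logits.argmax(-1).item()]  # one of the 16 RVL-CDIP classes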
A notebook that illustrates inference for document image classification can be found here.
As DiT’s architecture is equivalent to that of BEiT, one can refer to BEiT’s documentation page for all tips, code examples and notebooks.
This model was contributed by nielsr. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DiT.
Image Classification
BeitForImageClassification is supported by this example script and notebook.
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Reformer
DISCLAIMER: This model is still a work in progress; if you see something strange, file a GitHub issue.
Overview
The Reformer model was proposed in the paper Reformer: The Efficient Transformer by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
The abstract from the paper is the following:
Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can
be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of
Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its
complexity from O(L^2) to O(Llog(L)), where L is the length of the sequence. Furthermore, we use reversible residual
layers instead of the standard residuals, which allows storing activations only once in the training process instead of
N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models
while being much more memory-efficient and much faster on long sequences.
This model was contributed by patrickvonplaten. The Authors’ code can be
found here.
Tips:
Reformer does not work with torch.nn.DataParallel due to a bug in PyTorch, see issue #36035.
Use Axial position encoding (see below for more details). It’s a mechanism to avoid having a huge positional encoding matrix (when the sequence length is very big) by factorizing it into smaller matrices.
Replace traditional attention by LSH (locality-sensitive hashing) attention (see below for more details). It's a technique to avoid computing the full query-key product in the attention layers.
Avoid storing the intermediate results of each layer by using reversible transformer layers to obtain them during the backward pass (subtracting the residuals from the input of the next layer gives them back) or recomputing them for results inside a given layer (less efficient than storing them but saves memory).
Compute the feedforward operations by chunks and not on the whole batch.
Axial Positional Encodings
Axial Positional Encodings were first implemented in Google's trax library
and developed by the authors of this model's paper. In models that treat very long input sequences, the
conventional position id encodings store an embedding vector of size $d$ (the config.hidden_size) for
every position $i, \ldots, n_s$, with $n_s$ being config.max_embedding_size. This means that having
a sequence length of $n_s = 2^{19} \approx 0.5M$ and a config.hidden_size of $d = 2^{10} \approx 1000$
would result in a position encoding matrix:
$$X_{i,j}, \text{ with } i \in \left[1, \ldots, d\right] \text{ and } j \in \left[1, \ldots, n_s\right]$$
which alone has over 500M parameters to store. Axial positional encodings factorize $X_{i,j}$ into two matrices:
$$X^{1}_{i,j}, \text{ with } i \in \left[1, \ldots, d^1\right] \text{ and } j \in \left[1, \ldots, n_s^1\right]$$
and
$$X^{2}_{i,j}, \text{ with } i \in \left[1, \ldots, d^2\right] \text{ and } j \in \left[1, \ldots, n_s^2\right]$$
with:
$$d = d^1 + d^2 \quad \text{and} \quad n_s = n_s^1 \times n_s^2 .$$
Therefore the following holds:
$$X_{i,j} = \begin{cases} X^{1}_{i, k}, & \text{if } i < d^1 \text{ with } k = j \bmod n_s^1 \\ X^{2}_{i - d^1, l}, & \text{if } i \ge d^1 \text{ with } l = \lfloor j / n_s^1 \rfloor \end{cases}$$
Intuitively, this means that a position embedding vector $x_j \in \mathbb{R}^{d}$ is now the composition of two
factorized embedding vectors, $x^1_{k, l} + x^2_{l, k}$, where the config.max_embedding_size dimension
$j$ is factorized into $k$ and $l$. This design ensures that each position embedding vector
$x_j$ is unique.
Using the above example again, axial position encoding with $d^1 = 2^9, d^2 = 2^9, n_s^1 = 2^9, n_s^2 = 2^{10}$
can drastically reduce the number of parameters from 500,000,000 to $2^{18} + 2^{19} \approx 780{,}000$ parameters; this means 85% less memory usage.
In practice, the parameter config.axial_pos_embds_dim is set to a tuple $(d^1, d^2)$ whose sum has to be
equal to config.hidden_size, and config.axial_pos_shape is set to a tuple $(n_s^1, n_s^2)$ whose
product has to be equal to config.max_embedding_size, which during training has to be equal to the sequence
length of the input_ids.
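As a minimal sketch, using the default values documented in the ReformerConfig section below (which already satisfy these constraints), the axial parameters could be configured like this:
from transformers import ReformerConfig, ReformerModel

# 64 + 192 = 256 matches hidden_size; 64 * 64 = 4096 matches the maximum sequence length
config = ReformerConfig(
    hidden_size=256,
    max_position_embeddings=4096,
    axial_pos_embds=True,
    axial_pos_shape=[64, 64],
    axial_pos_embds_dim=[64, 192],
)
model = ReformerModel(config)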
LSH Self Attention
In Locality sensitive hashing (LSH) self attention the key and query projection weights are tied. Therefore, the key
query embedding vectors are also tied. LSH self attention uses the locality sensitive hashing mechanism proposed in
Practical and Optimal LSH for Angular Distance to assign each of the tied key
query embedding vectors to one of config.num_buckets possible buckets. The premise is that the more “similar”
key query embedding vectors (in terms of cosine similarity) are to each other, the more likely they are assigned to
the same bucket.
The accuracy of the LSH mechanism can be improved by increasing config.num_hashes or directly the argument
num_hashes of the forward function so that the output of the LSH self attention better approximates the output
of the “normal” full self attention. The buckets are then sorted and chunked into query key embedding vector chunks
each of length config.lsh_chunk_length. For each chunk, the query embedding vectors attend to its key vectors
(which are tied to themselves) and to the key embedding vectors of config.lsh_num_chunks_before previous
neighboring chunks and config.lsh_num_chunks_after following neighboring chunks.
For more information, see the original Paper or this great blog post.
Note that config.num_buckets can also be factorized into a list $(n_{\text{buckets}}^1, n_{\text{buckets}}^2)$. This way, instead of assigning the query key embedding vectors to one of $(1, \ldots, n_{\text{buckets}})$, they are assigned to one of $(1\text{-}1, \ldots, n_{\text{buckets}}^1\text{-}1, \ldots, 1\text{-}n_{\text{buckets}}^2, \ldots, n_{\text{buckets}}^1\text{-}n_{\text{buckets}}^2)$. This is crucial for very long sequences to
save memory.
When training a model from scratch, it is recommended to leave config.num_buckets=None, so that depending on the
sequence length a good value for num_buckets is calculated on the fly. This value will then automatically be
saved in the config and should be reused for inference.
Using LSH self attention, the memory and time complexity of the query-key matmul operation can be reduced from
$\mathcal{O}(n_s \times n_s)$ to $\mathcal{O}(n_s \times \log(n_s))$, which usually represents the memory
and time bottleneck in a transformer model, with $n_s$ being the sequence length.
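As a small illustrative sketch, the LSH-related settings can be controlled through the configuration, and num_hashes can also be overridden per forward call; the values below are arbitrary and only chosen to show the knobs discussed above:
import torch
from transformers import ReformerConfig, ReformerModel

config = ReformerConfig(
    attn_layers=["lsh", "lsh", "lsh", "lsh"],  # use LSH self attention in every layer
    lsh_attn_chunk_length=64,
    lsh_num_chunks_before=1,
    lsh_num_chunks_after=0,
    num_buckets=None,  # recommended when training from scratch: a good value is computed on the fly
    num_hashes=1,
    is_decoder=True,
)
model = ReformerModel(config)
input_ids = torch.randint(0, config.vocab_size, (1, 4096))  # 4096 is a multiple of the chunk length
outputs = model(input_ids, num_hashes=4)  # more hashing rounds -> better approximation of full attention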
Local Self Attention
Local self attention is essentially a "normal" self attention layer with key, query and value projections, but is
chunked so that in each chunk of length config.local_chunk_length the query embedding vectors only attend to
the key embedding vectors in their chunk and to the key embedding vectors of config.local_num_chunks_before
previous neighboring chunks and config.local_num_chunks_after following neighboring chunks.
Using local self attention, the memory and time complexity of the query-key matmul operation can be reduced from
$\mathcal{O}(n_s \times n_s)$ to $\mathcal{O}(n_s \times \log(n_s))$, which usually represents the memory
and time bottleneck in a transformer model, with $n_s$ being the sequence length.
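A comparable sketch for the local attention settings (again with arbitrary illustrative values):
from transformers import ReformerConfig, ReformerModel

config = ReformerConfig(
    attn_layers=["local", "local", "local", "local"],  # use local self attention in every layer
    local_attn_chunk_length=64,
    local_num_chunks_before=1,
    local_num_chunks_after=0,
)
model = ReformerModel(config)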
Training
During training, we must ensure that the sequence length is set to a value that can be divided by the least common
multiple of config.lsh_chunk_length and config.local_chunk_length and that the parameters of the Axial
Positional Encodings are correctly set as described above. Reformer is very memory efficient so that the model can
easily be trained on sequences as long as 64000 tokens.
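As a small sketch of that divisibility check (using the default chunk lengths; the attribute names follow the configuration documented below):
import math
from transformers import ReformerConfig

config = ReformerConfig()
required_multiple = math.lcm(config.lsh_attn_chunk_length, config.local_attn_chunk_length)
print(required_multiple)  # during training, the sequence length must be a multiple of this value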
For training, the ReformerModelWithLMHead should be used as follows:
tokenizer = AutoTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = ReformerModelWithLMHead.from_pretrained("google/reformer-crime-and-punishment")
input_ids = tokenizer.encode("This is a sentence from the training data", return_tensors="pt")
loss = model(input_ids, labels=input_ids)[0]
Documentation resources
Text classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
ReformerConfig
class transformers.ReformerConfig
(
attention_head_size = 64
attn_layers = ['local', 'lsh', 'local', 'lsh', 'local', 'lsh']
axial_norm_std = 1.0
axial_pos_embds = True
axial_pos_shape = [64, 64]
axial_pos_embds_dim = [64, 192]
chunk_size_lm_head = 0
eos_token_id = 2
feed_forward_size = 512
hash_seed = None
hidden_act = 'relu'
hidden_dropout_prob = 0.05
hidden_size = 256
initializer_range = 0.02
is_decoder = False
layer_norm_eps = 1e-12
local_num_chunks_before = 1
local_num_chunks_after = 0
local_attention_probs_dropout_prob = 0.05
local_attn_chunk_length = 64
lsh_attn_chunk_length = 64
lsh_attention_probs_dropout_prob = 0.0
lsh_num_chunks_before = 1
lsh_num_chunks_after = 0
max_position_embeddings = 4096
num_attention_heads = 12
num_buckets = None
num_hashes = 1
pad_token_id = 0
vocab_size = 320
tie_word_embeddings = False
use_cache = True
classifier_dropout = None
**kwargs
)
Parameters
attention_head_size (int, optional, defaults to 64) —
Dimensionality of the projected key, query and value vectors
attn_layers (List[str], optional, defaults to ["local", "lsh", "local", "lsh", "local", "lsh"]) —
List of attention layer types in ascending order. It can be chosen between a LSHSelfAttention layer
("lsh") and a LocalSelfAttention layer ("local").
For more information on LSHSelfAttention layer, see LSH Self Attention. For
more information on LocalSelfAttention layer, see Local Self Attention.
axial_pos_embds (bool, optional, defaults to True) —
Whether or not to use axial position embeddings. For more information on how axial position embeddings
work, see Axial Position Encodings.
axial_norm_std (float, optional, defaults to 1.0) —
The standard deviation of the normal_initializer for initializing the weight matrices of the axial
positional encodings.
axial_pos_shape (List[int], optional, defaults to [64, 64]) —
The position dims of the axial position encodings. During training, the product of the position dims has to
be equal to the sequence length.
For more information on how axial position embeddings work, see Axial Position
Encodings.
axial_pos_embds_dim (List[int], optional, defaults to [64, 192]) —
The embedding dims of the axial position encodings. The sum of the embedding dims has to be equal to the
hidden size.
For more information on how axial position embeddings work, see Axial Position
Encodings.
chunk_size_lm_head (int, optional, defaults to 0) —
The chunk size of the final language model feed forward head layer. A chunk size of 0 means that the feed
forward layer is not chunked. A chunk size of n means that the feed forward layer processes n <
sequence_length embeddings at a time.
For more information on feed forward chunking, see How does Feed Forward Chunking
work?.
eos_token_id (int, optional, defaults to 2) —
The token id for the end-of-sentence token.
feed_forward_size (int, optional, defaults to 512) —
Dimensionality of the feed_forward layer in the residual attention block.
hash_seed (int, optional) —
Seed that can be used to make locality sensitive hashing in LSHSelfAttention deterministic. This should only
be set for testing purposes. For evaluation and training purposes, hash_seed should be left as None to
ensure fully random rotations in the locality sensitive hashing scheme.
hidden_act (str or Callable, optional, defaults to "relu") —
The non-linear activation function (function or string) in the feed forward layer in the residual attention
block. If string, "gelu", "relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.05) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
hidden_size (int, optional, defaults to 256) —
Dimensionality of the output hidden states of the residual attention blocks.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
is_decoder (bool, optional, defaults to False) —
Whether or not to use a causal mask in addition to the attention_mask passed to ReformerModel. When
using the Reformer for causal language modeling, this argument should be set to True.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
local_chunk_length (int, optional, defaults to 64) —
Length of chunk which attends to itself in LocalSelfAttention. Chunking reduces memory complexity from
sequence length x sequence length (self attention) to chunk length x chunk length x sequence length / chunk
length (chunked self attention).
local_num_chunks_before (int, optional, defaults to 1) —
Number of previous neighbouring chunks to attend to in the LocalSelfAttention layer, in addition to the chunk itself.
local_num_chunks_after (int, optional, defaults to 0) —
Number of following neighbouring chunks to attend to in LocalSelfAttention layer in addition to itself.
local_attention_probs_dropout_prob (float, optional, defaults to 0.05) —
The dropout ratio for the attention probabilities in LocalSelfAttention.
lsh_attn_chunk_length (int, optional, defaults to 64) —
Length of chunk which attends to itself in LSHSelfAttention. Chunking reduces memory complexity from
sequence length x sequence length (self attention) to chunk length x chunk length x sequence length / chunk
length (chunked self attention).
lsh_num_chunks_before (int, optional, defaults to 1) —
Number of previous neighbouring chunks to attend to in the LSHSelfAttention layer, in addition to the chunk itself.
lsh_num_chunks_after (int, optional, defaults to 0) —
Number of following neighbouring chunks to attend to in the LSHSelfAttention layer, in addition to the chunk itself.
lsh_attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities in LSHSelfAttention.
max_position_embeddings (int, optional, defaults to 4096) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
num_buckets (int or List[int], optional) —
Number of buckets the key query vectors can be "hashed into" using the locality sensitive hashing scheme.
Each query key vector is hashed into a hash in 1, ..., num_buckets. The number of buckets can also be
factorized into a list for improved memory complexity. In this case, each query key vector is hashed into a
hash in 1-1, 1-2, ..., num_buckets[0]-1, ..., num_buckets[0]-num_buckets[1] if num_buckets is
factorized into two factors. The number of buckets (or the product of the factors) should approximately equal
sequence length / lsh_chunk_length. If num_buckets is not set, a good value is calculated on the fly.
num_hashes (int, optional, defaults to 1) —
Number of hashing rounds (e.g., number of random rotations) in the locality sensitive hashing scheme. The higher
num_hashes, the more accurate the LSHSelfAttention becomes, but also the more memory and time intensive
the hashing becomes.
pad_token_id (int, optional, defaults to 0) —
The token id for the padding token.
vocab_size (int, optional, defaults to 320) —
Vocabulary size of the Reformer model. Defines the number of different tokens that can be represented by
the inputs_ids passed when calling ReformerModel.
tie_word_embeddings (bool, optional, defaults to False) —
Whether to tie input and output embeddings.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
This is the configuration class to store the configuration of a ReformerModel. It is used to instantiate a
Reformer model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the Reformer
google/reformer-crime-and-punishment architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import ReformerConfig, ReformerModel
# Initializing a Reformer configuration
configuration = ReformerConfig()
# Initializing a Reformer model (with random weights)
model = ReformerModel(configuration)
# Accessing the model configuration
configuration = model.config
ReformerTokenizer
class transformers.ReformerTokenizer
(
vocab_file
eos_token = '</s>'
unk_token = '<unk>'
additional_special_tokens = []
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
additional_special_tokens (List[str], optional) —
Additional special tokens used by the tokenizer.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Construct a Reformer tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
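For instance, subword regularization can be enabled by forwarding SentencePiece sampling options through sp_model_kwargs. This is a hedged sketch; the sampling values below are arbitrary:
from transformers import ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained(
    "google/reformer-crime-and-punishment",
    sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
)
# with sampling enabled, repeated calls may produce different segmentations of the same text
print(tokenizer.tokenize("This is a sentence from the training data"))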
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
ReformerTokenizerFast
class transformers.ReformerTokenizerFast
(
vocab_file = None
tokenizer_file = None
eos_token = '</s>'
unk_token = '<unk>'
additional_special_tokens = []
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
additional_special_tokens (List[str], optional) —
Additional special tokens used by the tokenizer.
Construct a “fast” Reformer tokenizer (backed by HuggingFace’s tokenizers library). Based on
Unigram.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
ReformerModel
class transformers.ReformerModel
(
config
)
Parameters
config (ReformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Reformer Model transformer outputting raw hidden-states without any specific head on top.
Reformer was proposed in Reformer: The Efficient Transformer by Nikita Kitaev,
Łukasz Kaiser, Anselm Levskaya.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
num_hashes: typing.Optional[int] = None
past_buckets_states: typing.Optional[typing.List[typing.Tuple[torch.Tensor]]] = None
use_cache: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.reformer.modeling_reformer.ReformerModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. During training the input_ids sequence_length has to be
a multiple of the relevant model’s chunk lengths (lsh’s, local’s or both). During evaluation, the indices
are automatically padded to be a multiple of the chunk length.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
num_hashes (int, optional) —
The number of hashing rounds that should be performed during bucketing. Setting this argument overwrites
the default defined in config.num_hashes.
For more information, see num_hashes in ReformerConfig.
past_buckets_states (List[Tuple(torch.LongTensor, torch.FloatTensor)], optional) —
List of Tuple(torch.LongTensor, torch.FloatTensor) of length config.n_layers, with the first element
being the previous buckets of shape (batch_size, num_heads, num_hashes, sequence_length) and the
second being the previous hidden_states of shape (batch_size, sequence_length, hidden_size).
Contains precomputed hidden-states and buckets (only relevant for LSH Self-Attention). Can be used to speed
up sequential decoding.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.reformer.modeling_reformer.ReformerModelOutput or tuple(torch.FloatTensor)
A transformers.models.reformer.modeling_reformer.ReformerModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ReformerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_predict, hidden_size)) — Sequence of hidden-states at the last layer of the model.
num_predict corresponds to target_mapping.shape[1]. If target_mapping is None, then num_predict
corresponds to sequence_length.
past_buckets_states (List[Tuple(torch.LongTensor, torch.FloatTensor)], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of Tuple(torch.LongTensor, torch.FloatTensor) of length config.n_layers, with the first element
being the previous buckets of shape (batch_size, num_heads, num_hashes, sequence_length) and the
second being the previous hidden_states of shape (batch_size, sequence_length, hidden_size).
Contains precomputed buckets and hidden-states that can be used (see past_buckets_states input) to speed
up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings and one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ReformerModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ReformerModel
import torch
tokenizer = AutoTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = ReformerModel.from_pretrained("google/reformer-crime-and-punishment")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
ReformerModelWithLMHead
class transformers.ReformerModelWithLMHead
<
source
>
(
config
)
Parameters
config (ReformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Reformer Model with a language modeling head on top.
Reformer was proposed in Reformer: The Efficient Transformer by Nikita Kitaev,
Łukasz Kaiser, Anselm Levskaya.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
num_hashes: typing.Optional[int] = None
past_buckets_states: typing.Optional[typing.List[typing.Tuple[torch.Tensor]]] = None
use_cache: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. During training the input_ids sequence_length has to be
a multiple of the relevant model’s chunk lengths (lsh’s, local’s or both). During evaluation, the indices
are automatically padded to be a multiple of the chunk length.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
num_hashes (int, optional) —
The number of hashing rounds that should be performed during bucketing. Setting this argument overwrites
the default defined in config.num_hashes.
For more information, see num_hashes in ReformerConfig.
past_buckets_states (List[Tuple(torch.LongTensor, torch.FloatTensor)], optional) —
List of Tuple(torch.LongTensor, torch.FloatTensor) of length config.n_layers, with the first element
being the previous buckets of shape (batch_size, num_heads, num_hashes, sequence_length) and the
second being the previous hidden_states of shape (batch_size, sequence_length, hidden_size).
Contains precomputed hidden-states and buckets (only relevant for LSH Self-Attention). Can be used to speed
up sequential decoding.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked), the loss is only computed for
labels in [0, ..., config.vocab_size]
Returns
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ReformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ReformerModelWithLMHead forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, ReformerModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = ReformerModelWithLMHead.from_pretrained("google/reformer-crime-and-punishment")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
logits = outputs.logits
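Since this head is a causal language model, the same checkpoint can also be used with generate(); a brief sketch building on the snippet above, with arbitrary sampling parameters:
# continue the text started by the prompt; sampling settings are illustrative only
generated_ids = model.generate(inputs["input_ids"], do_sample=True, max_length=100)
print(tokenizer.decode(generated_ids[0]))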
ReformerForMaskedLM
class transformers.ReformerForMaskedLM
(
config
)
Parameters
config (ReformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Reformer Model with a language modeling head on top.
Reformer was proposed in Reformer: The Efficient Transformer by Nikita Kitaev,
Łukasz Kaiser, Anselm Levskaya.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
num_hashes: typing.Optional[int] = None
labels: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. During training the input_ids sequence_length has to be
a multiple of the relevant model’s chunk lengths (lsh’s, local’s or both). During evaluation, the indices
are automatically padded to be a multiple of the chunk length.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
num_hashes (int, optional) —
The number of hashing rounds that should be performed during bucketing. Setting this argument overwrites
the default defined in config.num_hashes.
For more information, see num_hashes in ReformerConfig.
past_buckets_states (List[Tuple(torch.LongTensor, torch.FloatTensor)], optional) —
List of Tuple(torch.LongTensor, torch.FloatTensor) of length config.n_layers, with the first element
being the previous buckets of shape (batch_size, num_heads, num_hashes, sequence_length) and the
second being the previous hidden_states of shape (batch_size, sequence_length, hidden_size).
Contains precomputed hidden-states and buckets (only relevant for LSH Self-Attention). Can be used to speed
up sequential decoding.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked),
the loss is only computed for the tokens with labels
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ReformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ReformerForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
This example uses a dummy checkpoint, since there is no pretrained model available for the masked language
modeling task with the Reformer architecture.
Example:
import torch
from transformers import AutoTokenizer, ReformerForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-reformer")
model = ReformerForMaskedLM.from_pretrained("hf-internal-testing/tiny-random-reformer")
# add mask_token
tokenizer.add_special_tokens({"mask_token": "[MASK]"})
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
# resize model's embedding matrix
model.resize_token_embeddings(new_num_tokens=model.config.vocab_size + 1)
with torch.no_grad():
... logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
predicted_token = tokenizer.decode(predicted_token_id)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(
    inputs.input_ids == tokenizer.mask_token_id, labels[:, : inputs["input_ids"].shape[-1]], -100
)
outputs = model(**inputs, labels=labels)
loss = round(outputs.loss.item(), 2)
ReformerForSequenceClassification
class transformers.ReformerForSequenceClassification
(
config
)
Parameters
config (ReformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Reformer Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
Reformer was proposed in Reformer: The Efficient Transformer by Nikita Kitaev,
Łukasz Kaiser, Anselm Levskaya.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
num_hashes: typing.Optional[int] = None
labels: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. During training the input_ids sequence_length has to be
a multiple of the relevant model’s chunk lengths (lsh’s, local’s or both). During evaluation, the indices
are automatically padded to be a multiple of the chunk length.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
num_hashes (int, optional) —
The number of hashing rounds that should be performed during bucketing. Setting this argument overwrites
the default defined in config.num_hashes.
For more information, see num_hashes in ReformerConfig.
past_buckets_states (List[Tuple(torch.LongTensor, torch.FloatTensor)], optional) —
List of Tuple(torch.LongTensor, torch.FloatTensor) of length config.n_layers, with the first element
being the previous buckets of shape (batch_size, num_heads, num_hashes, sequence_length) and the
second being the previous hidden_states of shape (batch_size, sequence_length, hidden_size).
Contains precomputed hidden-states and buckets (only relevant for LSH Self-Attention). Can be used to speed
up sequential decoding.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ReformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ReformerForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, ReformerForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = ReformerForSequenceClassification.from_pretrained("google/reformer-crime-and-punishment")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
label = model.config.id2label[predicted_class_id]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = ReformerForSequenceClassification.from_pretrained(
    "google/reformer-crime-and-punishment", num_labels=num_labels
)
labels = torch.tensor(1)
loss = model(**inputs, labels=labels).loss
ReformerForQuestionAnswering
class transformers.ReformerForQuestionAnswering
(
config
)
Parameters
config (ReformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Reformer Model with a span classification head on top for extractive question-answering tasks like SQuAD / TriviaQA
(a linear layer on top of the hidden-states output to compute span start logits and span end logits).
Reformer was proposed in Reformer: The Efficient Transformer by Nikita Kitaev,
Łukasz Kaiser, Anselm Levskaya.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
num_hashes: typing.Optional[int] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. During training the input_ids sequence_length has to be
a multiple of the relevant model’s chunk lengths (lsh’s, local’s or both). During evaluation, the indices
are automatically padded to be a multiple of the chunk length.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
num_hashes (int, optional) —
The number of hashing rounds that should be performed during bucketing. Setting this argument overwrites
the default defined in config.num_hashes.
For more information, see num_hashes in ReformerConfig.
past_buckets_states (List[Tuple(torch.LongTensor, torch.FloatTensor)], optional) —
List of Tuple(torch.LongTensor, torch.FloatTensor) of length config.n_layers, with the first element
being the previous buckets of shape (batch_size, num_heads, num_hashes, sequence_length) and the
second being the previous hidden_states of shape (batch_size, sequence_length, hidden_size).
Contains precomputed hidden-states and buckets (only relevant for LSH Self-Attention). Can be used to speed
up sequential decoding.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ReformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ReformerForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ReformerForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = ReformerForQuestionAnswering.from_pretrained("google/reformer-crime-and-punishment")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
NLLB
DISCLAIMER: The default behaviour for the tokenizer has recently been fixed (and thus changed)!
The previous version added [self.eos_token_id, self.cur_lang_code] at the end of the token sequence for both target and source tokenization. This is wrong, as the NLLB paper mentions (page 48, 6.1.1. Model Architecture):
Note that we prefix the source sequence with the source language, as opposed to the target
language as previously done in several works (Arivazhagan et al., 2019; Johnson et al.,
2017). This is primarily because we prioritize optimizing zero-shot performance of our
model on any pair of 200 languages at a minor cost to supervised performance.
Previous behaviour:
from transformers import NllbTokenizer
tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
tokenizer("How was your day?").input_ids
# [13374, 1398, 4260, 4039, 248130, 2, 256047]
# 2: '</s>'
# 256047: 'eng_Latn'
New behaviour:
from transformers import NllbTokenizer
tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
tokenizer("How was your day?").input_ids
# [256047, 13374, 1398, 4260, 4039, 248130, 2]
Enabling the old behaviour can be done as follows:
from transformers import NllbTokenizer
tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", legacy_behaviour=True)
For more details, feel free to check the linked PR and Issue.
Overview of NLLB
The NLLB model was presented in No Language Left Behind: Scaling Human-Centered Machine Translation by Marta R. Costa-jussà, James Cross, Onur Çelebi,
Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula,
Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews,
Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers,
Safiyyah Saleem, Holger Schwenk, and Jeff Wang.
The abstract of the paper is the following:
Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today.
However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the
200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by
first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed
at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of
Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training
improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using
a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety.
Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system.
This implementation contains the dense models available at release.
The sparse model NLLB-MoE (Mixture of Experts) is now available! More details here.
This model was contributed by Lysandre. The authors’ code can be found here.
Generating with NLLB
While generating the target text, set the forced_bos_token_id to the target language id. The following
example shows how to translate English to French using the facebook/nllb-200-distilled-600M model.
Note that we’re using the BCP-47 code for French, fra_Latn. See here
for the list of all BCP-47 codes in the Flores-200 dataset.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
article = "UN Chief says there is no military solution in Syria"
inputs = tokenizer(article, return_tensors="pt")
translated_tokens = model.generate(
    **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"], max_length=30
)
tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
# Le chef de l'ONU dit qu'il n'y a pas de solution militaire en Syrie
Generating from any other language than English
English (eng_Latn) is set as the default language from which to translate. In order to specify that you’d like to translate from a different language,
you should specify the BCP-47 code in the src_lang keyword argument of the tokenizer initialization.
See the example below for a translation from Romanian to German:
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
    "facebook/nllb-200-distilled-600M", use_auth_token=True, src_lang="ron_Latn"
)
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M", use_auth_token=True)
article = "Şeful ONU spune că nu există o soluţie militară în Siria"
inputs = tokenizer(article, return_tensors="pt")
translated_tokens = model.generate(
    **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["deu_Latn"], max_length=30
)
tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
# UN-Chef sagt, es gibt keine militärische Lösung in Syrien
Documentation resources
Translation task guide
Summarization task guide
NllbTokenizer
class transformers.NllbTokenizer
(
vocab_file
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
tokenizer_file = None
src_lang = None
tgt_lang = None
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
additional_special_tokens = None
legacy_behaviour = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenizer_file (str, optional) —
The path to a tokenizer file to use instead of the vocab file.
src_lang (str, optional) —
The language to use as source language for translation.
tgt_lang (str, optional) —
The language to use as target language for translation.
sp_model_kwargs (Dict[str, Any], optional) —
Additional keyword arguments passed to the underlying SentencePiece processor when it is initialized.
Construct an NLLB tokenizer.
Adapted from RobertaTokenizer and XLNetTokenizer. Based on
SentencePiece.
In the default (non-legacy) mode, the tokenization method is <language code> <tokens> <eos> for both source and target language documents; with legacy_behaviour=True, the language code is appended after <eos> instead.
Examples:
from transformers import NllbTokenizer
tokenizer = NllbTokenizer.from_pretrained(
    "facebook/nllb-200-distilled-600M", src_lang="eng_Latn", tgt_lang="fra_Latn"
)
example_english_phrase = " UN Chief Says There Is No Military Solution in Syria"
expected_translation_french = "Le chef de l'ONU affirme qu'il n'y a pas de solution militaire en Syrie."
inputs = tokenizer(example_english_phrase, text_target=expected_translation_french, return_tensors="pt")
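As a quick check (a minimal sketch, assuming the default non-legacy behaviour of this checkpoint), you can convert the encoded ids back to tokens to see where the language codes and the </s> token are placed:
print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist()))
# the source sequence carries the eng_Latn code and </s> in addition to the text pieces
print(tokenizer.convert_ids_to_tokens(inputs["labels"][0].tolist()))
# the target sequence carries the fra_Latn code and </s>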
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. In the default (non-legacy) mode, an NLLB sequence has the following format, where X represents the sequence:
input_ids (for encoder): [src_lang_code] X [eos]
decoder_input_ids (for decoder): [tgt_lang_code] X [eos]
In legacy mode, the language code follows eos instead: X [eos, lang_code].
BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a
separator.
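For illustration, here is a minimal sketch of calling the method directly on plain token ids (assuming the facebook/nllb-200-distilled-600M checkpoint and the default behaviour):
from transformers import NllbTokenizer

tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", src_lang="eng_Latn")
token_ids = tokenizer("How was your day?", add_special_tokens=False).input_ids
with_special_tokens = tokenizer.build_inputs_with_special_tokens(token_ids)
print(tokenizer.convert_ids_to_tokens(with_special_tokens))
# the eng_Latn code and </s> are added to the raw tokens; their exact placement depends on legacy_behaviour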
NllbTokenizerFast
class transformers.NllbTokenizerFast
(
vocab_file = None
tokenizer_file = None
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
src_lang = None
tgt_lang = None
additional_special_tokens = None
legacy_behaviour = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenizer_file (str, optional) —
The path to a tokenizer file to use instead of the vocab file.
src_lang (str, optional) —
The language to use as source language for translation.
tgt_lang (str, optional) —
The language to use as target language for translation.
Construct a “fast” NLLB tokenizer (backed by HuggingFace’s tokenizers library). Based on
BPE.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
In the default (non-legacy) mode, the tokenization method is <language code> <tokens> <eos> for both source and target language documents; with legacy_behaviour=True, the language code is appended after <eos> instead.
Examples:
from transformers import NllbTokenizerFast
tokenizer = NllbTokenizerFast.from_pretrained(
    "facebook/nllb-200-distilled-600M", src_lang="eng_Latn", tgt_lang="fra_Latn"
)
example_english_phrase = " UN Chief Says There Is No Military Solution in Syria"
expected_translation_french = "Le chef de l'ONU affirme qu'il n'y a pas de solution militaire en Syrie."
inputs = tokenizer(example_english_phrase, text_target=expected_translation_french, return_tensors="pt")
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
list of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. The special tokens depend on calling set_lang.
In the default (non-legacy) mode, an NLLB sequence has the following format, where X represents the sequence:
input_ids (for encoder): [src_lang_code] X [eos]
decoder_input_ids (for decoder): [tgt_lang_code] X [eos]
In legacy mode, the language code follows eos instead: X [eos, lang_code].
BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a
separator.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. NLLB does not
make use of token type ids, therefore a list of zeros is returned.
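A minimal sketch (assuming the facebook/nllb-200-distilled-600M checkpoint) showing that only zeros come back:
from transformers import NllbTokenizerFast

tokenizer = NllbTokenizerFast.from_pretrained("facebook/nllb-200-distilled-600M")
ids_a = tokenizer("How was your day?", add_special_tokens=False).input_ids
ids_b = tokenizer("It was great.", add_special_tokens=False).input_ids
print(tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b))
# a list of zeros, since NLLB does not use token type ids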
set_src_lang_special_tokens
(
src_lang
)
Reset the special tokens to the source lang setting.
In legacy mode: no prefix and suffix=[eos, src_lang_code].
In default mode: prefix=[src_lang_code] and suffix=[eos].
set_tgt_lang_special_tokens
(
lang: str
)
Reset the special tokens to the target lang setting.
In legacy mode: no prefix and suffix=[eos, tgt_lang_code].
In default mode: prefix=[tgt_lang_code] and suffix=[eos].
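As a minimal sketch (assuming the fast tokenizer and the facebook/nllb-200-distilled-600M checkpoint), assigning a new src_lang updates these special tokens, which you can verify by inspecting an encoded sequence:
from transformers import NllbTokenizerFast

tokenizer = NllbTokenizerFast.from_pretrained("facebook/nllb-200-distilled-600M")
tokenizer.src_lang = "ron_Latn"  # triggers set_src_lang_special_tokens
ids = tokenizer("Şeful ONU spune că nu există o soluţie militară în Siria").input_ids
print(tokenizer.convert_ids_to_tokens(ids))
# in default mode the sequence starts with ron_Latn and ends with </s>; in legacy mode both appear at the end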
X-CLIP
Overview
The X-CLIP model was proposed in Expanding Language-Image Pretrained Models for General Video Recognition by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling.
X-CLIP is a minimal extension of CLIP for video. The model consists of a text encoder, a cross-frame vision encoder, a multi-frame integration Transformer, and a video-specific prompt generator.
The abstract from the paper is the following:
Contrastive language-image pretraining has shown great success in learning visual-textual joint representation from web-scale data, demonstrating remarkable “zero-shot” generalization ability for various image tasks. However, how to effectively expand such new language-image pretraining methods to video domains is still an open problem. In this work, we present a simple yet effective approach that adapts the pretrained language-image models to video recognition directly, instead of pretraining a new model from scratch. More concretely, to capture the long-range dependencies of frames along the temporal dimension, we propose a cross-frame attention mechanism that explicitly exchanges information across frames. Such module is lightweight and can be plugged into pretrained language-image models seamlessly. Moreover, we propose a video-specific prompting scheme, which leverages video content information for generating discriminative textual prompts. Extensive experiments demonstrate that our approach is effective and can be generalized to different video recognition scenarios. In particular, under fully-supervised settings, our approach achieves a top-1 accuracy of 87.1% on Kinectics-400, while using 12 times fewer FLOPs compared with Swin-L and ViViT-H. In zero-shot experiments, our approach surpasses the current state-of-the-art methods by +7.6% and +14.9% in terms of top-1 accuracy under two popular protocols. In few-shot scenarios, our approach outperforms previous best methods by +32.1% and +23.1% when the labeled data is extremely limited.
Tips:
Usage of X-CLIP is identical to CLIP.
X-CLIP architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with X-CLIP.
Demo notebooks for X-CLIP can be found here.
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
XCLIPProcessor
class transformers.XCLIPProcessor
(
image_processor = None
tokenizer = None
**kwargs
)
Parameters
image_processor (VideoMAEImageProcessor) —
The image processor is a required input.
tokenizer (CLIPTokenizerFast) —
The tokenizer is a required input.
Constructs an X-CLIP processor which wraps a VideoMAE image processor and a CLIP tokenizer into a single processor.
XCLIPProcessor offers all the functionalities of VideoMAEImageProcessor and CLIPTokenizerFast. See the
__call__() and decode() methods for more information.
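A minimal sketch of preparing joint text and video inputs with the processor (assuming the microsoft/xclip-base-patch32 checkpoint and 8 dummy RGB frames in place of a real clip):
import numpy as np
from transformers import XCLIPProcessor

processor = XCLIPProcessor.from_pretrained("microsoft/xclip-base-patch32")
# 8 dummy 224x224 RGB frames standing in for a sampled video clip
video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(8)]
inputs = processor(text=["playing sports", "eating spaghetti"], videos=video, return_tensors="pt", padding=True)
print(inputs.keys())  # typically input_ids, attention_mask and pixel_values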
batch_decode
(
*args
**kwargs
)
This method forwards all its arguments to CLIPTokenizerFast’s batch_decode(). Please
refer to the docstring of this method for more information.
decode
(
*args
**kwargs
)
This method forwards all its arguments to CLIPTokenizerFast’s decode(). Please refer to
the docstring of this method for more information.
XCLIPConfig
class transformers.XCLIPConfig
(
text_config = None
vision_config = None
projection_dim = 512
prompt_layers = 2
prompt_alpha = 0.1
prompt_hidden_act = 'quick_gelu'
prompt_num_attention_heads = 8
prompt_attention_dropout = 0.0
prompt_projection_dropout = 0.0
logit_scale_init_value = 2.6592
**kwargs
)
Parameters
text_config (dict, optional) —
Dictionary of configuration options used to initialize XCLIPTextConfig.
vision_config (dict, optional) —
Dictionary of configuration options used to initialize XCLIPVisionConfig.
projection_dim (int, optional, defaults to 512) —
Dimensionality of the text and vision projection layers.
prompt_layers (int, optional, defaults to 2) —
Number of layers in the video specific prompt generator.
prompt_alpha (float, optional, defaults to 0.1) —
Alpha value to use in the video specific prompt generator.
prompt_hidden_act (str or function, optional, defaults to "quick_gelu") —
The non-linear activation function (function or string) in the video specific prompt generator. If string,
"gelu", "relu", "selu" and "gelu_new" `"quick_gelu" are supported.
prompt_num_attention_heads (int, optional, defaults to 8) —
Number of attention heads in the cross-attention of the video specific prompt generator.
prompt_attention_dropout (float, optional, defaults to 0.0) —
The dropout probability for the attention layers in the video specific prompt generator.
prompt_projection_dropout (float, optional, defaults to 0.0) —
The dropout probability for the projection layers in the video specific prompt generator.
logit_scale_init_value (float, optional, defaults to 2.6592) —
The initial value of the logit_scale parameter. Default is used as per the original XCLIP implementation.
kwargs (optional) —
Dictionary of keyword arguments.
XCLIPConfig is the configuration class to store the configuration of an XCLIPModel. It is used to
instantiate an X-CLIP model according to the specified arguments, defining the text model and vision model configs.
Instantiating a configuration with the defaults will yield a similar configuration to that of the X-CLIP
microsoft/xclip-base-patch32 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
from_text_vision_configs
(
text_config: XCLIPTextConfig
vision_config: XCLIPVisionConfig
**kwargs
)
→
XCLIPConfig
Returns
XCLIPConfig
An instance of a configuration object
Instantiate a XCLIPConfig (or a derived class) from xclip text model configuration and xclip vision model
configuration.
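A minimal sketch of building a combined configuration from separately defined text and vision configs and instantiating a model from it (the projection_dim override is shown purely as an illustration):
from transformers import XCLIPConfig, XCLIPTextConfig, XCLIPVisionConfig, XCLIPModel

text_config = XCLIPTextConfig()
vision_config = XCLIPVisionConfig()
config = XCLIPConfig.from_text_vision_configs(text_config, vision_config, projection_dim=512)
model = XCLIPModel(config)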
XCLIPTextConfig
class transformers.XCLIPTextConfig
(
vocab_size = 49408
hidden_size = 512
intermediate_size = 2048
num_hidden_layers = 12
num_attention_heads = 8
max_position_embeddings = 77
hidden_act = 'quick_gelu'
layer_norm_eps = 1e-05
attention_dropout = 0.0
initializer_range = 0.02
initializer_factor = 1.0
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 49408) —
Vocabulary size of the X-CLIP text model. Defines the number of different tokens that can be represented by
the input_ids passed when calling XCLIPModel.
hidden_size (int, optional, defaults to 512) —
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 2048) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Transformer encoder.
max_position_embeddings (int, optional, defaults to 77) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
hidden_act (str or function, optional, defaults to "quick_gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" `"quick_gelu" are supported.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (float, optional, defaults to 1) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
This is the configuration class to store the configuration of a XCLIPModel. It is used to instantiate an X-CLIP
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the X-CLIP
microsoft/xclip-base-patch32 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import XCLIPTextModel, XCLIPTextConfig
# Initializing a XCLIPTextModel with microsoft/xclip-base-patch32 style configuration
configuration = XCLIPTextConfig()
# Initializing a XCLIPTextConfig from the microsoft/xclip-base-patch32 style configuration
model = XCLIPTextModel(configuration)
# Accessing the model configuration
configuration = model.config
XCLIPVisionConfig
class transformers.XCLIPVisionConfig
(
hidden_size = 768
intermediate_size = 3072
num_hidden_layers = 12
num_attention_heads = 12
mit_hidden_size = 512
mit_intermediate_size = 2048
mit_num_hidden_layers = 1
mit_num_attention_heads = 8
num_channels = 3
image_size = 224
patch_size = 32
num_frames = 8
hidden_act = 'quick_gelu'
layer_norm_eps = 1e-05
attention_dropout = 0.0
initializer_range = 0.02
initializer_factor = 1.0
drop_path_rate = 0.0
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
mit_hidden_size (int, optional, defaults to 512) —
Dimensionality of the encoder layers of the Multiframe Integration Transformer (MIT).
mit_intermediate_size (int, optional, defaults to 2048) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Multiframe Integration Transformer
(MIT).
mit_num_hidden_layers (int, optional, defaults to 1) —
Number of hidden layers in the Multiframe Integration Transformer (MIT).
mit_num_attention_heads (int, optional, defaults to 8) —
Number of attention heads for each attention layer in the Multiframe Integration Transformer (MIT).
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 32) —
The size (resolution) of each patch.
hidden_act (str or function, optional, defaults to "quick_gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu", "gelu_new" and `"quick_gelu" are supported.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (float, optional, defaults to 1) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
drop_path_rate (float, optional, defaults to 0.0) —
Stochastic depth rate.
This is the configuration class to store the configuration of a XCLIPModel. It is used to instantiate an X-CLIP
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the X-CLIP
microsoft/xclip-base-patch32 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import XCLIPVisionModel, XCLIPVisionConfig
# Initializing a XCLIPVisionModel with microsoft/xclip-base-patch32 style configuration
configuration = XCLIPVisionConfig()
# Initializing a XCLIPVisionModel model from the microsoft/xclip-base-patch32 style configuration
model = XCLIPVisionModel(configuration)
# Accessing the model configuration
configuration = model.config
XCLIPModel
class transformers.XCLIPModel
(
config: XCLIPConfig
)
Parameters
config (XCLIPConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
return_loss: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.x_clip.modeling_x_clip.XCLIPOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.call() for details.
return_loss (bool, optional) —
Whether or not to return the contrastive loss.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.x_clip.modeling_x_clip.XCLIPOutput or tuple(torch.FloatTensor)
A transformers.models.x_clip.modeling_x_clip.XCLIPOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.x_clip.configuration_x_clip.XCLIPConfig'>) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for video-text similarity.
logits_per_video (torch.FloatTensor of shape (video_batch_size, text_batch_size)) — The scaled dot product scores between video_embeds and text_embeds. This represents the video-text
similarity scores.
logits_per_text (torch.FloatTensor of shape (text_batch_size, video_batch_size)) — The scaled dot product scores between text_embeds and video_embeds. This represents the text-video
similarity scores.
text_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of XCLIPTextModel.
video_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The video embeddings obtained by applying the projection layer to the pooled output of
XCLIPVisionModel.
text_model_output (BaseModelOutputWithPooling) — The output of the XCLIPTextModel.
vision_model_output (BaseModelOutputWithPooling) — The output of the XCLIPVisionModel.
mit_output (BaseModelOutputWithPooling) — The output of XCLIPMultiframeIntegrationTransformer (MIT for short).
The XCLIPModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
import av
import torch
import numpy as np
from transformers import AutoProcessor, AutoModel
from huggingface_hub import hf_hub_download
np.random.seed(0)
def read_video_pyav(container, indices):
    '''
    Decode the video with PyAV decoder.
    Args:
        container (`av.container.input.InputContainer`): PyAV container.
        indices (`List[int]`): List of frame indices to decode.
    Returns:
        result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
    '''
    frames = []
    container.seek(0)
    start_index = indices[0]
    end_index = indices[-1]
    for i, frame in enumerate(container.decode(video=0)):
        if i > end_index:
            break
        if i >= start_index and i in indices:
            frames.append(frame)
    return np.stack([x.to_ndarray(format="rgb24") for x in frames])

def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
    converted_len = int(clip_len * frame_sample_rate)
    end_idx = np.random.randint(converted_len, seg_len)
    start_idx = end_idx - converted_len
    indices = np.linspace(start_idx, end_idx, num=clip_len)
    indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
    return indices
# video clip consists of 300 frames (10 seconds at 30 FPS)
file_path = hf_hub_download(
    repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
)
container = av.open(file_path)
# sample 8 frames
indices = sample_frame_indices(clip_len=8, frame_sample_rate=1, seg_len=container.streams.video[0].frames)
video = read_video_pyav(container, indices)
processor = AutoProcessor.from_pretrained("microsoft/xclip-base-patch32")
model = AutoModel.from_pretrained("microsoft/xclip-base-patch32")
inputs = processor(
    text=["playing sports", "eating spaghetti", "go shopping"],
    videos=list(video),
    return_tensors="pt",
    padding=True,
)
# forward pass
with torch.no_grad():
    outputs = model(**inputs)
logits_per_video = outputs.logits_per_video  # this is the video-text similarity score
probs = logits_per_video.softmax(dim=1)  # we can take the softmax to get the label probabilities
print(probs)
# tensor([[1.9496e-04, 9.9960e-01, 2.0825e-04]])
get_text_features
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
text_features (torch.FloatTensor of shape (batch_size, output_dim))
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
text_features (torch.FloatTensor of shape (batch_size, output_dim))
The text embeddings obtained by applying the projection layer to the pooled output of XCLIPTextModel.
The XCLIPModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("microsoft/xclip-base-patch32")
model = AutoModel.from_pretrained("microsoft/xclip-base-patch32")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
text_features = model.get_text_features(**inputs)
get_video_features
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
video_features (torch.FloatTensor of shape (batch_size, output_dim))
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.call() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
video_features (torch.FloatTensor of shape (batch_size, output_dim))
The video embeddings obtained by applying the projection layer to the pooled output of XCLIPVisionModel and
XCLIPMultiframeIntegrationTransformer.
The XCLIPModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
import av
import torch
import numpy as np
from transformers import AutoProcessor, AutoModel
from huggingface_hub import hf_hub_download
np.random.seed(0)
def read_video_pyav(container, indices):
    '''
    Decode the video with PyAV decoder.
    Args:
        container (`av.container.input.InputContainer`): PyAV container.
        indices (`List[int]`): List of frame indices to decode.
    Returns:
        result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
    '''
    frames = []
    container.seek(0)
    start_index = indices[0]
    end_index = indices[-1]
    for i, frame in enumerate(container.decode(video=0)):
        if i > end_index:
            break
        if i >= start_index and i in indices:
            frames.append(frame)
    return np.stack([x.to_ndarray(format="rgb24") for x in frames])

def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
    converted_len = int(clip_len * frame_sample_rate)
    end_idx = np.random.randint(converted_len, seg_len)
    start_idx = end_idx - converted_len
    indices = np.linspace(start_idx, end_idx, num=clip_len)
    indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
    return indices
# video clip consists of 300 frames (10 seconds at 30 FPS)
file_path = hf_hub_download(
    repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
)
container = av.open(file_path)
# sample 8 frames
indices = sample_frame_indices(clip_len=8, frame_sample_rate=1, seg_len=container.streams.video[0].frames)
video = read_video_pyav(container, indices)
processor = AutoProcessor.from_pretrained("microsoft/xclip-base-patch32")
model = AutoModel.from_pretrained("microsoft/xclip-base-patch32")
inputs = processor(videos=list(video), return_tensors="pt")
video_features = model.get_video_features(**inputs)
XCLIPTextModel
class transformers.XCLIPTextModel
(
config: XCLIPTextConfig
)
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XCLIPTextConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XCLIPTextModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, XCLIPTextModel
model = XCLIPTextModel.from_pretrained("microsoft/xclip-base-patch32")
tokenizer = AutoTokenizer.from_pretrained("microsoft/xclip-base-patch32")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output # pooled (EOS token) states
XCLIPVisionModel
class transformers.XCLIPVisionModel
(
config: XCLIPVisionConfig
)
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See CLIPImageProcessor.__call__() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XCLIPVisionConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XCLIPVisionModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
import av
import torch
import numpy as np
from transformers import AutoProcessor, XCLIPVisionModel
from huggingface_hub import hf_hub_download
np.random.seed(0)
def read_video_pyav(container, indices):
    '''
    Decode the video with PyAV decoder.
    Args:
        container (`av.container.input.InputContainer`): PyAV container.
        indices (`List[int]`): List of frame indices to decode.
    Returns:
        result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
    '''
    frames = []
    container.seek(0)
    start_index = indices[0]
    end_index = indices[-1]
    for i, frame in enumerate(container.decode(video=0)):
        if i > end_index:
            break
        if i >= start_index and i in indices:
            frames.append(frame)
    return np.stack([x.to_ndarray(format="rgb24") for x in frames])

def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
    converted_len = int(clip_len * frame_sample_rate)
    end_idx = np.random.randint(converted_len, seg_len)
    start_idx = end_idx - converted_len
    indices = np.linspace(start_idx, end_idx, num=clip_len)
    indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
    return indices

# video clip consists of 300 frames (10 seconds at 30 FPS)
file_path = hf_hub_download(
    repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
)
container = av.open(file_path)
# sample 8 frames
indices = sample_frame_indices(clip_len=8, frame_sample_rate=1, seg_len=container.streams.video[0].frames)
video = read_video_pyav(container, indices)
processor = AutoProcessor.from_pretrained("microsoft/xclip-base-patch32")
model = XCLIPVisionModel.from_pretrained("microsoft/xclip-base-patch32")
pixel_values = processor(videos=list(video), return_tensors="pt").pixel_values
batch_size, num_frames, num_channels, height, width = pixel_values.shape
pixel_values = pixel_values.reshape(-1, num_channels, height, width)
outputs = model(pixel_values)
last_hidden_state = outputs.last_hidden_state
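Because the frames were folded into the batch dimension above, the pooled outputs come back per frame. The short sketch below is not part of the original example and assumes pooler_output has shape (batch_size * num_frames, hidden_size); it simply reshapes and averages to obtain a naive per-video feature.
pooled_output = outputs.pooler_output  # assumed shape: (batch_size * num_frames, hidden_size)
pooled_output = pooled_output.reshape(batch_size, num_frames, -1)
video_level_feature = pooled_output.mean(dim=1)  # average the frame features per video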
Informer
Overview
The Informer model was proposed in Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang.
Informer introduces a probabilistic (ProbSparse) attention mechanism that attends to the “active” queries rather than the “lazy” ones, yielding a sparse Transformer that mitigates the quadratic compute and memory requirements of vanilla attention.
The abstract from the paper is the following:
Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning. Long sequence time-series forecasting (LSTF) demands a high prediction capacity of the model, which is the ability to capture precise long-range dependency coupling between output and input efficiently. Recent studies have shown the potential of Transformer to increase the prediction capacity. However, there are several severe issues with Transformer that prevent it from being directly applicable to LSTF, including quadratic time complexity, high memory usage, and inherent limitation of the encoder-decoder architecture. To address these issues, we design an efficient transformer-based model for LSTF, named Informer, with three distinctive characteristics: (i) a ProbSparse self-attention mechanism, which achieves O(L logL) in time complexity and memory usage, and has comparable performance on sequences’ dependency alignment. (ii) the self-attention distilling highlights dominating attention by halving cascading layer input, and efficiently handles extreme long input sequences. (iii) the generative style decoder, while conceptually simple, predicts the long time-series sequences at one forward operation rather than a step-by-step way, which drastically improves the inference speed of long-sequence predictions. Extensive experiments on four large-scale datasets demonstrate that Informer significantly outperforms existing methods and provides a new solution to the LSTF problem.
This model was contributed by elisim and kashif.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Check out the Informer blog-post in HuggingFace blog: Multivariate Probabilistic Time Series Forecasting with Informer
InformerConfig
class transformers.InformerConfig
(
prediction_length: typing.Optional[int] = None
context_length: typing.Optional[int] = None
distribution_output: str = 'student_t'
loss: str = 'nll'
input_size: int = 1
lags_sequence: typing.List[int] = None
scaling: typing.Union[str, bool, NoneType] = 'mean'
num_dynamic_real_features: int = 0
num_static_real_features: int = 0
num_static_categorical_features: int = 0
num_time_features: int = 0
cardinality: typing.Optional[typing.List[int]] = None
embedding_dimension: typing.Optional[typing.List[int]] = None
d_model: int = 64
encoder_ffn_dim: int = 32
decoder_ffn_dim: int = 32
encoder_attention_heads: int = 2
decoder_attention_heads: int = 2
encoder_layers: int = 2
decoder_layers: int = 2
is_encoder_decoder: bool = True
activation_function: str = 'gelu'
dropout: float = 0.05
encoder_layerdrop: float = 0.1
decoder_layerdrop: float = 0.1
attention_dropout: float = 0.1
activation_dropout: float = 0.1
num_parallel_samples: int = 100
init_std: float = 0.02
use_cache = True
attention_type: str = 'prob'
sampling_factor: int = 5
distil: bool = True
**kwargs
)
Parameters
prediction_length (int) —
The prediction length for the decoder. In other words, the prediction horizon of the model. This value is
typically dictated by the dataset, and we recommend setting it appropriately.
context_length (int, optional, defaults to prediction_length) —
The context length for the encoder. If None, the context length will be the same as the
prediction_length.
distribution_output (string, optional, defaults to "student_t") —
The distribution emission head for the model. Could be either “student_t”, “normal” or “negative_binomial”.
loss (string, optional, defaults to "nll") —
The loss function for the model corresponding to the distribution_output head. For parametric
distributions it is the negative log likelihood (nll) - which currently is the only supported one.
input_size (int, optional, defaults to 1) —
The size of the target variable which by default is 1 for univariate targets. Would be > 1 in case of
multivariate targets.
scaling (string or bool, optional, defaults to "mean") —
Whether to scale the input targets via “mean” scaler, “std” scaler or no scaler if None. If True, the
scaler is set to “mean”.
lags_sequence (list[int], optional, defaults to [1, 2, 3, 4, 5, 6, 7]) —
The lags of the input time series as covariates often dictated by the frequency of the data. Default is
[1, 2, 3, 4, 5, 6, 7], but we recommend changing it appropriately based on the dataset.
num_time_features (int, optional, defaults to 0) —
The number of time features in the input time series.
num_dynamic_real_features (int, optional, defaults to 0) —
The number of dynamic real valued features.
num_static_categorical_features (int, optional, defaults to 0) —
The number of static categorical features.
num_static_real_features (int, optional, defaults to 0) —
The number of static real valued features.
cardinality (list[int], optional) —
The cardinality (number of different values) for each of the static categorical features. Should be a list
of integers, having the same length as num_static_categorical_features. Cannot be None if
num_static_categorical_features is > 0.
embedding_dimension (list[int], optional) —
The dimension of the embedding for each of the static categorical features. Should be a list of integers,
having the same length as num_static_categorical_features. Cannot be None if
num_static_categorical_features is > 0.
d_model (int, optional, defaults to 64) —
Dimensionality of the transformer layers.
encoder_layers (int, optional, defaults to 2) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 2) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 2) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 2) —
Number of attention heads for each attention layer in the Transformer decoder.
encoder_ffn_dim (int, optional, defaults to 32) —
Dimension of the “intermediate” (often named feed-forward) layer in encoder.
decoder_ffn_dim (int, optional, defaults to 32) —
Dimension of the “intermediate” (often named feed-forward) layer in decoder.
activation_function (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and decoder. If string, "gelu" and
"relu" are supported.
dropout (float, optional, defaults to 0.05) —
The dropout probability for all fully connected layers in the encoder, and decoder.
encoder_layerdrop (float, optional, defaults to 0.1) —
The dropout probability for the attention and fully connected layers for each encoder layer.
decoder_layerdrop (float, optional, defaults to 0.1) —
The dropout probability for the attention and fully connected layers for each decoder layer.
attention_dropout (float, optional, defaults to 0.1) —
The dropout probability for the attention probabilities.
activation_dropout (float, optional, defaults to 0.1) —
The dropout probability used between the two layers of the feed-forward networks.
num_parallel_samples (int, optional, defaults to 100) —
The number of samples to generate in parallel for each time step of inference.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated normal weight initialization distribution.
use_cache (bool, optional, defaults to True) —
Whether to use the past key/values attentions (if applicable to the model) to speed up decoding.
attention_type (str, optional, defaults to “prob”) —
Attention used in encoder. This can be set to “prob” (Informer’s ProbAttention) or “full” (vanilla
transformer’s canonical self-attention).
sampling_factor (int, optional, defaults to 5) —
ProbSparse sampling factor (only takes effect when attention_type="prob"). It is used to control the
reduced query matrix (Q_reduce) input length.
distil (bool, optional, defaults to True) —
Whether to use distilling in encoder.
This is the configuration class to store the configuration of an InformerModel. It is used to instantiate an
Informer model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the Informer
huggingface/informer-tourism-monthly architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import InformerConfig, InformerModel
# Initializing an Informer configuration with 12 time steps for prediction
configuration = InformerConfig(prediction_length=12)
# Randomly initializing a model (with random weights) from the configuration
model = InformerModel(configuration)
# Accessing the model configuration
configuration = model.config
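To make the interaction between context_length, lags_sequence, and the ProbSparse attention options more concrete, here is a hypothetical configuration sketch (not from the original docs). It assumes lags_sequence is populated with the documented default when left unset.
from transformers import InformerConfig
# ProbSparse attention ("prob") is Informer's default; "full" falls back to vanilla self-attention
config = InformerConfig(
    prediction_length=12,
    context_length=24,
    attention_type="prob",
    sampling_factor=5,  # only takes effect with attention_type="prob"
)
# the past window the model expects is context_length + max(lags_sequence)
required_past_length = config.context_length + max(config.lags_sequence)
print(required_past_length)  # 24 + 7 = 31 with the default lags [1, 2, 3, 4, 5, 6, 7]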
InformerModel
class transformers.InformerModel
(
config: InformerConfig
)
Parameters
config (InformerConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare Informer Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
past_values: Tensor
past_time_features: Tensor
past_observed_mask: Tensor
static_categorical_features: typing.Optional[torch.Tensor] = None
static_real_features: typing.Optional[torch.Tensor] = None
future_values: typing.Optional[torch.Tensor] = None
future_time_features: typing.Optional[torch.Tensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
use_cache: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqTSModelOutput or tuple(torch.FloatTensor)
Parameters
past_values (torch.FloatTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, input_size)) —
Past values of the time series, that serve as context in order to predict the future. The sequence size of
this tensor must be larger than the context_length of the model, since the model will use the larger size
to construct lag features, i.e. additional values from the past which are added in order to serve as “extra
context”.
The sequence_length here is equal to config.context_length + max(config.lags_sequence), which if no
lags_sequence is configured, is equal to config.context_length + 7 (as by default, the largest
look-back index in config.lags_sequence is 7). The property _past_length returns the actual length of
the past.
The past_values is what the Transformer encoder gets as input (with optional additional features, such as
static_categorical_features, static_real_features, past_time_features and lags).
Optionally, missing values need to be replaced with zeros and indicated via the past_observed_mask.
For multivariate time series, the input_size > 1 dimension is required and corresponds to the number of
variates in the time series per time step.
past_time_features (torch.FloatTensor of shape (batch_size, sequence_length, num_features)) —
Required time features, which the model internally will add to past_values. These could be things like
“month of year”, “day of the month”, etc. encoded as vectors (for instance as Fourier features). These
could also be so-called “age” features, which basically help the model know “at which point in life” a
time-series is. Age features have small values for distant past time steps and increase monotonically the
more we approach the current time step. Holiday features are also a good example of time features.
These features serve as the “positional encodings” of the inputs. So contrary to a model like BERT, where
the position encodings are learned from scratch internally as parameters of the model, the Time Series
Transformer requires to provide additional time features. The Time Series Transformer only learns
additional embeddings for static_categorical_features.
Additional dynamic real covariates can be concatenated to this tensor, with the caveat that these features
must be known at prediction time.
The num_features here is equal to config.num_time_features + config.num_dynamic_real_features.
past_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, input_size), optional) —
Boolean mask to indicate which past_values were observed and which were missing. Mask values selected in
[0, 1]:
1 for values that are observed,
0 for values that are missing (i.e. NaNs that were replaced by zeros).
static_categorical_features (torch.LongTensor of shape (batch_size, number of static categorical features), optional) —
Optional static categorical features for which the model will learn an embedding, which it will add to the
values of the time series.
Static categorical features are features which have the same value for all time steps (static over time).
A typical example of a static categorical feature is a time series ID.
static_real_features (torch.FloatTensor of shape (batch_size, number of static real features), optional) —
Optional static real features which the model will add to the values of the time series.
Static real features are features which have the same value for all time steps (static over time).
A typical example of a static real feature is promotion information.
future_values (torch.FloatTensor of shape (batch_size, prediction_length) or (batch_size, prediction_length, input_size), optional) —
Future values of the time series, that serve as labels for the model. The future_values is what the
Transformer needs during training to learn to output, given the past_values.
The sequence length here is equal to prediction_length.
See the demo notebook and code snippets for details.
Optionally, during training any missing values need to be replaced with zeros and indicated via the
future_observed_mask.
For multivariate time series, the input_size > 1 dimension is required and corresponds to the number of
variates in the time series per time step.
future_time_features (torch.FloatTensor of shape (batch_size, prediction_length, num_features)) —
Required time features for the prediction window, which the model internally will add to future_values.
These could be things like “month of year”, “day of the month”, etc. encoded as vectors (for instance as
Fourier features). These could also be so-called “age” features, which basically help the model know “at
which point in life” a time-series is. Age features have small values for distant past time steps and
increase monotonically the more we approach the current time step. Holiday features are also a good example
of time features.
These features serve as the “positional encodings” of the inputs. So contrary to a model like BERT, where
the position encodings are learned from scratch internally as parameters of the model, the Time Series
Transformer requires to provide additional time features. The Time Series Transformer only learns
additional embeddings for static_categorical_features.
Additional dynamic real covariates can be concatenated to this tensor, with the caveat that these features
must be known at prediction time.
The num_features here is equal to config.num_time_features + config.num_dynamic_real_features.
future_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, input_size), optional) —
Boolean mask to indicate which future_values were observed and which were missing. Mask values selected
in [0, 1]:
1 for values that are observed,
0 for values that are missing (i.e. NaNs that were replaced by zeros).
This mask is used to filter out missing values for the final loss calculation.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on certain token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Mask to avoid performing attention on certain token indices. By default, a causal mask will be used, to
make sure the model can only look at previous inputs in order to predict the future.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of last_hidden_state, hidden_states (optional) and attentions (optional)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) (optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqTSModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqTSModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (InformerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
loc (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Shift values of each time series’ context window which is used to give the model inputs of the same
magnitude and then used to shift back to the original magnitude.
scale (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Scaling values of each time series’ context window which is used to give the model inputs of the same
magnitude and then used to rescale back to the original magnitude.
static_features (torch.FloatTensor of shape (batch_size, feature size), optional) — Static features of each time series in a batch, which are copied to the covariates at inference time.
The InformerModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from huggingface_hub import hf_hub_download
import torch
from transformers import InformerModel
file = hf_hub_download(
... repo_id="hf-internal-testing/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset"
... )
batch = torch.load(file)
model = InformerModel.from_pretrained("huggingface/informer-tourism-monthly")
# during training, one provides both past and future values
# as well as possible additional features
outputs = model(
... past_values=batch["past_values"],
... past_time_features=batch["past_time_features"],
... past_observed_mask=batch["past_observed_mask"],
... static_categorical_features=batch["static_categorical_features"],
... static_real_features=batch["static_real_features"],
... future_values=batch["future_values"],
... future_time_features=batch["future_time_features"],
... )
last_hidden_state = outputs.last_hidden_state
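As a small, hedged follow-up (not in the original snippet), the returned Seq2SeqTSModelOutput also exposes the encoder states and the scaling statistics described above, which can be inspected directly; availability of the optional fields depends on the flags passed to forward.
print(last_hidden_state.shape)  # decoder hidden states
print(outputs.encoder_last_hidden_state.shape)  # encoder hidden states
print(outputs.loc.shape, outputs.scale.shape)  # per-series shift and scale used to normalize the inputs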
InformerForPrediction
class transformers.InformerForPrediction
(
config: InformerConfig
)
Parameters
config (InformerConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The Informer Model with a distribution head on top for time-series forecasting.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
past_values: Tensor
past_time_features: Tensor
past_observed_mask: Tensor
static_categorical_features: typing.Optional[torch.Tensor] = None
static_real_features: typing.Optional[torch.Tensor] = None
future_values: typing.Optional[torch.Tensor] = None
future_time_features: typing.Optional[torch.Tensor] = None
future_observed_mask: typing.Optional[torch.Tensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
use_cache: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqTSModelOutput or tuple(torch.FloatTensor)
Parameters
past_values (torch.FloatTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, input_size)) —
Past values of the time series, that serve as context in order to predict the future. The sequence size of
this tensor must be larger than the context_length of the model, since the model will use the larger size
to construct lag features, i.e. additional values from the past which are added in order to serve as “extra
context”.
The sequence_length here is equal to config.context_length + max(config.lags_sequence), which if no
lags_sequence is configured, is equal to config.context_length + 7 (as by default, the largest
look-back index in config.lags_sequence is 7). The property _past_length returns the actual length of
the past.
The past_values is what the Transformer encoder gets as input (with optional additional features, such as
static_categorical_features, static_real_features, past_time_features and lags).
Optionally, missing values need to be replaced with zeros and indicated via the past_observed_mask.
For multivariate time series, the input_size > 1 dimension is required and corresponds to the number of
variates in the time series per time step.
past_time_features (torch.FloatTensor of shape (batch_size, sequence_length, num_features)) —
Required time features, which the model internally will add to past_values. These could be things like
“month of year”, “day of the month”, etc. encoded as vectors (for instance as Fourier features). These
could also be so-called “age” features, which basically help the model know “at which point in life” a
time-series is. Age features have small values for distant past time steps and increase monotonically the
more we approach the current time step. Holiday features are also a good example of time features.
These features serve as the “positional encodings” of the inputs. So contrary to a model like BERT, where
the position encodings are learned from scratch internally as parameters of the model, the Time Series
Transformer requires to provide additional time features. The Time Series Transformer only learns
additional embeddings for static_categorical_features.
Additional dynamic real covariates can be concatenated to this tensor, with the caveat that these features
must be known at prediction time.
The num_features here is equal to config.num_time_features + config.num_dynamic_real_features.
past_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, input_size), optional) —
Boolean mask to indicate which past_values were observed and which were missing. Mask values selected in
[0, 1]:
1 for values that are observed,
0 for values that are missing (i.e. NaNs that were replaced by zeros).
static_categorical_features (torch.LongTensor of shape (batch_size, number of static categorical features), optional) —
Optional static categorical features for which the model will learn an embedding, which it will add to the
values of the time series.
Static categorical features are features which have the same value for all time steps (static over time).
A typical example of a static categorical feature is a time series ID.
static_real_features (torch.FloatTensor of shape (batch_size, number of static real features), optional) —
Optional static real features which the model will add to the values of the time series.
Static real features are features which have the same value for all time steps (static over time).
A typical example of a static real feature is promotion information.
future_values (torch.FloatTensor of shape (batch_size, prediction_length) or (batch_size, prediction_length, input_size), optional) —
Future values of the time series, that serve as labels for the model. The future_values is what the
Transformer needs during training to learn to output, given the past_values.
The sequence length here is equal to prediction_length.
See the demo notebook and code snippets for details.
Optionally, during training any missing values need to be replaced with zeros and indicated via the
future_observed_mask.
For multivariate time series, the input_size > 1 dimension is required and corresponds to the number of
variates in the time series per time step.
future_time_features (torch.FloatTensor of shape (batch_size, prediction_length, num_features)) —
Required time features for the prediction window, which the model internally will add to future_values.
These could be things like “month of year”, “day of the month”, etc. encoded as vectors (for instance as
Fourier features). These could also be so-called “age” features, which basically help the model know “at
which point in life” a time-series is. Age features have small values for distant past time steps and
increase monotonically the more we approach the current time step. Holiday features are also a good example
of time features.
These features serve as the “positional encodings” of the inputs. So contrary to a model like BERT, where
the position encodings are learned from scratch internally as parameters of the model, the Time Series
Transformer requires to provide additional time features. The Time Series Transformer only learns
additional embeddings for static_categorical_features.
Additional dynamic real covariates can be concatenated to this tensor, with the caveat that these features
must be known at prediction time.
The num_features here is equal to config.num_time_features + config.num_dynamic_real_features.
future_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, input_size), optional) —
Boolean mask to indicate which future_values were observed and which were missing. Mask values selected
in [0, 1]:
1 for values that are observed,
0 for values that are missing (i.e. NaNs that were replaced by zeros).
This mask is used to filter out missing values for the final loss calculation.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on certain token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Mask to avoid performing attention on certain token indices. By default, a causal mask will be used, to
make sure the model can only look at previous inputs in order to predict the future.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of last_hidden_state, hidden_states (optional) and attentions (optional)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) (optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqTSModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqTSModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (InformerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
loc (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Shift values of each time series’ context window which is used to give the model inputs of the same
magnitude and then used to shift back to the original magnitude.
scale (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Scaling values of each time series’ context window which is used to give the model inputs of the same
magnitude and then used to rescale back to the original magnitude.
static_features (torch.FloatTensor of shape (batch_size, feature size), optional) — Static features of each time series in a batch, which are copied to the covariates at inference time.
The InformerForPrediction forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from huggingface_hub import hf_hub_download
import torch
from transformers import InformerForPrediction
file = hf_hub_download(
... repo_id="hf-internal-testing/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset"
... )
batch = torch.load(file)
model = InformerForPrediction.from_pretrained("huggingface/informer-tourism-monthly")
# during training, one provides both past and future values
# as well as possible additional features
outputs = model(
... past_values=batch["past_values"],
... past_time_features=batch["past_time_features"],
... past_observed_mask=batch["past_observed_mask"],
... static_categorical_features=batch["static_categorical_features"],
... static_real_features=batch["static_real_features"],
... future_values=batch["future_values"],
... future_time_features=batch["future_time_features"],
... )
loss = outputs.loss
loss.backward()
# during inference, one only provides past values
# as well as possible additional features
# the model autoregressively generates future values
outputs = model.generate(
... past_values=batch["past_values"],
... past_time_features=batch["past_time_features"],
... past_observed_mask=batch["past_observed_mask"],
... static_categorical_features=batch["static_categorical_features"],
... static_real_features=batch["static_real_features"],
... future_time_features=batch["future_time_features"],
... )
mean_prediction = outputs.sequences.mean(dim=1)
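Since generate() draws num_parallel_samples trajectories per series, empirical prediction intervals can be read off the sample dimension as well. This is a hedged sketch, not part of the original example; it assumes the sample dimension is dim=1, as in the mean computed above.
median_prediction = outputs.sequences.quantile(0.5, dim=1)
lower_band = outputs.sequences.quantile(0.1, dim=1)  # 10th percentile per time step
upper_band = outputs.sequences.quantile(0.9, dim=1)  # 90th percentile per time step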
LeViT
Overview
The LeViT model was proposed in LeViT: Introducing Convolutions to Vision Transformers by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze. LeViT improves the Vision Transformer (ViT) in performance and efficiency by a few architectural differences such as activation maps with decreasing resolutions in Transformers and the introduction of an attention bias to integrate positional information.
The abstract from the paper is the following:
We design a family of image classification architectures that optimize the trade-off between accuracy
and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures,
which are competitive on highly parallel processing hardware. We revisit principles from the extensive
literature on convolutional neural networks to apply them to transformers, in particular activation maps
with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information
in vision transformers. As a result, we propose LeVIT: a hybrid neural network for fast inference image classification.
We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of
application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable
to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect
to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU.
LeViT Architecture. Taken from the original paper.
Tips:
Compared to ViT, LeViT models use an additional distillation head to effectively learn from a teacher (which, in the LeViT paper, is a ResNet-like model). The distillation head is learned through backpropagation under supervision of that ResNet-like teacher. They also draw inspiration from convolutional neural networks and use activation maps with decreasing resolutions to increase efficiency.
There are 2 ways to fine-tune distilled models, either (1) in a classic way, by only placing a prediction head on top
of the final hidden state and not using the distillation head, or (2) by placing both a prediction head and distillation
head on top of the final hidden state. In that case, the prediction head is trained using regular cross-entropy between
the prediction of the head and the ground-truth label, while the distillation prediction head is trained using hard distillation
(cross-entropy between the prediction of the distillation head and the label predicted by the teacher). At inference time,
one takes the average prediction between both heads as final prediction. (2) is also called “fine-tuning with distillation”,
because one relies on a teacher that has already been fine-tuned on the downstream dataset. In terms of models, (1) corresponds
to LevitForImageClassification and (2) corresponds to LevitForImageClassificationWithTeacher.
All released checkpoints were pre-trained and fine-tuned on ImageNet-1k
(also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes) only; no external data was used. This is in
contrast to the original ViT model, which used external data such as the JFT-300M dataset and ImageNet-21k for
pre-training.
The authors of LeViT released 5 trained LeViT models, which you can directly plug into LevitModel or LevitForImageClassification.
Techniques like data augmentation, optimization, and regularization were used in order to simulate training on a much larger dataset
(while only using ImageNet-1k for pre-training). The 5 variants available are (all trained on images of size 224x224):
facebook/levit-128S, facebook/levit-128, facebook/levit-192, facebook/levit-256 and
facebook/levit-384. Note that one should use LevitImageProcessor in order to
prepare images for the model.
LevitForImageClassificationWithTeacher currently supports only inference and not training or fine-tuning.
You can check out demo notebooks regarding inference as well as fine-tuning on custom data here
(you can just replace ViTFeatureExtractor by LevitImageProcessor and ViTForImageClassification by LevitForImageClassification or LevitForImageClassificationWithTeacher).
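Putting these tips together, here is a minimal inference sketch (assuming the facebook/levit-128S checkpoint and the same cats-image example used in the API examples below) that prepares an image with LevitImageProcessor and compares the two classification variants:
import torch
from datasets import load_dataset
from transformers import LevitImageProcessor, LevitForImageClassification, LevitForImageClassificationWithTeacher

image = load_dataset("huggingface/cats-image")["test"]["image"][0]

processor = LevitImageProcessor.from_pretrained("facebook/levit-128S")
inputs = processor(image, return_tensors="pt")

# (1) single prediction head
model = LevitForImageClassification.from_pretrained("facebook/levit-128S")
# (2) prediction head + distillation head (inference only)
model_with_teacher = LevitForImageClassificationWithTeacher.from_pretrained("facebook/levit-128S")

with torch.no_grad():
    logits = model(**inputs).logits
    # for the teacher variant, .logits is the average of cls_logits and distillation_logits
    avg_logits = model_with_teacher(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])
print(model.config.id2label[avg_logits.argmax(-1).item()])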
This model was contributed by anugunj. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LeViT.
Image Classification
LevitForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
LevitConfig
class transformers.LevitConfig
<
source
>
(
image_size = 224
num_channels = 3
kernel_size = 3
stride = 2
padding = 1
patch_size = 16
hidden_sizes = [128, 256, 384]
num_attention_heads = [4, 8, 12]
depths = [4, 4, 4]
key_dim = [16, 16, 16]
drop_path_rate = 0
mlp_ratio = [2, 2, 2]
attention_ratio = [2, 2, 2]
initializer_range = 0.02
**kwargs
)
Parameters
image_size (int, optional, defaults to 224) —
The size of the input image.
num_channels (int, optional, defaults to 3) —
Number of channels in the input image.
kernel_size (int, optional, defaults to 3) —
The kernel size for the initial convolution layers of patch embedding.
stride (int, optional, defaults to 2) —
The stride size for the initial convolution layers of patch embedding.
padding (int, optional, defaults to 1) —
The padding size for the initial convolution layers of patch embedding.
patch_size (int, optional, defaults to 16) —
The patch size for embeddings.
hidden_sizes (List[int], optional, defaults to [128, 256, 384]) —
Dimension of each of the encoder blocks.
num_attention_heads (List[int], optional, defaults to [4, 8, 12]) —
Number of attention heads for each attention layer in each block of the Transformer encoder.
depths (List[int], optional, defaults to [4, 4, 4]) —
The number of layers in each encoder block.
key_dim (List[int], optional, defaults to [16, 16, 16]) —
The size of key in each of the encoder blocks.
drop_path_rate (float, optional, defaults to 0) —
The dropout probability for stochastic depths, used in the blocks of the Transformer encoder.
mlp_ratio (List[int], optional, defaults to [2, 2, 2]) —
Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the
encoder blocks.
attention_ratio (List[int], optional, defaults to [2, 2, 2]) —
Ratio of the size of the output dimension compared to input dimension of attention layers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
This is the configuration class to store the configuration of a LevitModel. It is used to instantiate a LeViT
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the LeViT
facebook/levit-128S architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import LevitConfig, LevitModel
# Initializing a LeViT levit-128S style configuration
configuration = LevitConfig()
# Initializing a model (with random weights) from the levit-128S style configuration
model = LevitModel(configuration)
# Accessing the model configuration
configuration = model.config
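You can also override any of the arguments above to define a custom architecture. For instance, a minimal sketch of a hypothetical shallower variant (not a released checkpoint):
from transformers import LevitConfig, LevitModel

# Hypothetical shallower variant: two transformer layers per stage instead of four
custom_configuration = LevitConfig(depths=[2, 2, 2])
custom_model = LevitModel(custom_configuration)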
LevitFeatureExtractor
class transformers.LevitFeatureExtractor
<
source
>
(
*args
**kwargs
)
__call__
<
source
>
(
images
**kwargs
)
Preprocess an image or a batch of images.
LevitImageProcessor
class transformers.LevitImageProcessor
<
source
>
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BICUBIC: 3>
do_center_crop: bool = True
crop_size: typing.Dict[str, int] = None
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.Iterable[float], NoneType] = [0.485, 0.456, 0.406]
image_std: typing.Union[float, typing.Iterable[float], NoneType] = [0.229, 0.224, 0.225]
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the shortest edge of the input to int(256/224 * size). Can be overridden by the
do_resize parameter in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 224}) —
Size of the output image after resizing. If size is a dict with keys “width” and “height”, the image will
be resized to (size["height"], size["width"]). If size is a dict with key “shortest_edge”, the shortest
edge value c is rescaled to int(c * (256/224)). The smaller edge of the image will be matched to this
value, i.e. if height > width, then the image will be rescaled to (size["shortest_edge"] * height / width, size["shortest_edge"]). Can be overridden by the size parameter in the preprocess method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) —
Resampling filter to use if resizing the image. Can be overridden by the resample parameter in the
preprocess method.
do_center_crop (bool, optional, defaults to True) —
Whether or not to center crop the input to (crop_size["height"], crop_size["width"]). Can be overridden
by the do_center_crop parameter in the preprocess method.
crop_size (Dict, optional, defaults to {"height": 224, "width": 224}) —
Desired image size after center_crop. Can be overridden by the crop_size parameter in the preprocess
method.
do_rescale (bool, optional, defaults to True) —
Controls whether to rescale the image by the specified scale rescale_factor. Can be overridden by the
do_rescale parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the
preprocess method.
do_normalize (bool, optional, defaults to True) —
Controls whether to normalize the image. Can be overridden by the do_normalize parameter in the
preprocess method.
image_mean (List[float], optional, defaults to [0.485, 0.456, 0.406]) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (List[float], optional, defaults to [0.229, 0.224, 0.225]) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
Constructs a LeViT image processor.
preprocess
<
source
>
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: typing.Optional[bool] = None
size: typing.Union[typing.Dict[str, int], NoneType] = None
resample: Resampling = None
do_center_crop: typing.Optional[bool] = None
crop_size: typing.Union[typing.Dict[str, int], NoneType] = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Optional[float] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.Iterable[float], NoneType] = None
image_std: typing.Union[float, typing.Iterable[float], NoneType] = None
return_tensors: typing.Optional[transformers.utils.generic.TensorType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image or batch of images to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the output image after resizing. If size is a dict with keys “width” and “height”, the image
will be resized to (height, width). If size is a dict with key “shortest_edge”, the shortest edge value
c is rescaled to int(c * (256/224)). The smaller edge of the image will be matched to this value,
i.e. if height > width, then the image will be rescaled to (size["shortest_edge"] * height / width, size["shortest_edge"]).
resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) —
Resampling filter to use when resizing the image.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the output image after center cropping. Crops images to (crop_size[“height”],
crop_size[“width”]).
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image pixel values by rescale_factor, typically to values between 0 and 1.
rescale_factor (float, optional, defaults to self.rescale_factor) —
Factor to rescale the image pixel values by.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image pixel values by image_mean and image_std.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Mean to normalize the image pixel values by.
image_std (float or List[float], optional, defaults to self.image_std) —
Standard deviation to normalize the image pixel values by.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (str or ChannelDimension, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. If unset, the channel dimension format of the input
image is used. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
Preprocess an image or batch of images to be used as input to a LeViT model.
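As a minimal sketch with the default settings (resize, center crop to 224x224, rescale and normalize), the processor turns any image into a (batch_size, num_channels, 224, 224) tensor; the random image below is just a stand-in:
import numpy as np
from PIL import Image
from transformers import LevitImageProcessor

image_processor = LevitImageProcessor.from_pretrained("facebook/levit-128S")

# a random RGB image used purely for illustration
image = Image.fromarray(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8))

inputs = image_processor(image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])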
LevitModel
class transformers.LevitModel
<
source
>
(
config
)
Parameters
config (LevitConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Levit model outputting raw features without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
pixel_values: FloatTensor = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
LevitImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LevitConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The LevitModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, LevitModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/levit-128S")
model = LevitModel.from_pretrained("facebook/levit-128S")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 16, 384]
LevitForImageClassification
class transformers.LevitForImageClassification
<
source
>
(
config
)
Parameters
config (LevitConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Levit Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
pixel_values: FloatTensor = None
labels: typing.Optional[torch.LongTensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
LevitImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LevitConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also
called feature maps) of the model at the output of each stage.
The LevitForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, LevitForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/levit-128S")
model = LevitForImageClassification.from_pretrained("facebook/levit-128S")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tabby, tabby cat
LevitForImageClassificationWithTeacher
class transformers.LevitForImageClassificationWithTeacher
<
source
>
(
config
)
Parameters
config (LevitConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
LeViT Model transformer with image classification heads on top (a linear layer on top of the final hidden state and
a linear layer on top of the final hidden state of the distillation token), e.g. for ImageNet.
Warning: this model only supports inference. Fine-tuning with distillation (i.e. with a teacher) is not yet
supported.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
pixel_values: FloatTensor = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.levit.modeling_levit.LevitForImageClassificationWithTeacherOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
LevitImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.levit.modeling_levit.LevitForImageClassificationWithTeacherOutput or tuple(torch.FloatTensor)
A transformers.models.levit.modeling_levit.LevitForImageClassificationWithTeacherOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LevitConfig) and inputs.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Prediction scores as the average of the cls_logits and distillation_logits.
cls_logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Prediction scores of the classification head (i.e. the linear layer on top of the final hidden state of the
class token).
distillation_logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Prediction scores of the distillation head (i.e. the linear layer on top of the final hidden state of the
distillation token).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
The LevitForImageClassificationWithTeacher forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, LevitForImageClassificationWithTeacher
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/levit-128S")
model = LevitForImageClassificationWithTeacher.from_pretrained("facebook/levit-128S")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tabby, tabby cat
BertJapanese
Overview
The BERT models trained on Japanese text.
There are models with two different tokenization methods:
Tokenize with MeCab and WordPiece. This requires some extra dependencies: fugashi, which is a wrapper around MeCab.
Tokenize into characters.
To use MecabTokenizer, you should pip install transformers["ja"] (or pip install -e .["ja"] if you install
from source) to install dependencies.
See details on cl-tohoku repository.
Example of using a model with MeCab and WordPiece tokenization:
import torch
from transformers import AutoModel, AutoTokenizer
bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese")
tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
## Input Japanese Text
line = "吾輩は猫である。"
inputs = tokenizer(line, return_tensors="pt")
print(tokenizer.decode(inputs["input_ids"][0]))
[CLS] 吾輩 は 猫 で ある 。 [SEP]
outputs = bertjapanese(**inputs)
Example of using a model with Character tokenization:
bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese-char")
tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-char")
## Input Japanese Text
line = "吾輩は猫である。"
inputs = tokenizer(line, return_tensors="pt")
print(tokenizer.decode(inputs["input_ids"][0]))
[CLS] 吾 輩 は 猫 で あ る 。 [SEP]
outputs = bertjapanese(**inputs)
Tips:
This implementation is the same as BERT, except for the tokenization method. Refer to the documentation of BERT for more usage examples.
This model was contributed by cl-tohoku.
BertJapaneseTokenizer
class transformers.BertJapaneseTokenizer
<
source
>
(
vocab_file
spm_file = None
do_lower_case = False
do_word_tokenize = True
do_subword_tokenize = True
word_tokenizer_type = 'basic'
subword_tokenizer_type = 'wordpiece'
never_split = None
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
mecab_kwargs = None
sudachi_kwargs = None
jumanpp_kwargs = None
**kwargs
)
Parameters
vocab_file (str) —
Path to a one-wordpiece-per-line vocabulary file.
spm_file (str, optional) —
Path to SentencePiece file (generally has a .spm or .model
extension) that contains the vocabulary.
do_lower_case (bool, optional, defaults to False) —
Whether to lower case the input. Only has an effect when do_basic_tokenize=True.
do_word_tokenize (bool, optional, defaults to True) —
Whether to do word tokenization.
do_subword_tokenize (bool, optional, defaults to True) —
Whether to do subword tokenization.
word_tokenizer_type (str, optional, defaults to "basic") —
Type of word tokenizer. Choose from [“basic”, “mecab”, “sudachi”, “jumanpp”].
subword_tokenizer_type (str, optional, defaults to "wordpiece") —
Type of subword tokenizer. Choose from [“wordpiece”, “character”, “sentencepiece”].
mecab_kwargs (dict, optional) —
Dictionary passed to the MecabTokenizer constructor.
sudachi_kwargs (dict, optional) —
Dictionary passed to the SudachiTokenizer constructor.
jumanpp_kwargs (dict, optional) —
Dictionary passed to the JumanppTokenizer constructor.
Construct a BERT tokenizer for Japanese text.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer
to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A BERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
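A minimal sketch, assuming the cl-tohoku/bert-base-japanese checkpoint from the examples above and the extra Japanese tokenization dependencies installed:
from transformers import BertJapaneseTokenizer

tokenizer = BertJapaneseTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")

ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("吾輩は猫である。"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("名前はまだ無い。"))

# single sequence: [CLS] A [SEP]; pair of sequences: [CLS] A [SEP] B [SEP]
single = tokenizer.build_inputs_with_special_tokens(ids_a)
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
print(tokenizer.decode(single))
print(tokenizer.decode(pair))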
convert_tokens_to_string
<
source
>
(
tokens
)
Converts a sequence of tokens (string) into a single string.
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence
pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
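For example (same hypothetical sentence pair as in the sketch above):
from transformers import BertJapaneseTokenizer

tokenizer = BertJapaneseTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("吾輩は猫である。"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("名前はまだ無い。"))

# 0s cover "[CLS] A [SEP]", 1s cover "B [SEP]"
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
print(token_type_ids)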
get_special_tokens_mask
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
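For example (continuing the same hypothetical setup), the returned mask flags the positions where [CLS] and [SEP] would be inserted:
from transformers import BertJapaneseTokenizer

tokenizer = BertJapaneseTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("吾輩は猫である。"))

# 1 for the [CLS]/[SEP] positions, 0 for regular tokens
print(tokenizer.get_special_tokens_mask(ids_a))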
TrOCR
Overview
The quickest way of getting started with TrOCR is by checking the tutorial
notebooks, which show how to use the model
at inference time as well as fine-tuning on custom data.
TrOCR is pre-trained in 2 stages before being fine-tuned on downstream datasets. It achieves state-of-the-art results
on both printed (e.g. the SROIE dataset) and handwritten (e.g. the IAM
Handwriting dataset) text recognition tasks. For more
information, see the official models.
TrOCR is always used within the VisionEncoderDecoder framework.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with TrOCR. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Text Classification
A blog post on Accelerating Document AI with TrOCR.
A blog post on how to use TrOCR for Document AI.
A notebook on how to fine-tune TrOCR on the IAM Handwriting Database using Seq2SeqTrainer.
A notebook on inference with TrOCR and Gradio demo.
A notebook on fine-tuning TrOCR on the IAM Handwriting Database using native PyTorch.
A notebook on evaluating TrOCR on the IAM test set.
Text Generation
Causal language modeling task guide.
⚡️ Inference
An interactive demo on TrOCR handwritten character recognition.
Inference
TrOCR’s VisionEncoderDecoder model accepts images as input and makes use of
generate() to autoregressively generate text given the input image.
The [ViTImageProcessor/DeiTImageProcessor] class is responsible for preprocessing the input image and
[RobertaTokenizer/XLMRobertaTokenizer] decodes the generated target tokens to the target string. The
TrOCRProcessor wraps [ViTImageProcessor/DeiTImageProcessor] and [RobertaTokenizer/XLMRobertaTokenizer]
into a single instance to both extract the input features and decode the predicted token ids.
Step-by-step Optical Character Recognition (OCR)
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
import requests
from PIL import Image
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")
# load image from the IAM dataset
url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
See the model hub to look for TrOCR checkpoints.
TrOCRConfig
class transformers.TrOCRConfig
<
source
>
(
vocab_size = 50265
d_model = 1024
decoder_layers = 12
decoder_attention_heads = 16
decoder_ffn_dim = 4096
activation_function = 'gelu'
max_position_embeddings = 512
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
decoder_start_token_id = 2
init_std = 0.02
decoder_layerdrop = 0.0
use_cache = True
scale_embedding = False
use_learned_position_embeddings = True
layernorm_embedding = True
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50265) —
Vocabulary size of the TrOCR model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling TrOCRForCausalLM.
d_model (int, optional, defaults to 1024) —
Dimensionality of the layers and the pooler layer.
decoder_layers (int, optional, defaults to 12) —
Number of decoder layers.
decoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the decoder.
activation_function (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the pooler. If string, "gelu", "relu",
"silu" and "gelu_new" are supported.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
scale_embedding (bool, optional, defaults to False) —
Whether or not to scale the word embeddings by sqrt(d_model).
use_learned_position_embeddings (bool, optional, defaults to True) —
Whether or not to use learned position embeddings. If not, sinusoidal position embeddings will be used.
layernorm_embedding (bool, optional, defaults to True) —
Whether or not to use a layernorm after the word + position embeddings.
This is the configuration class to store the configuration of a TrOCRForCausalLM. It is used to instantiate a
TrOCR model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the TrOCR
microsoft/trocr-base-handwritten architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import TrOCRConfig, TrOCRForCausalLM
# Initializing a TrOCR-base style configuration
configuration = TrOCRConfig()
# Initializing a model (with random weights) from the TrOCR-base style configuration
model = TrOCRForCausalLM(configuration)
# Accessing the model configuration
configuration = model.config
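Any of the arguments above can be overridden to define a custom decoder; for instance, a minimal sketch of a smaller, hypothetical configuration:
from transformers import TrOCRConfig, TrOCRForCausalLM

# Hypothetical small decoder: 6 layers, 8 heads, 512-dim hidden states
small_configuration = TrOCRConfig(d_model=512, decoder_layers=6, decoder_attention_heads=8, decoder_ffn_dim=2048)
small_decoder = TrOCRForCausalLM(small_configuration)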
TrOCRProcessor
class transformers.TrOCRProcessor
<
source
>
(
image_processor = None
tokenizer = None
**kwargs
)
Parameters
image_processor ([ViTImageProcessor/DeiTImageProcessor]) —
An instance of [ViTImageProcessor/DeiTImageProcessor]. The image processor is a required input.
tokenizer ([RobertaTokenizer/XLMRobertaTokenizer]) —
An instance of [RobertaTokenizer/XLMRobertaTokenizer]. The tokenizer is a required input.
Constructs a TrOCR processor which wraps a vision image processor and a TrOCR tokenizer into a single processor.
TrOCRProcessor offers all the functionalities of [ViTImageProcessor/DeiTImageProcessor] and
[RobertaTokenizer/XLMRobertaTokenizer]. See the call() and decode() for
more information.
__call__
<
source
>
(
*args
**kwargs
)
When used in normal mode, this method forwards all its arguments to AutoImageProcessor’s
__call__() and returns its output. If used in the context
as_target_processor() this method forwards all its arguments to TrOCRTokenizer’s
~TrOCRTokenizer.__call__. Please refer to the docstring of the above two methods for more information.
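A minimal sketch of preparing both the encoder image inputs and the decoder target labels with the processor (reusing the microsoft/trocr-base-handwritten checkpoint and the IAM image from the example above):
import requests
from PIL import Image
from transformers import TrOCRProcessor

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")

url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# image features for the encoder
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# target token ids for the decoder, via the wrapped tokenizer
labels = processor.tokenizer("industry, ' Mr. Brown commented icily.", return_tensors="pt").input_ids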
from_pretrained
<
source
>
(
pretrained_model_name_or_path: typing.Union[str, os.PathLike]
cache_dir: typing.Union[str, os.PathLike, NoneType] = None
force_download: bool = False
local_files_only: bool = False
token: typing.Union[bool, str, NoneType] = None
revision: str = 'main'
**kwargs
)
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
This can be either:
a string, the model id of a pretrained feature_extractor hosted inside a model repo on
huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or
namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
a path to a directory containing a feature extractor file saved using the
save_pretrained() method, e.g., ./my_model_directory/.
a path or url to a saved feature extractor JSON file, e.g.,
./my_model_directory/preprocessor_config.json.
**kwargs —
Additional keyword arguments passed along to both
from_pretrained() and
~tokenization_utils_base.PreTrainedTokenizer.from_pretrained.
Instantiate a processor associated with a pretrained model.
This class method is simply calling the feature extractor
from_pretrained(), image processor
ImageProcessingMixin and the tokenizer
~tokenization_utils_base.PreTrainedTokenizer.from_pretrained methods. Please refer to the docstrings of the
methods above for more information.
save_pretrained
<
source
>
(
save_directory
push_to_hub: bool = False
**kwargs
)
Parameters
save_directory (str or os.PathLike) —
Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will
be created if it does not exist).
push_to_hub (bool, optional, defaults to False) —
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with repo_id (will default to the name of save_directory in your
namespace).
kwargs (Dict[str, Any], optional) —
Additional keyword arguments passed along to the push_to_hub() method.
Saves the attributes of this processor (feature extractor, tokenizer…) in the specified directory so that it
can be reloaded using the from_pretrained() method.
This class method is simply calling save_pretrained() and
save_pretrained(). Please refer to the docstrings of the
methods above for more information.
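For example, a minimal save/reload round trip (the directory name is hypothetical):
from transformers import TrOCRProcessor

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")

# writes the image processor configuration and the tokenizer files into the directory
processor.save_pretrained("./my_trocr_processor")

# reload everything from the local directory
reloaded_processor = TrOCRProcessor.from_pretrained("./my_trocr_processor")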
batch_decode
<
source
>
(
*args
**kwargs
)
This method forwards all its arguments to TrOCRTokenizer’s batch_decode(). Please refer
to the docstring of this method for more information.
decode
<
source
>
(
*args
**kwargs
)
This method forwards all its arguments to TrOCRTokenizer’s decode(). Please refer to the
docstring of this method for more information.
TrOCRForCausalLM
class transformers.TrOCRForCausalLM
<
source
>
(
config
)
Parameters
config (TrOCRConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The TrOCR Decoder with a language modeling head. Can be used as the decoder part of EncoderDecoderModel and VisionEncoderDecoder.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
if the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used
in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of
shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of
shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional
tensors are only required when the model is used as a decoder in a Sequence to Sequence model.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the
cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those
that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of
all decoder_input_ids of shape (batch_size, sequence_length).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding
(see past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (TrOCRConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
Example:
from transformers import (
... TrOCRConfig,
... TrOCRProcessor,
... TrOCRForCausalLM,
... ViTConfig,
... ViTModel,
... VisionEncoderDecoderModel,
... )
import requests
from PIL import Image
# TrOCR is a decoder model and should be used within a VisionEncoderDecoderModel
# init vision2text model with random weights
encoder = ViTModel(ViTConfig())
decoder = TrOCRForCausalLM(TrOCRConfig())
model = VisionEncoderDecoderModel(encoder=encoder, decoder=decoder)
# If you want to start from the pretrained model, load the checkpoint with `VisionEncoderDecoderModel`
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")
# load image from the IAM dataset
url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
text = "industry, ' Mr. Brown commented icily. ' Let us have a"
# training
model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
model.config.pad_token_id = processor.tokenizer.pad_token_id
model.config.vocab_size = model.config.decoder.vocab_size
labels = processor.tokenizer(text, return_tensors="pt").input_ids
outputs = model(pixel_values, labels=labels)
loss = outputs.loss
round(loss.item(), 2)
5.30
# inference
generated_ids = model.generate(pixel_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
generated_text
'industry, " Mr. Brown commented icily. " Let us have a'
MatCha
Overview
MatCha has been proposed in the paper MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering, by Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier and Julian Martin Eisenschlos.
The abstract of the paper states the following:
Visual language data such as plots, charts, and infographics are ubiquitous in the human world. However, state-of-the-art vision-language models do not perform well on these data. We propose MatCha (Math reasoning and Chart derendering pretraining) to enhance visual language models’ capabilities in jointly modeling charts/plots and language data. Specifically, we propose several pretraining tasks that cover plot deconstruction and numerical reasoning which are the key capabilities in visual language modeling. We perform the MatCha pretraining starting from Pix2Struct, a recently proposed image-to-text visual language model. On standard benchmarks such as PlotQA and ChartQA, the MatCha model outperforms state-of-the-art methods by as much as nearly 20%. We also examine how well MatCha pretraining transfers to domains such as screenshots, textbook diagrams, and document figures and observe overall improvement, verifying the usefulness of MatCha pretraining on broader visual language tasks.
Model description
MatCha is a model that is trained using Pix2Struct architecture. You can find more information about Pix2Struct in the Pix2Struct documentation.
MatCha is a Visual Question Answering subset of Pix2Struct architecture. It renders the input question on the image and predicts the answer.
Usage
Currently 6 checkpoints are available for MatCha:
google/matcha: the base MatCha model, used to fine-tune MatCha on downstream tasks
google/matcha-chartqa: MatCha model fine-tuned on ChartQA dataset. It can be used to answer questions about charts.
google/matcha-plotqa-v1: MatCha model fine-tuned on PlotQA dataset. It can be used to answer questions about plots.
google/matcha-plotqa-v2: MatCha model fine-tuned on PlotQA dataset. It can be used to answer questions about plots.
google/matcha-chart2text-statista: MatCha model fine-tuned on Statista dataset.
google/matcha-chart2text-pew: MatCha model fine-tuned on Pew dataset.
The models finetuned on chart2text-pew and chart2text-statista are more suited for summarization, whereas the models finetuned on plotqa and chartqa are more suited for question answering.
You can use these models as follows (example on a ChartQA dataset):
from transformers import AutoProcessor, Pix2StructForConditionalGeneration
import requests
from PIL import Image
model = Pix2StructForConditionalGeneration.from_pretrained("google/matcha-chartqa").to(0)
processor = AutoProcessor.from_pretrained("google/matcha-chartqa")
url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/20294671002019.png"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, text="Is the sum of all 4 places greater than Laos?", return_tensors="pt").to(0)
predictions = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(predictions[0], skip_special_tokens=True))
Fine-tuning
To fine-tune MatCha, refer to the pix2struct fine-tuning notebook. For Pix2Struct models, we have found that fine-tuning the model with Adafactor and a cosine learning rate scheduler leads to faster convergence:
from transformers.optimization import Adafactor, get_cosine_schedule_with_warmup
# `model` is the Pix2StructForConditionalGeneration being fine-tuned
optimizer = Adafactor(model.parameters(), scale_parameter=False, relative_step=False, lr=0.01, weight_decay=1e-05)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=1000, num_training_steps=40000)
FLAVA
Overview
The FLAVA model was proposed in FLAVA: A Foundational Language And Vision Alignment Model by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela, and was accepted at CVPR 2022.
The paper aims at creating a single unified foundation model which can work across vision, language
as well as vision-and-language multimodal tasks.
The abstract from the paper is the following:
State-of-the-art vision and vision-and-language models rely on large-scale visio-linguistic pretraining for obtaining good performance on a variety
of downstream tasks. Generally, such models are often either cross-modal (contrastive) or multi-modal
(with earlier fusion) but not both; and they often only target specific modalities or tasks. A promising
direction would be to use a single holistic universal model, as a “foundation”, that targets all modalities
at once — a true vision and language foundation model should be good at vision tasks, language tasks, and
cross- and multi-modal vision and language tasks. We introduce FLAVA as such a model and demonstrate
impressive performance on a wide range of 35 tasks spanning these target modalities.
This model was contributed by aps. The original code can be found here.
FlavaConfig
class transformers.FlavaConfig
<
source
>
(
image_config: typing.Dict[str, typing.Any] = None
text_config: typing.Dict[str, typing.Any] = None
multimodal_config: typing.Dict[str, typing.Any] = None
image_codebook_config: typing.Dict[str, typing.Any] = None
hidden_size: int = 768
layer_norm_eps: float = 1e-12
projection_dim: int = 768
init_codebook: bool = True
logit_scale_init_value: float = 2.6592
initializer_range: float = 0.02
ce_ignore_index: int = -100
mim_weight: float = 1.0
mlm_weight: float = 1.0
global_contrastive_weight: float = 1.0
itm_weight: float = 1.0
mmm_image_weight: float = 1.0
mmm_text_weight: float = 1.0
global_backprop_contrastive: bool = True
skip_unmasked_multimodal_encoder: bool = True
return_loss: bool = True
**kwargs
)
Parameters
text_config (dict, optional) —
Dictionary of configuration options used to initialize FlavaTextConfig.
image_config (dict, optional) —
Dictionary of configuration options used to initialize FlavaImageConfig.
multimodal_config (dict, optional) —
Dictionary of configuration options used to initialize FlavaMultimodalConfig.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
projection_dim (int, optional, defaults to 768) —
Dimensionality of the text and image projection layers.
logit_scale_init_value (float, optional, defaults to 2.6592) —
The initial value of the logit_scale parameter. Default is used as per the original FLAVA/CLIP
implementation.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
ce_ignore_index (int, optional, defaults to -100) —
Cross entropy index to ignore.
mim_weight (float, optional, defaults to 1.0) —
Weight to be assigned to MIM (Masked Image Modeling) unimodal loss
mlm_weight (float, optional, defaults to 1.0) —
Weight to be assigned to MLM (Masked Language Modeling) unimodal loss
global_contrastive_weight (float, optional, defaults to 1.0) —
Weight to be assigned to global contrastive cross-alignment loss.
itm_weight (float, optional, defaults to 1.0) —
Weight to be assigned to image-text matching multimodal loss.
mmm_image_weight (float, optional, defaults to 1.0) —
Weight to be assigned to MMM loss’s image part.
mmm_text_weight (float, optional, defaults to 1.0) —
Weight to be assigned to MMM loss’s text part.
global_backprop_contrastive (bool, optional, defaults to True) —
Whether to use global backpropagation through all workers in contrastive loss.
skip_unmasked_multimodal_encoder (bool, optional, defaults to True) —
Whether to skip running unmasked multimodal encoder whose outputs are not used by FLAVA losses.
return_loss (bool, optional, defaults to True) —
Whether to return loss or not
kwargs (optional) —
Dictionary of keyword arguments.
FlavaConfig is the configuration class to store the configuration of a FlavaModel. It is used to
instantiate FLAVA model according to the specified arguments, defining the text model, image model, image codebook
and multimodal model configs. Instantiating a configuration with the defaults will yield a similar configuration to
that of the FLAVA facebook/flava-full architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import FlavaConfig, FlavaModel, FlavaForPreTraining
# Initializing a FlavaConfig with style configuration
configuration = FlavaConfig()
# Initializing a FlavaModel and FlavaForPreTraining model (with random weights) from the style configuration
model = FlavaModel(configuration)
model_pre = FlavaForPreTraining(configuration)
# Accessing the model configuration
configuration = model.config
configuration_pre = model_pre.config
from_configs
<
source
>
(
image_config: FlavaImageConfig
text_config: FlavaTextConfig
multimodal_config: FlavaMultimodalConfig
image_codebook_config: FlavaImageCodebookConfig
**kwargs
)
→
FlavaConfig
Returns
FlavaConfig
An instance of a configuration object
Instantiate a FlavaConfig (or a derived class) from flava text model configuration, flava image model
configuration, flava multimodal model and flava codebook model configuration.
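A minimal sketch of building a full configuration from default sub-configurations:
from transformers import (
    FlavaConfig,
    FlavaImageCodebookConfig,
    FlavaImageConfig,
    FlavaMultimodalConfig,
    FlavaTextConfig,
)

config = FlavaConfig.from_configs(
    image_config=FlavaImageConfig(),
    text_config=FlavaTextConfig(),
    multimodal_config=FlavaMultimodalConfig(),
    image_codebook_config=FlavaImageCodebookConfig(),
)
# `config` can now be passed to FlavaModel or FlavaForPreTraining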
to_dict
<
source
>
(
)
→
Dict[str, any]
Returns
Dict[str, any]
Dictionary of all the attributes that make up this configuration instance.
Serializes this instance to a Python dictionary. Overrides the default to_dict().
FlavaTextConfig
class transformers.FlavaTextConfig
<
source
>
(
vocab_size: int = 30522
type_vocab_size: int = 2
max_position_embeddings: int = 512
position_embedding_type: str = 'absolute'
hidden_size: int = 768
num_hidden_layers: int = 12
num_attention_heads: int = 12
intermediate_size: int = 3072
hidden_act: str = 'gelu'
hidden_dropout_prob: float = 0.0
attention_probs_dropout_prob: float = 0.0
initializer_range: float = 0.02
layer_norm_eps: float = 1e-12
pad_token_id: int = 0
qkv_bias: bool = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the BERT model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling FlavaTextModel.
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling FlavaTextModel. Note that even though the
text encoder allows token_type_ids values up to 2, only 1 is used for text-only pretraining and
fine-tuning, similar to RoBERTa.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048). For vision-language tasks, the max_length passed to the model is 77.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries, keys and values.
This is the configuration class to store the configuration of a FlavaTextModel. It is used to instantiate a
FLAVA model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA
facebook/flava-full architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import FlavaTextConfig, FlavaTextModel
# Initializing a FlavaTextConfig with facebook/flava-full style configuration
configuration = FlavaTextConfig()
# Initializing a FlavaTextModel model (with random weights) from the style configuration
model = FlavaTextModel(configuration)
# Accessing the model configuration
configuration = model.config
FlavaImageConfig
class transformers.FlavaImageConfig
(
hidden_size: int = 768
num_hidden_layers: int = 12
num_attention_heads: int = 12
intermediate_size: int = 3072
hidden_act: str = 'gelu'
hidden_dropout_prob: float = 0.0
attention_probs_dropout_prob: float = 0.0
initializer_range: float = 0.02
layer_norm_eps: float = 1e-12
image_size: int = 224
patch_size: int = 16
num_channels: int = 3
qkv_bias: bool = True
mask_token: bool = True
vocab_size: int = 8192
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 16) —
The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) —
The number of input channels.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries, keys and values.
mask_token (bool, optional, defaults to True) —
Whether to use a mask token or not. Used in MIM (Masked Image Modeling) loss for FLAVA.
vocab_size (int, optional, defaults to 8192) —
Vocabulary size of the FlavaImageCodebook used in conjunction with FlavaImageModel for MIM (Masked
Image Modeling) loss for FLAVA.
This is the configuration class to store the configuration of a FlavaImageModel. It is used to instantiate a
FLAVA model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA
facebook/flava-full architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import FlavaImageConfig, FlavaImageModel
# Initializing a FlavaImageConfig with facebook/flava-full style configuration
configuration = FlavaImageConfig()
# Initializing a FlavaImageModel model (with random weights) from the style configuration
model = FlavaImageModel(configuration)
# Accessing the model configuration
configuration = model.config
FlavaMultimodalConfig
class transformers.FlavaMultimodalConfig
(
hidden_size: int = 768
num_hidden_layers: int = 6
num_attention_heads: int = 12
intermediate_size: int = 3072
hidden_act: str = 'gelu'
hidden_dropout_prob: float = 0.0
attention_probs_dropout_prob: float = 0.0
initializer_range: float = 0.02
layer_norm_eps: float = 1e-12
qkv_bias: bool = True
use_cls_token: bool = True
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 6) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries, keys and values.
use_cls_token (bool, optional, defaults to True) —
Whether to use an extra CLS token for multimodal settings. Usually needed by the FLAVA model.
This is the configuration class to store the configuration of a FlavaMultimodalModel. It is used to instantiate
a FLAVA model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA
facebook/flava-full architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import FlavaMultimodalConfig, FlavaMultimodalModel
# Initializing a FlavaMultimodalConfig with facebook/flava-full style configuration
configuration = FlavaMultimodalConfig()
# Initializing a FlavaMultimodalModel model (with random weights) from the style configuration
model = FlavaMultimodalModel(configuration)
# Accessing the model configuration
configuration = model.config
FlavaImageCodebookConfig
class transformers.FlavaImageCodebookConfig
(
num_groups: int = 4
input_channels: int = 3
num_blocks_per_group: int = 2
hidden_size: int = 256
vocab_size: int = 8192
freeze: bool = True
initializer_range: float = 0.02
**kwargs
)
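No example is rendered for this class, so a minimal sketch follows, mirroring the other configuration examples (the defaults are assumed to correspond to the image codebook used with facebook/flava-full):
from transformers import FlavaImageCodebookConfig, FlavaImageCodebook
# Initializing a FlavaImageCodebookConfig with default values
configuration = FlavaImageCodebookConfig()
# Initializing a FlavaImageCodebook (with random weights) from that configuration
model = FlavaImageCodebook(configuration)
# Accessing the model configuration
configuration = model.config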
FlavaProcessor
class transformers.FlavaProcessor
(
image_processor = None
tokenizer = None
**kwargs
)
Parameters
image_processor (FlavaImageProcessor) — The image processor is a required input.
tokenizer (BertTokenizerFast) — The tokenizer is a required input.
Constructs a FLAVA processor which wraps a FLAVA image processor and a FLAVA tokenizer into a single processor.
FlavaProcessor offers all the functionalities of FlavaImageProcessor and BertTokenizerFast. See the
__call__() and decode() for more information.
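A minimal usage sketch (the checkpoint name and image URL are the ones used elsewhere in these docs):
from PIL import Image
import requests
from transformers import FlavaProcessor
processor = FlavaProcessor.from_pretrained("facebook/flava-full")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# Tokenizes the text and preprocesses the image in one call
inputs = processor(text=["a photo of a cat"], images=image, return_tensors="pt", padding=True)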
batch_decode
(
*args
**kwargs
)
This method forwards all its arguments to BertTokenizerFast’s batch_decode(). Please
refer to the docstring of this method for more information.
decode
(
*args
**kwargs
)
This method forwards all its arguments to BertTokenizerFast’s decode(). Please refer to
the docstring of this method for more information.
FlavaFeatureExtractor
class transformers.FlavaFeatureExtractor
(
*args
**kwargs
)
FlavaImageProcessor
class transformers.FlavaImageProcessor
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BICUBIC: 3>
do_center_crop: bool = True
crop_size: typing.Dict[str, int] = None
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.Iterable[float], NoneType] = None
image_std: typing.Union[float, typing.Iterable[float], NoneType] = None
return_image_mask: bool = False
input_size_patches: int = 14
total_mask_patches: int = 75
mask_group_min_patches: int = 16
mask_group_max_patches: typing.Optional[int] = None
mask_group_min_aspect_ratio: float = 0.3
mask_group_max_aspect_ratio: typing.Optional[float] = None
return_codebook_pixels: bool = False
codebook_do_resize: bool = True
codebook_size: typing.Dict[str, int] = None
codebook_resample: int = <Resampling.LANCZOS: 1>
codebook_do_center_crop: bool = True
codebook_crop_size: typing.Dict[str, int] = None
codebook_do_rescale: bool = True
codebook_rescale_factor: typing.Union[int, float] = 0.00392156862745098
codebook_do_map_pixels: bool = True
codebook_do_normalize: bool = True
codebook_image_mean: typing.Union[float, typing.Iterable[float], NoneType] = None
codebook_image_std: typing.Union[float, typing.Iterable[float], NoneType] = None
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the
do_resize parameter in preprocess.
size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) —
Size of the image after resizing. Can be overridden by the size parameter in preprocess.
resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) —
Resampling filter to use if resizing the image. Can be overridden by the resample parameter in
preprocess.
do_center_crop (bool, optional, defaults to True) —
Whether to center crop the images. Can be overridden by the do_center_crop parameter in preprocess.
crop_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) —
Size of image after the center crop (crop_size["height"], crop_size["width"]). Can be overridden by the
crop_size parameter in preprocess.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale
parameter in preprocess.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in
preprocess.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in preprocess.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
return_image_mask (bool, optional, defaults to False) —
Whether to return the image mask. Can be overridden by the return_image_mask parameter in preprocess.
input_size_patches (int, optional, defaults to 14) —
Number of patches in the image in height and width direction. 14x14 = 196 total patches. Can be overridden
by the input_size_patches parameter in preprocess.
total_mask_patches (int, optional, defaults to 75) —
Total number of patches that should be masked. Can be overridden by the total_mask_patches parameter in
preprocess.
mask_group_min_patches (int, optional, defaults to 16) —
Minimum number of patches that should be masked. Can be overridden by the mask_group_min_patches
parameter in preprocess.
mask_group_max_patches (int, optional) —
Maximum number of patches that should be masked. Can be overridden by the mask_group_max_patches
parameter in preprocess.
mask_group_min_aspect_ratio (float, optional, defaults to 0.3) —
Minimum aspect ratio of the mask window. Can be overridden by the mask_group_min_aspect_ratio parameter
in preprocess.
mask_group_max_aspect_ratio (float, optional) —
Maximum aspect ratio of the mask window. Can be overridden by the mask_group_max_aspect_ratio parameter
in preprocess.
codebook_do_resize (bool, optional, defaults to True) —
Whether to resize the input for codebook to a certain codebook_size. Can be overridden by the
codebook_do_resize parameter in preprocess.
codebook_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) —
Resize the input for codebook to the given size. Can be overridden by the codebook_size parameter in
preprocess.
codebook_resample (PILImageResampling, optional, defaults to PILImageResampling.LANCZOS) —
Resampling filter to use if resizing the codebook image. Can be overridden by the codebook_resample
parameter in preprocess.
codebook_do_center_crop (bool, optional, defaults to True) —
Whether to crop the input for codebook at the center. If the input size is smaller than
codebook_crop_size along any edge, the image is padded with 0’s and then center cropped. Can be
overridden by the codebook_do_center_crop parameter in preprocess.
codebook_crop_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) —
Desired output size for codebook input when applying center-cropping. Can be overridden by the
codebook_crop_size parameter in preprocess.
codebook_do_rescale (bool, optional, defaults to True) —
Whether to rescale the input for codebook by the specified scale codebook_rescale_factor. Can be
overridden by the codebook_do_rescale parameter in preprocess.
codebook_rescale_factor (int or float, optional, defaults to 1/255) —
Defines the scale factor to use if rescaling the codebook image. Can be overridden by the
codebook_rescale_factor parameter in preprocess.
codebook_do_map_pixels (bool, optional, defaults to True) —
Whether to map the pixel values of the codebook input to (1 - 2e)x + e. Can be overridden by the
codebook_do_map_pixels parameter in preprocess.
codebook_do_normalize (bool, optional, defaults to True) —
Whether or not to normalize the input for codebook with codebook_image_mean and codebook_image_std. Can
be overridden by the codebook_do_normalize parameter in preprocess.
codebook_image_mean (Optional[Union[float, Iterable[float]]], optional, defaults to [0, 0, 0]) —
The sequence of means for each channel, to be used when normalizing images for codebook. Can be overridden
by the codebook_image_mean parameter in preprocess.
codebook_image_std (Optional[Union[float, Iterable[float]]], optional, defaults to [0.5, 0.5, 0.5]) —
The sequence of standard deviations for each channel, to be used when normalizing images for codebook. Can
be overridden by the codebook_image_std parameter in preprocess.
Constructs a Flava image processor.
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: typing.Optional[bool] = None
size: typing.Dict[str, int] = None
resample: Resampling = None
do_center_crop: typing.Optional[bool] = None
crop_size: typing.Union[typing.Dict[str, int], NoneType] = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Optional[float] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
return_image_mask: typing.Optional[bool] = None
input_size_patches: typing.Optional[int] = None
total_mask_patches: typing.Optional[int] = None
mask_group_min_patches: typing.Optional[int] = None
mask_group_max_patches: typing.Optional[int] = None
mask_group_min_aspect_ratio: typing.Optional[float] = None
mask_group_max_aspect_ratio: typing.Optional[float] = None
return_codebook_pixels: typing.Optional[bool] = None
codebook_do_resize: typing.Optional[bool] = None
codebook_size: typing.Union[typing.Dict[str, int], NoneType] = None
codebook_resample: typing.Optional[int] = None
codebook_do_center_crop: typing.Optional[bool] = None
codebook_crop_size: typing.Union[typing.Dict[str, int], NoneType] = None
codebook_do_rescale: typing.Optional[bool] = None
codebook_rescale_factor: typing.Optional[float] = None
codebook_do_map_pixels: typing.Optional[bool] = None
codebook_do_normalize: typing.Optional[bool] = None
codebook_image_mean: typing.Optional[typing.Iterable[float]] = None
codebook_image_std: typing.Optional[typing.Iterable[float]] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image.
resample (int, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only
has an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the center crop. Only has an effect if do_center_crop is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values to the range [0, 1].
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation.
return_image_mask (bool, optional, defaults to self.return_image_mask) —
Whether to return the image mask.
input_size_patches (int, optional, defaults to self.input_size_patches) —
Number of patches in the image along the height and width directions.
total_mask_patches (int, optional, defaults to self.total_mask_patches) —
Total number of patches that should be masked.
mask_group_min_patches (int, optional, defaults to self.mask_group_min_patches) —
Minimum number of patches that should be masked.
mask_group_max_patches (int, optional, defaults to self.mask_group_max_patches) —
Maximum number of patches that should be masked.
mask_group_min_aspect_ratio (float, optional, defaults to self.mask_group_min_aspect_ratio) —
Minimum aspect ratio of the mask window.
mask_group_max_aspect_ratio (float, optional, defaults to self.mask_group_max_aspect_ratio) —
Maximum aspect ratio of the mask window.
return_codebook_pixels (bool, optional, defaults to self.return_codebook_pixels) —
Whether to return the codebook pixels.
codebook_do_resize (bool, optional, defaults to self.codebook_do_resize) —
Whether to resize the codebook pixels.
codebook_size (Dict[str, int], optional, defaults to self.codebook_size) —
Size of the codebook pixels.
codebook_resample (int, optional, defaults to self.codebook_resample) —
Resampling filter to use if resizing the codebook pixels. This can be one of the enum
PILImageResampling. Only has an effect if codebook_do_resize is set to True.
codebook_do_center_crop (bool, optional, defaults to self.codebook_do_center_crop) —
Whether to center crop the codebook pixels.
codebook_crop_size (Dict[str, int], optional, defaults to self.codebook_crop_size) —
Size of the center crop of the codebook pixels. Only has an effect if codebook_do_center_crop is set
to True.
codebook_do_rescale (bool, optional, defaults to self.codebook_do_rescale) —
Whether to rescale the codebook pixel values to the range [0, 1].
codebook_rescale_factor (float, optional, defaults to self.codebook_rescale_factor) —
Rescale factor to rescale the codebook pixels by if codebook_do_rescale is set to True.
codebook_do_map_pixels (bool, optional, defaults to self.codebook_do_map_pixels) —
Whether to map the codebook pixel values to (1 - 2e)x + e.
codebook_do_normalize (bool, optional, defaults to self.codebook_do_normalize) —
Whether to normalize the codebook pixels.
codebook_image_mean (float or List[float], optional, defaults to self.codebook_image_mean) —
Codebook pixels mean to normalize the codebook pixels by if codebook_do_normalize is set to True.
codebook_image_std (float or List[float], optional, defaults to self.codebook_image_std) —
Codebook pixels standard deviation to normalize the codebook pixels by if codebook_do_normalize is
set to True.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Preprocess an image or batch of images.
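A minimal sketch of calling the image processor (via __call__, which forwards to preprocess); it assumes that return_image_mask and return_codebook_pixels add bool_masked_pos and codebook_pixel_values entries to the output:
from PIL import Image
import requests
from transformers import FlavaImageProcessor
image_processor = FlavaImageProcessor.from_pretrained("facebook/flava-full")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# Also request the MIM patch mask and the codebook view of the image
outputs = image_processor(image, return_image_mask=True, return_codebook_pixels=True, return_tensors="pt")
# Expected keys: pixel_values, bool_masked_pos, codebook_pixel_values
list(outputs.keys())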
FlavaForPreTraining
class transformers.FlavaForPreTraining
(
config: FlavaConfig
image_codebook: typing.Optional[torch.nn.modules.module.Module] = None
)
Parameters
config (FlavaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
image_codebook (nn.Module) — If passed, the image codebook will be set to this module. Otherwise, it will
be initialized using the image_codebook_config defined in the config.
The FLAVA model for pretraining which outputs losses, embeddings, logits and transformer outputs.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
input_ids_masked: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
codebook_pixel_values: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
bool_masked_pos: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
image_attention_mask: typing.Optional[torch.Tensor] = None
skip_unmasked_multimodal_encoder: bool = None
mlm_labels: typing.Optional[torch.Tensor] = None
mim_labels: typing.Optional[torch.Tensor] = None
itm_labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: bool = True
return_dict: typing.Optional[bool] = None
return_loss: typing.Optional[bool] = None
)
→
transformers.models.flava.modeling_flava.FlavaForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids_masked (torch.LongTensor of shape (batch_size, text_seq_len)) —
Indices of input sequence tokens in the vocabulary. These are the masked version of input_ids, to be
used for the MLM task. Indices can be obtained using AutoTokenizer along with
DataCollatorForLanguageModeling. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details. What are input IDs?
input_ids (torch.LongTensor of shape (batch_size, text_seq_len)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input
IDs?
token_type_ids (torch.LongTensor of shape (batch_size, text_seq_len), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
FlavaImageProcessor.call() for details.
bool_masked_pos (torch.BoolTensor of shape (batch_size, image_num_patches)) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
interpolate_pos_encoding (bool, optional) —
Whether to interpolate the pre-trained position encodings.
image_attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches), optional) —
Mask to avoid performing attention on padding token indices specifically for images. Mask values selected
in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
skip_unmasked_multimodal_encoder (bool, optional) —
Skip any calculations for multimodal encoder for unmasked inputs. FLAVA pretraining doesn’t need unmasked
multimodal embeddings or outputs as of now.
mlm_labels (torch.LongTensor of shape (batch_size, text_seq_len), optional) —
Labels for computing the masked language modeling (MLM) and multimodal masked modeling losses.
Indices should be in [-100, 0, ..., text_config.vocab_size - 1] (see input_ids docstring). Tokens with
indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., text_config.vocab_size - 1].
mim_labels (torch.LongTensor of shape (batch_size, image_num_patches), optional) —
Labels for computing the image and multimodal masked modeling loss. Indices should be in [-100, 0, ..., image_config.vocab_size - 1]. Tokens with indices set to -100 are ignored (masked), the loss is only
computed for the tokens with labels in [0, ..., image_config.vocab_size - 1]. If not passed, they are
generated automatically using the image codebook assigned to the model. By default, it uses
FlavaImageCodebook. See FlavaImageCodebook to understand how to generate mim_labels.
itm_labels (torch.LongTensor of shape (batch_size, 1), optional) —
Labels for computing the image-text matching loss. 0 means the pairs don’t match and 1 means they match.
The pairs with 0 will be skipped for calculation of MMM and global contrastive losses as well.
return_loss (bool, optional, defaults to None) —
Whether to return the calculated loss or not.
attention_mask (torch.FloatTensor of shape (batch_size, text_seq_len), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.flava.modeling_flava.FlavaForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.flava.modeling_flava.FlavaForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.flava.configuration_flava.FlavaConfig'>) and inputs.
loss (torch.FloatTensor, optional, returned when return_loss is True) — Total loss calculated for this model.
loss_info (FlavaLosses) — Detailed info for FLAVA Pretraining losses. Check FlavaLosses class description for the information on
the keys.
image_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when pixel_values are present) — The image embeddings which are basically the pooled output of FlavaImageModel.
image_output (BaseModelOutputWithPooling, optional, returned when pixel_values are present) — The output of the FlavaImageModel.
text_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when input_ids are present) — The text embeddings which are basically the pooled output of FlavaTextModel.
text_output (BaseModelOutputWithPooling, optional, returned when input_ids are present) — The output of the FlavaTextModel.
multimodal_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when input_ids and pixel_values are present and skip_unmasked_multimodal_encoder is None or False) — The multimodal embeddings which are basically the pooled output of FlavaMultimodalModel.
multimodal_output (BaseModelOutputWithPooling, returned when input_ids and pixel_values are present and skip_unmasked_multimodal_encoder is None or False) — The output of the FlavaMultimodalModel.
image_masked_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when pixel_values are present) — The image embeddings which are basically the pooled output of FlavaImageModel. Uses bool_masked_pos
to create masked images.
image_masked_output (BaseModelOutputWithPooling, optional, returned when pixel_values are present) — The output of the FlavaImageModel. Uses bool_masked_pos to create masked images.
text_masked_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when input_ids_masked are present) — The text embeddings which are basically the pooled output of FlavaTextModel.
text_masked_output (BaseModelOutputWithPooling, optional, returned when input_ids_masked are present) — The output of the FlavaTextModel.
multimodal_masked_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when input_ids_masked and pixel_values are present) — The multimodal embeddings which are basically the pooled output of FlavaMultimodalModel.
multimodal_masked_output (BaseModelOutputWithPooling, returned when input_ids_masked and pixel_values are present) — The output of the FlavaMultimodalModel.
mim_logits (torch.FloatTensor of shape (batch_size, num_image_patches, image_vocab_size) or of shape (total_masked_patches, image_vocab_size), optional, returned when pixel_values are present and input_ids_masked are not) — The logits for MIM unimodal loss. Uses bool_masked_pos to get masked patches. The flattened output is
returned when bool_masked_pos has some of the patches masked.
mlm_logits (torch.FloatTensor of shape (batch_size, text_seq_length, text_vocab_size) or of shape (total_masked_seq_length, text_vocab_size), optional, returned when input_ids_masked are present and pixel_values are not) — The logits for MLM unimodal loss. The flattened output is returned when input_ids_masked has some of
the tokens masked.
itm_logits (torch.FloatTensor of shape (batch_size, 2), optional, returned when input_ids_masked and pixel_values are present) — The logits for ITM loss. Note that ITM loss is calculated on masked pairs in FLAVA.
mmm_image_logits (torch.FloatTensor of shape (batch_size, num_image_patches, image_vocab_size) or of shape (total_masked_patches, image_vocab_size), optional, returned when pixel_values and input_ids_masked are present) — The logits for MMM image multimodal loss. Uses bool_masked_pos to get masked patches. The flattened
output is returned when bool_masked_pos has some of the patches masked.
mmm_text_logits (torch.FloatTensor of shape (batch_size, text_seq_length, text_vocab_size) or of shape (total_masked_seq_length, text_vocab_size), optional, returned when pixel_values and input_ids_masked are present) — The logits for MMM text multimodal loss. The flattened output is returned when input_ids_masked has
some of the tokens masked.
contrastive_logits_per_image (torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeddings and text_embeddings but passed through FLAVA’s
image_projection and text_projection layers respectively. This represents the image-text similarity
scores. This is calculated on unmasked images and texts.
contrastive_logits_per_text (torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeddings and image_embeddings but passed through FLAVA’s
text_projection and image_projection layers respectively. This is calculated on unmasked images and
texts.
The FlavaForPreTraining forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
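The rendered example is missing here, so a minimal sketch of a pretraining forward pass follows. It assumes the processor flags return_image_mask and return_codebook_pixels produce the bool_masked_pos and codebook_pixel_values inputs expected by this head, and it simply reuses the unmasked input_ids as input_ids_masked to illustrate the call signature (a real setup would mask tokens, e.g. with a masked-LM data collator):
from PIL import Image
import requests
from transformers import AutoProcessor, FlavaForPreTraining
model = FlavaForPreTraining.from_pretrained("facebook/flava-full")
processor = AutoProcessor.from_pretrained("facebook/flava-full")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
    text=["a photo of a cat"],
    images=image,
    return_image_mask=True,
    return_codebook_pixels=True,
    return_tensors="pt",
    padding=True,
)
# Reusing the unmasked ids as the "masked" ids purely for illustration
inputs["input_ids_masked"] = inputs["input_ids"]
outputs = model(**inputs, return_loss=False)
logits_per_image = outputs.contrastive_logits_per_image  # image-text similarity scores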
FlavaModel
class transformers.FlavaModel
(
config: FlavaConfig
)
Parameters
config (FlavaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare FLAVA Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
bool_masked_pos: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
image_attention_mask: typing.Optional[torch.Tensor] = None
skip_multimodal_encoder: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: bool = True
return_dict: typing.Optional[bool] = None
)
→
transformers.models.flava.modeling_flava.FlavaModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
FlavaImageProcessor.call() for details.
bool_masked_pos (torch.BoolTensor of shape (batch_size, image_num_patches)) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
interpolate_pos_encoding (bool, optional) —
Whether to interpolate the pre-trained position encodings.
input_ids (torch.LongTensor of shape (batch_size, image_num_patches + text_seq_len)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input
IDs?
token_type_ids (torch.LongTensor of shape (batch_size, image_num_patches + text_seq_len), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches + text_seq_len), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
skip_multimodal_encoder (bool, optional) —
Skip any calculations for multimodal encoder. Useful if multimodal encoding is not going to be used.
Returns
transformers.models.flava.modeling_flava.FlavaModelOutput or tuple(torch.FloatTensor)
A transformers.models.flava.modeling_flava.FlavaModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.flava.configuration_flava.FlavaConfig'>) and inputs.
image_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when pixel_values are present) — The image embeddings which are basically the pooled output of FlavaImageModel.
image_output (BaseModelOutputWithPooling, optional, returned when pixel_values are present) — The output of the FlavaImageModel.
text_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when input_ids are present) — The text embeddings which are basically the pooled output of FlavaTextModel.
text_output (BaseModelOutputWithPooling, optional, returned when input_ids are present) — The output of the FlavaTextModel.
multimodal_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when input_ids and pixel_values are present and skip_multimodal_encoder is None or False) — The multimodal embeddings which are basically the pooled output of FlavaMultimodalModel.
multimodal_output (BaseModelOutputWithPooling, returned when input_ids and pixel_values are present and skip_multimodal_encoder is None or False) — The output of the FlavaMultimodalModel.
The FlavaModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, FlavaModel
model = FlavaModel.from_pretrained("facebook/flava-full")
processor = AutoProcessor.from_pretrained("facebook/flava-full")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.contrastive_logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
get_text_features
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, text_seq_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input
IDs?
token_type_ids (torch.LongTensor of shape (batch_size, text_seq_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
attention_mask (torch.FloatTensor of shape (batch_size, text_seq_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
The FlavaModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
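A short sketch, reusing the model and processor loaded in the FlavaModel example above (text-only inputs are assumed to be accepted by the processor):
text_inputs = processor(text=["a photo of a cat"], return_tensors="pt", padding=True)
text_features = model.get_text_features(**text_inputs)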
get_image_features
(
pixel_values: typing.Optional[torch.Tensor] = None
bool_masked_pos: typing.Optional[torch.BoolTensor] = None
interpolate_pos_encoding: typing.Optional[bool] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
FlavaImageProcessor.call() for details.
bool_masked_pos (torch.BoolTensor of shape (batch_size, image_num_patches)) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
interpolate_pos_encoding (bool, optional) —
Whether to interpolate the pre-trained position encodings.
attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
The FlavaModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
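A short sketch, again reusing the model, processor and image from the FlavaModel example above:
image_inputs = processor(images=image, return_tensors="pt")
image_features = model.get_image_features(**image_inputs)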
FlavaImageCodebook
class transformers.FlavaImageCodebook
(
config: FlavaImageCodebookConfig
**kwargs: typing.Any
)
Parameters
config (FlavaImageCodebookConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
FLAVA’s image codebook model, inspired by DALL-E’s original encoder. Outputs raw hidden states and can be used
to generate image tokens for an image based on DALL-E’s vocab. Used to generate labels for MIM. Use
get_codebook_indices to get image tokens for an image.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: FloatTensor
)
get_codebook_indices
(
pixel_values: Tensor
)
get_codebook_probs
(
pixel_values: Tensor
)
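A minimal sketch of generating image tokens for MIM labels; it assumes the facebook/flava-image-codebook checkpoint and that return_codebook_pixels adds a codebook_pixel_values entry to the image processor output:
from PIL import Image
import requests
import torch
from transformers import FlavaImageCodebook, FlavaImageProcessor
codebook = FlavaImageCodebook.from_pretrained("facebook/flava-image-codebook")
image_processor = FlavaImageProcessor.from_pretrained("facebook/flava-full")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_codebook_pixels=True, return_tensors="pt")
with torch.no_grad():
    image_tokens = codebook.get_codebook_indices(inputs["codebook_pixel_values"])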
FlavaTextModel
class transformers.FlavaTextModel
(
config: FlavaTextConfig
add_pooling_layer: bool = True
)
Parameters
config (FlavaTextConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare FLAVA Text Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, text_seq_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input
IDs?
token_type_ids (torch.LongTensor of shape (batch_size, text_seq_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
attention_mask (torch.FloatTensor of shape (batch_size, text_seq_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FlavaTextConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlavaTextModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlavaTextModel
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/flava-full")
model = FlavaTextModel.from_pretrained("facebook/flava-full")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
FlavaImageModel
class transformers.FlavaImageModel
(
config: FlavaImageConfig
add_pooling_layer: bool = True
)
Parameters
config (FlavaImageConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare FLAVA Image Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
bool_masked_pos: typing.Optional[torch.BoolTensor] = None
interpolate_pos_encoding: typing.Optional[bool] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
FlavaImageProcessor.call() for details.
bool_masked_pos (torch.BoolTensor of shape (batch_size, image_num_patches)) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
interpolate_pos_encoding (bool, optional) —
Whether to interpolate the pre-trained position encodings.
attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FlavaImageConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlavaImageModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, FlavaImageModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/flava-full")
model = FlavaImageModel.from_pretrained("facebook/flava-full")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 197, 768]
FlavaMultimodalModel
class transformers.FlavaMultimodalModel
(
config: FlavaMultimodalConfig
add_pooling_layer = True
)
Parameters
config (FlavaMultimodalConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare FLAVA Multimodal Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
hidden_states: Tensor
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
hidden_states (torch.FloatTensor of shape (batch_size, image_num_patches + text_seq_len, hidden_size)) —
The concatenated hidden states of unimodal encoders.
attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches + text_seq_len), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FlavaMultimodalConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlavaMultimodalModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import FlavaMultimodalModel
import torch
model = FlavaMultimodalModel.from_pretrained("facebook/flava-full")
# The multimodal encoder expects the concatenated hidden states of the unimodal (image and
# text) encoders rather than token IDs. For illustration we pass a dummy tensor whose last
# dimension matches the model's hidden size; 204 stands in for image_num_patches + text_seq_len.
hidden_states = torch.randn(1, 204, 768)
outputs = model(hidden_states=hidden_states)
last_hidden_states = outputs.last_hidden_state
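In practice, the concatenated unimodal hidden states come from a full FlavaModel, whose output exposes the multimodal encoder's result. A minimal sketch, reusing the cats-image dataset from the FlavaImageModel example above and assuming the multimodal_embeddings field of the FlavaModel output:
from transformers import FlavaProcessor, FlavaModel
from datasets import load_dataset
import torch
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
processor = FlavaProcessor.from_pretrained("facebook/flava-full")
model = FlavaModel.from_pretrained("facebook/flava-full")
inputs = processor(text=["a photo of two cats"], images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
# Output of the multimodal encoder over the concatenated image and text hidden states
print(outputs.multimodal_embeddings.shape)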
Chinese-CLIP
Overview
The Chinese-CLIP model was proposed in Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou.
Chinese-CLIP is an implementation of CLIP (Radford et al., 2021) on a large-scale dataset of Chinese image-text pairs. It is capable of performing cross-modal retrieval and can also serve as a vision backbone for vision tasks such as zero-shot image classification, open-domain object detection, etc. The original Chinese-CLIP code is released at this link.
The abstract from the paper is the following:
The tremendous success of CLIP (Radford et al., 2021) has promoted the research and application of contrastive learning for vision-language pretraining. In this work, we construct a large-scale dataset of image-text pairs in Chinese, where most data are retrieved from publicly available datasets, and we pretrain Chinese CLIP models on the new dataset. We develop 5 Chinese CLIP models of multiple sizes, spanning from 77 to 958 million parameters. Furthermore, we propose a two-stage pretraining method, where the model is first trained with the image encoder frozen and then trained with all parameters being optimized, to achieve enhanced model performance. Our comprehensive experiments demonstrate that Chinese CLIP can achieve the state-of-the-art performance on MUGE, Flickr30K-CN, and COCO-CN in the setups of zero-shot learning and finetuning, and it is able to achieve competitive performance in zero-shot image classification based on the evaluation on the ELEVATER benchmark (Li et al., 2022). Our codes, pretrained models, and demos have been released.
Usage
The code snippet below shows how to compute image & text features and similarities:
from PIL import Image
import requests
from transformers import ChineseCLIPProcessor, ChineseCLIPModel
model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
# Squirtle, Bulbasaur, Charmander, Pikachu in English
texts = ["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]
# compute image feature
inputs = processor(images=image, return_tensors="pt")
image_features = model.get_image_features(**inputs)
image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True) # normalize
# compute text features
inputs = processor(text=texts, padding=True, return_tensors="pt")
text_features = model.get_text_features(**inputs)
text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True) # normalize
# compute image-text similarity scores
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # probs: [[1.2686e-03, 5.4499e-02, 6.7968e-04, 9.4355e-01]]
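Continuing the snippet above, the index of the highest probability picks out the best-matching caption (a small illustrative follow-up):
# Pick the text with the highest image-text similarity
best_idx = probs.argmax(dim=1).item()
print(texts[best_idx])  # "皮卡丘" (Pikachu) for this image, per the probabilities shown above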
Currently, the following scales of pretrained Chinese-CLIP models are available on the HF Model Hub:
OFA-Sys/chinese-clip-vit-base-patch16
OFA-Sys/chinese-clip-vit-large-patch14
OFA-Sys/chinese-clip-vit-large-patch14-336px
OFA-Sys/chinese-clip-vit-huge-patch14
The Chinese-CLIP model was contributed by OFA-Sys.
ChineseCLIPConfig
class transformers.ChineseCLIPConfig
(
text_config = None
vision_config = None
projection_dim = 512
logit_scale_init_value = 2.6592
**kwargs
)
Parameters
text_config (dict, optional) —
Dictionary of configuration options used to initialize ChineseCLIPTextConfig.
vision_config (dict, optional) —
Dictionary of configuration options used to initialize ChineseCLIPVisionConfig.
projection_dim (int, optional, defaults to 512) —
Dimensionality of the text and vision projection layers.
logit_scale_init_value (float, optional, defaults to 2.6592) —
The initial value of the logit_scale parameter. The default is used as per the original ChineseCLIP
implementation.
kwargs (optional) —
Dictionary of keyword arguments.
ChineseCLIPConfig is the configuration class to store the configuration of a ChineseCLIPModel. It is used
to instantiate a Chinese-CLIP model according to the specified arguments, defining the text model and vision model
configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the
Chinese-CLIP OFA-Sys/chinese-clip-vit-base-patch16
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import ChineseCLIPConfig, ChineseCLIPModel
# Initializing a ChineseCLIPConfig with OFA-Sys/chinese-clip-vit-base-patch16 style configuration
configuration = ChineseCLIPConfig()
# Initializing a ChineseCLIPModel (with random weights) from the OFA-Sys/chinese-clip-vit-base-patch16 style configuration
model = ChineseCLIPModel(configuration)
# Accessing the model configuration
configuration = model.config
# We can also initialize a ChineseCLIPConfig from a ChineseCLIPTextConfig and a ChineseCLIPVisionConfig
# Initializing a ChineseCLIPTextConfig and ChineseCLIPVisionConfig configuration
config_text = ChineseCLIPTextConfig()
config_vision = ChineseCLIPVisionConfig()
config = ChineseCLIPConfig.from_text_vision_configs(config_text, config_vision)
from_text_vision_configs
(
text_config: ChineseCLIPTextConfig
vision_config: ChineseCLIPVisionConfig
**kwargs
)
Instantiate a ChineseCLIPConfig (or a derived class) from a Chinese-CLIP text model configuration and a
Chinese-CLIP vision model configuration.
Returns
ChineseCLIPConfig: An instance of a configuration object
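Like any configuration, the resulting ChineseCLIPConfig can be saved and reloaded with the standard PretrainedConfig methods. A minimal sketch (the local directory name is arbitrary):
from transformers import ChineseCLIPConfig, ChineseCLIPTextConfig, ChineseCLIPVisionConfig
config = ChineseCLIPConfig.from_text_vision_configs(ChineseCLIPTextConfig(), ChineseCLIPVisionConfig())
config.save_pretrained("./chinese-clip-config")  # arbitrary local directory
reloaded = ChineseCLIPConfig.from_pretrained("./chinese-clip-config")
print(reloaded.projection_dim)  # 512 by default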
ChineseCLIPTextConfig
class transformers.ChineseCLIPTextConfig
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
initializer_factor = 1.0
layer_norm_eps = 1e-12
pad_token_id = 0
position_embedding_type = 'absolute'
use_cache = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the CHINESE_CLIP model. Defines the number of different tokens that can be represented
by the inputs_ids passed when calling ChineseCLIPModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling ChineseCLIPModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
This is the configuration class to store the configuration of a ChineseCLIPModel. It is used to instantiate a
Chinese CLIP model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Chinese CLIP
OFA-Sys/chinese-clip-vit-base-patch16 (https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16) architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import ChineseCLIPTextConfig, ChineseCLIPTextModel
# Initializing a ChineseCLIPTextConfig with OFA-Sys/chinese-clip-vit-base-patch16 style configuration
configuration = ChineseCLIPTextConfig()
# Initializing a ChineseCLIPTextModel (with random weights) from the OFA-Sys/chinese-clip-vit-base-patch16 style configuration
model = ChineseCLIPTextModel(configuration)
# Accessing the model configuration
configuration = model.config
ChineseCLIPVisionConfig
class transformers.ChineseCLIPVisionConfig
(
hidden_size = 768
intermediate_size = 3072
projection_dim = 512
num_hidden_layers = 12
num_attention_heads = 12
num_channels = 3
image_size = 224
patch_size = 32
hidden_act = 'quick_gelu'
layer_norm_eps = 1e-05
attention_dropout = 0.0
initializer_range = 0.02
initializer_factor = 1.0
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 32) —
The size (resolution) of each patch.
hidden_act (str or function, optional, defaults to "quick_gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" `"quick_gelu" are supported.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (float, optional, defaults to 1.0) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
This is the configuration class to store the configuration of a ChineseCLIPModel. It is used to instantiate a
ChineseCLIP model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the ChineseCLIP
OFA-Sys/chinese-clip-vit-base-patch16 (https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16) architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import ChineseCLIPVisionConfig, ChineseCLIPVisionModel
# Initializing a ChineseCLIPVisionConfig with OFA-Sys/chinese-clip-vit-base-patch16 style configuration
configuration = ChineseCLIPVisionConfig()
# Initializing a ChineseCLIPVisionModel (with random weights) from the OFA-Sys/chinese-clip-vit-base-patch16 style configuration
model = ChineseCLIPVisionModel(configuration)
# Accessing the model configuration
configuration = model.config
ChineseCLIPImageProcessor
class transformers.ChineseCLIPImageProcessor
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BICUBIC: 3>
do_center_crop: bool = True
crop_size: typing.Dict[str, int] = None
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_convert_rgb: bool = True
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by
do_resize in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 224}) —
Size of the image after resizing. The shortest edge of the image is resized to size[“shortest_edge”], with
the longest edge resized to keep the input aspect ratio. Can be overridden by size in the preprocess
method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) —
Resampling filter to use if resizing the image. Can be overridden by resample in the preprocess method.
do_center_crop (bool, optional, defaults to True) —
Whether to center crop the image to the specified crop_size. Can be overridden by do_center_crop in the
preprocess method.
crop_size (Dict[str, int], optional, defaults to 224) —
Size of the output image after applying center_crop. Can be overridden by crop_size in the preprocess
method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by do_rescale in
the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by rescale_factor in the preprocess
method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by do_normalize in the preprocess method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
do_convert_rgb (bool, optional, defaults to True) —
Whether to convert the image to RGB.
Constructs a Chinese-CLIP image processor.
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: bool = None
size: typing.Dict[str, int] = None
resample: Resampling = None
do_center_crop: bool = None
crop_size: int = None
do_rescale: bool = None
rescale_factor: float = None
do_normalize: bool = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_convert_rgb: bool = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Optional[transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resizing. Shortest edge of the image is resized to size[“shortest_edge”], with
the longest edge resized to keep the input aspect ratio.
resample (int, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only
has an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the center crop. Only has an effect if do_center_crop is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image.
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean to use for normalization. Only has an effect if do_normalize is set to True.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation to use for normalization. Only has an effect if do_normalize is set to
True.
do_convert_rgb (bool, optional, defaults to self.do_convert_rgb) —
Whether to convert the image to RGB.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Unset: defaults to the channel dimension format of the input image.
Preprocess an image or batch of images.
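As a quick illustration of the image processor on its own, reusing the image URL from the usage example above (the printed shape assumes the default size and crop settings):
from PIL import Image
import requests
from transformers import ChineseCLIPImageProcessor
image_processor = ChineseCLIPImageProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
# Resize, center-crop, rescale and normalize the image, returning PyTorch tensors
inputs = image_processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224]) with the defaults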
ChineseCLIPFeatureExtractor
class transformers.ChineseCLIPFeatureExtractor
(
*args
**kwargs
)
ChineseCLIPProcessor
class transformers.ChineseCLIPProcessor
(
image_processor = None
tokenizer = None
**kwargs
)
Parameters
image_processor (ChineseCLIPImageProcessor) —
The image processor is a required input.
tokenizer (BertTokenizerFast) —
The tokenizer is a required input.
Constructs a Chinese-CLIP processor which wraps a Chinese-CLIP image processor and a Chinese-CLIP tokenizer into a
single processor.
ChineseCLIPProcessor offers all the functionalities of ChineseCLIPImageProcessor and BertTokenizerFast.
See the __call__() and decode() for more information.
batch_decode
(
*args
**kwargs
)
This method forwards all its arguments to BertTokenizerFast’s batch_decode(). Please
refer to the docstring of this method for more information.
decode
(
*args
**kwargs
)
This method forwards all its arguments to BertTokenizerFast’s decode(). Please refer to
the docstring of this method for more information.
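A brief sketch of the processor tying the two components together; batch_decode simply round-trips token IDs through the underlying tokenizer (checkpoint name as in the examples above):
from transformers import ChineseCLIPProcessor
processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
# Tokenize text; passing images=... in the same call would also produce pixel_values
inputs = processor(text=["皮卡丘", "小火龙"], padding=True, return_tensors="pt")
# Forwarded to BertTokenizerFast.batch_decode
print(processor.batch_decode(inputs["input_ids"], skip_special_tokens=True))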
ChineseCLIPModel
class transformers.ChineseCLIPModel
(
config: ChineseCLIPConfig
)
Parameters
config (ChineseCLIPConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
return_loss: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.chinese_clip.modeling_chinese_clip.ChineseCLIPOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See ChineseCLIPImageProcessor.call() for details.
return_loss (bool, optional) —
Whether or not to return the contrastive loss.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.chinese_clip.modeling_chinese_clip.ChineseCLIPOutput or tuple(torch.FloatTensor)
A transformers.models.chinese_clip.modeling_chinese_clip.ChineseCLIPOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.chinese_clip.configuration_chinese_clip.ChineseCLIPConfig'>) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity.
logits_per_image (torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text
similarity scores.
logits_per_text (torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image
similarity scores.
text_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of
ChineseCLIPTextModel.
image_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of
ChineseCLIPVisionModel.
text_model_output (BaseModelOutputWithPoolingAndCrossAttentions) —
The output of the ChineseCLIPTextModel.
vision_model_output (BaseModelOutputWithPoolingAndCrossAttentions) —
The output of the ChineseCLIPVisionModel.
The ChineseCLIPModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, ChineseCLIPModel
model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
processor = AutoProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities
get_text_features
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
text_features (torch.FloatTensor of shape (batch_size, output_dim))
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
text_features (torch.FloatTensor of shape (batch_size, output_dim))
The text embeddings obtained by applying the projection layer to the final [CLS] hidden state of the text
Transformer.
The ChineseCLIPModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, ChineseCLIPModel
model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
tokenizer = AutoTokenizer.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
inputs = tokenizer(["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"], padding=True, return_tensors="pt")
text_features = model.get_text_features(**inputs)
text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)
get_image_features
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
image_features (torch.FloatTensor of shape (batch_size, output_dim))
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See ChineseCLIPImageProcessor.call() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
image_features (torch.FloatTensor of shape (batch_size, output_dim))
The image embeddings obtained by applying the projection layer to the final [CLS] hidden state of the vision
Transformer.
The ChineseCLIPModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import AutoProcessor, ChineseCLIPModel
model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
processor = AutoProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
image_features = model.get_image_features(**inputs)
image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True)
ChineseCLIPTextModel
class transformers.ChineseCLIPTextModel
(
config
add_pooling_layer = True
)
Parameters
config (ChineseCLIPConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The text model from CHINESE_CLIP without any head or projection on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention is
all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder, the model needs to be initialized with the is_decoder argument of the configuration set
to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and
add_cross_attention set to True; an encoder_hidden_states is then expected as an input to the forward pass.
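A minimal sketch of that last point, configuring the text model as a decoder with cross-attention (randomly initialized; shown only to illustrate the configuration flags):
from transformers import ChineseCLIPTextConfig, ChineseCLIPTextModel
config = ChineseCLIPTextConfig(is_decoder=True, add_cross_attention=True)
model = ChineseCLIPTextModel(config)
# encoder_hidden_states (and optionally encoder_attention_mask) would then be passed to forward()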
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See ChineseCLIPImageProcessor.call() for details.
return_loss (bool, optional) —
Whether or not to return the contrastive loss.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ChineseCLIPConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The ChineseCLIPTextModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ChineseCLIPTextModel
import torch
tokenizer = AutoTokenizer.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
model = ChineseCLIPTextModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
ChineseCLIPVisionModel
class transformers.ChineseCLIPVisionModel
(
config: ChineseCLIPVisionConfig
)
Parameters
config (ChineseCLIPConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The vision model from CHINESE_CLIP without any head or projection on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using
AutoImageProcessor. See ChineseCLIPImageProcessor.call() for details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (<class 'transformers.models.chinese_clip.configuration_chinese_clip.ChineseCLIPVisionConfig'>) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ChineseCLIPVisionModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import CLIPProcessor, ChineseCLIPVisionModel
model = ChineseCLIPVisionModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
processor = CLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output # pooled CLS states
Bark
Overview
Bark is a transformer-based text-to-speech model proposed by Suno AI in suno-ai/bark.
Bark is made of 4 main models:
BarkSemanticModel (also referred to as the ‘text’ model): a causal auto-regressive transformer model that takes as input tokenized text, and predicts semantic text tokens that capture the meaning of the text.
BarkCoarseModel (also referred to as the ‘coarse acoustics’ model): a causal autoregressive transformer that takes as input the results of the BarkSemanticModel. It aims at predicting the first two audio codebooks necessary for EnCodec.
BarkFineModel (the ‘fine acoustics’ model), this time a non-causal autoencoder transformer, which iteratively predicts the last codebooks based on the sum of the previous codebook embeddings.
Having predicted all the codebook channels, Bark uses the EncodecModel to decode the output audio array.
It should be noted that each of the first three modules can support conditional speaker embeddings to condition the output sound according to a specific predefined voice.
Tips:
Suno offers a library of voice presets in a number of languages here.
These presets are also uploaded in the hub here or here.
from transformers import AutoProcessor, BarkModel
processor = AutoProcessor.from_pretrained("suno/bark")
model = BarkModel.from_pretrained("suno/bark")
voice_preset = "v2/en_speaker_6"
inputs = processor("Hello, my dog is cute", voice_preset=voice_preset)
audio_array = model.generate(**inputs)
audio_array = audio_array.cpu().numpy().squeeze()
Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects.
# Multilingual speech - simplified Chinese
inputs = processor("惊人的!我会说中文")
# Multilingual speech - French - let's use a voice_preset as well
inputs = processor("Incroyable! Je peux générer du son.", voice_preset="fr_speaker_5")
# Bark can also generate music. You can help it out by adding music notes around your lyrics.
inputs = processor("♪ Hello, my dog is cute ♪")
audio_array = model.generate(**inputs)
audio_array = audio_array.cpu().numpy().squeeze()
The model can also produce nonverbal communications like laughing, sighing and crying.
# Adding non-speech cues to the input text
inputs = processor("Hello uh ... [clears throat], my dog is cute [laughter]")
audio_array = model.generate(**inputs)
audio_array = audio_array.cpu().numpy().squeeze()
To save the audio, simply take the sample rate from the model config and use a scipy utility:
from scipy.io.wavfile import write as write_wav
# save audio to disk, but first take the sample rate from the model config
sample_rate = model.generation_config.sample_rate
write_wav("bark_generation.wav", sample_rate, audio_array)
This model was contributed by Yoach Lacombe (ylacombe) and Sanchit Gandhi (sanchit-gandhi).
The original code can be found here.
BarkConfig
class transformers.BarkConfig
(
semantic_config: typing.Dict = None
coarse_acoustics_config: typing.Dict = None
fine_acoustics_config: typing.Dict = None
codec_config: typing.Dict = None
initializer_range = 0.02
**kwargs
)
Parameters
semantic_config (BarkSemanticConfig, optional) —
Configuration of the underlying semantic sub-model.
coarse_acoustics_config (BarkCoarseConfig, optional) —
Configuration of the underlying coarse acoustics sub-model.
fine_acoustics_config (BarkFineConfig, optional) —
Configuration of the underlying fine acoustics sub-model.
codec_config (AutoConfig, optional) —
Configuration of the underlying codec sub-model.
This is the configuration class to store the configuration of a BarkModel. It is used to instantiate a Bark
model according to the specified sub-models configurations, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the Bark
suno/bark architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
from_sub_model_configs
(
semantic_config: BarkSemanticConfig
coarse_acoustics_config: BarkCoarseConfig
fine_acoustics_config: BarkFineConfig
codec_config: AutoConfig
**kwargs
)
→
BarkConfig
Returns
BarkConfig
An instance of a configuration object
Instantiate a BarkConfig (or a derived class) from Bark sub-model configurations.
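A minimal sketch of composing a BarkConfig from default sub-model configurations; EncodecConfig is assumed as the codec configuration, matching the suno/bark setup:
from transformers import BarkConfig, BarkSemanticConfig, BarkCoarseConfig, BarkFineConfig, EncodecConfig
config = BarkConfig.from_sub_model_configs(
    semantic_config=BarkSemanticConfig(),
    coarse_acoustics_config=BarkCoarseConfig(),
    fine_acoustics_config=BarkFineConfig(),
    codec_config=EncodecConfig(),
)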
to_dict
(
)
→
Dict[str, any]
Returns
Dict[str, any]
Dictionary of all the attributes that make up this configuration instance.
Serializes this instance to a Python dictionary. Override the default to_dict().
BarkProcessor
class transformers.BarkProcessor
(
tokenizer
speaker_embeddings = None
)
Parameters
tokenizer (PreTrainedTokenizer) —
An instance of PreTrainedTokenizer.
speaker_embeddings (Dict[Dict[str]], optional, defaults to None) —
Optional nested speaker embeddings dictionary. The first level contains voice preset names (e.g
"en_speaker_4"). The second level contains "semantic_prompt", "coarse_prompt" and "fine_prompt"
embeddings. The values correspond to the path of the corresponding np.ndarray. See
here for
a list of voice_preset_names.
Constructs a Bark processor which wraps a text tokenizer and optional Bark voice presets into a single processor.
__call__
(
text = None
voice_preset = None
return_tensors = 'pt'
max_length = 256
add_special_tokens = False
return_attention_mask = True
return_token_type_ids = False
**kwargs
)
→
Tuple(BatchEncoding, BatchFeature)
Parameters
text (str, List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
is_split_into_words=True (to lift the ambiguity with a batch of sequences).
voice_preset (str, Dict[np.ndarray]) —
The voice preset, i.e. the speaker embeddings. It can either be a valid voice_preset name, e.g.
"en_speaker_1", directly a dictionary of np.ndarray embeddings for each submodel of Bark, or
a valid file name of a local .npz single voice preset.
return_tensors (str or TensorType, optional) —
If set, will return tensors of a particular framework. Acceptable values are:
'pt': Return PyTorch torch.Tensor objects.
'np': Return NumPy np.ndarray objects.
Returns
Tuple(BatchEncoding, BatchFeature)
A tuple composed of a BatchEncoding, i.e. the output of the
tokenizer, and a BatchFeature, i.e. the voice preset with the right tensor type.
Main method to prepare one or several sequence(s) for the model. This method forwards the text and kwargs
arguments to the AutoTokenizer’s __call__() to encode the text. The method also prepares a
voice preset, i.e. a dictionary of arrays that conditions Bark’s output. kwargs arguments are forwarded
to the tokenizer and to the cached_file method if voice_preset is a valid filename.
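A minimal usage sketch (the checkpoint name and voice preset below are assumptions, not guaranteed to match your setup):
from transformers import BarkProcessor
processor = BarkProcessor.from_pretrained("suno/bark-small")  # assumed checkpoint name
# Encode the text and attach the "v2/en_speaker_6" voice preset
inputs = processor("Hello, my dog is cute", voice_preset="v2/en_speaker_6")
# `inputs` can then be passed to `BarkModel.generate(**inputs)`, as in the example further below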
from_pretrained
(
pretrained_processor_name_or_path
speaker_embeddings_dict_path = 'speaker_embeddings_path.json'
**kwargs
)
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
This can be either:
a string, the model id of a pretrained BarkProcessor hosted inside a model repo on
huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or
namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
a path to a directory containing a processor saved using the save_pretrained()
method, e.g., ./my_model_directory/.
speaker_embeddings_dict_path (str, optional, defaults to "speaker_embeddings_path.json") —
The name of the .json file containing the speaker_embeddings dictionary located in
pretrained_model_name_or_path. If None, no speaker embeddings are loaded.
**kwargs —
Additional keyword arguments passed along to
~tokenization_utils_base.PreTrainedTokenizer.from_pretrained.
Instantiate a Bark processor associated with a pretrained model.
save_pretrained
(
save_directory
speaker_embeddings_dict_path = 'speaker_embeddings_path.json'
speaker_embeddings_directory = 'speaker_embeddings'
push_to_hub: bool = False
**kwargs
)
Parameters
save_directory (str or os.PathLike) —
Directory where the tokenizer files and the speaker embeddings will be saved (directory will be created
if it does not exist).
speaker_embeddings_dict_path (str, optional, defaults to "speaker_embeddings_path.json") —
The name of the .json file that will contain the speaker_embeddings nested path dictionary, if it
exists, and that will be located in pretrained_model_name_or_path/speaker_embeddings_directory.
speaker_embeddings_directory (str, optional, defaults to "speaker_embeddings/") —
The name of the folder in which the speaker_embeddings arrays will be saved.
push_to_hub (bool, optional, defaults to False) —
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with repo_id (will default to the name of save_directory in your
namespace).
kwargs —
Additional keyword arguments passed along to the push_to_hub() method.
Saves the attributes of this processor (tokenizer…) in the specified directory so that it can be reloaded
using the from_pretrained() method.
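A minimal save-and-reload sketch (the checkpoint name is an assumption):
from transformers import BarkProcessor
processor = BarkProcessor.from_pretrained("suno/bark-small")  # assumed checkpoint name
processor.save_pretrained("./bark_processor")  # writes the tokenizer files and the speaker embeddings
reloaded_processor = BarkProcessor.from_pretrained("./bark_processor")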
BarkModel
class transformers.BarkModel
(
config
)
Parameters
config (BarkConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The full Bark model, a text-to-speech model composed of 4 sub-models:
BarkSemanticModel (also referred to as the ‘text’ model): a causal autoregressive transformer model that
takes tokenized text as input and predicts semantic text tokens that capture the meaning of the text.
BarkCoarseModel (also referred to as the ‘coarse acoustics’ model): also a causal autoregressive transformer,
which takes the output of the previous model as input. It regresses the first two audio codebooks required by
EnCodec.
BarkFineModel (the ‘fine acoustics’ model): this time a non-causal autoencoder transformer, which iteratively
predicts the remaining codebooks based on the sum of the previous codebook embeddings.
Having predicted all the codebook channels of the EncodecModel, Bark uses it to decode the output audio
array.
It should be noted that each of the first three modules can support conditional speaker embeddings to condition the
output sound on a specific predefined voice.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
generate
(
input_ids: typing.Optional[torch.Tensor] = None
history_prompt: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None
**kwargs
)
→
torch.LongTensor
Parameters
input_ids (Optional[torch.Tensor] of shape (batch_size, seq_len), optional) —
Input ids. Will be truncated up to 256 tokens. Note that the output audios will be as long as the
longest generation among the batch.
history_prompt (Optional[Dict[str,torch.Tensor]], optional) —
Optional Bark speaker prompt. Note that for now, this model takes only one speaker prompt per batch.
Returns
torch.LongTensor
Output generated audio.
Generates audio from an input prompt and an additional optional Bark speaker prompt.
kwargs (optional): Remaining dictionary of keyword arguments. Keyword arguments are of two types:
Without a prefix, they will be entered as **kwargs for the generate method of each sub-model.
With a semantic_, coarse_ or fine_ prefix, they will be passed to the generate method of the
semantic, coarse and fine sub-models respectively. Prefixed keyword arguments take priority over the ones
without a prefix.
This means you can, for example, specify a generation strategy for all sub-models except one.
Example:
from transformers import AutoProcessor, BarkModel
processor = AutoProcessor.from_pretrained("ylacombe/bark-small")
model = BarkModel.from_pretrained("ylacombe/bark-small")
# To add a voice preset, you can pass `voice_preset` to `BarkProcessor.__call__(...)`
voice_preset = "v2/en_speaker_6"
inputs = processor("Hello, my dog is cute, I need him in my life", voice_preset=voice_preset)
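# `semantic_max_new_tokens` uses the `semantic_` prefix, so it is forwarded only to the semantic sub-model's generate method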
audio_array = model.generate(**inputs, semantic_max_new_tokens=100)
audio_array = audio_array.cpu().numpy().squeeze()
BarkSemanticModel
class transformers.BarkSemanticModel
(
config
)
Parameters
config (BarkSemanticConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
Bark semantic (or text) model. It shares the same architecture as the coarse model.
It is a GPT-2 like autoregressive model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.LongTensor] = None
input_embeds: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details. What are input IDs?
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
input_ids of shape (batch_size, sequence_length).
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
input_embeds (torch.FloatTensor of shape (batch_size, input_sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation.
Here, due to Bark particularities, if past_key_values is used, input_embeds will be ignored and you
have to use input_ids. If past_key_values is not used and use_cache is set to True, input_embeds
takes priority over input_ids.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
The BarkCausalModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
BarkCoarseModel
class transformers.BarkCoarseModel
(
config
)
Parameters
config (BarkCoarseConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
Bark coarse acoustics model.
It shares the same architecture as the semantic (or text) model. It is a GPT-2 like autoregressive model with a
language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.LongTensor] = None
input_embeds: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details. What are input IDs?
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
input_ids of shape (batch_size, sequence_length).
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
input_embeds (torch.FloatTensor of shape (batch_size, input_sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation.
Here, due to Bark particularities, if past_key_values is used, input_embeds will be ignored and you
have to use input_ids. If past_key_values is not used and use_cache is set to True, input_embeds
takes priority over input_ids.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
The BarkCausalModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
BarkFineModel
class transformers.BarkFineModel
(
config
)
Parameters
config (BarkFineConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
Bark fine acoustics model. It is a non-causal GPT-like model with config.n_codes_total embedding layers and
language modeling heads, one for each codebook.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
codebook_idx: int
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.LongTensor] = None
input_embeds: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
codebook_idx (int) —
Index of the codebook that will be predicted.
input_ids (torch.LongTensor of shape (batch_size, sequence_length, number_of_codebooks)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it. Initially, indices of the first two codebooks are obtained from the coarse sub-model. The rest is
predicted recursively by attending to the previously predicted channels. The model predicts on windows of
length 1024.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — NOT IMPLEMENTED YET.
input_embeds (torch.FloatTensor of shape (batch_size, input_sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. If
past_key_values is used, optionally only the last input_embeds have to be input (see
past_key_values). This is useful if you want more control over how to convert input_ids indices into
associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
The BarkFineModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
BarkCausalModel
class transformers.BarkCausalModel
(
config
)
forward
(
input_ids: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.LongTensor] = None
input_embeds: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details. What are input IDs?
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
input_ids of shape (batch_size, sequence_length).
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
input_embeds (torch.FloatTensor of shape (batch_size, input_sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation.
Here, due to Bark particularities, if past_key_values is used, input_embeds will be ignored and you
have to use input_ids. If past_key_values is not used and use_cache is set to True, input_embeds
takes priority over input_ids.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
The BarkCausalModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
BarkCoarseConfig
class transformers.BarkCoarseConfig
(
block_size = 1024
input_vocab_size = 10048
output_vocab_size = 10048
num_layers = 12
num_heads = 12
hidden_size = 768
dropout = 0.0
bias = True
initializer_range = 0.02
use_cache = True
**kwargs
)
Parameters
block_size (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
input_vocab_size (int, optional, defaults to 10_048) —
Vocabulary size of a Bark sub-model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling BarkCoarseModel. Defaults to 10_048 but should be chosen carefully with
regard to the selected sub-model.
output_vocab_size (int, optional, defaults to 10_048) —
Output vocabulary size of a Bark sub-model. Defines the number of different tokens that can be represented
by the output_ids when running the forward pass of a BarkCoarseModel. Defaults to 10_048 but should be
chosen carefully with regard to the selected sub-model.
num_layers (int, optional, defaults to 12) —
Number of hidden layers in the given sub-model.
num_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer architecture.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the architecture.
dropout (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
bias (bool, optional, defaults to True) —
Whether or not to use bias in the linear layers and layer norm layers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a BarkCoarseModel. It is used to instantiate the model
according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Bark suno/bark
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import BarkCoarseConfig, BarkCoarseModel
# Initializing a Bark sub-module style configuration
configuration = BarkCoarseConfig()
# Initializing a model (with random weights) from the suno/bark style configuration
model = BarkCoarseModel(configuration)
# Accessing the model configuration
configuration = model.config
BarkFineConfig
class transformers.BarkFineConfig
(
tie_word_embeddings = True
n_codes_total = 8
n_codes_given = 1
**kwargs
)
Parameters
block_size (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
input_vocab_size (int, optional, defaults to 10_048) —
Vocabulary size of a Bark sub-model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling BarkFineModel. Defaults to 10_048 but should be chosen carefully with
regard to the selected sub-model.
output_vocab_size (int, optional, defaults to 10_048) —
Output vocabulary size of a Bark sub-model. Defines the number of different tokens that can be represented
by the output_ids when running the forward pass of a BarkFineModel. Defaults to 10_048 but should be
chosen carefully with regard to the selected sub-model.
num_layers (int, optional, defaults to 12) —
Number of hidden layers in the given sub-model.
num_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer architecture.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the architecture.
dropout (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
bias (bool, optional, defaults to True) —
Whether or not to use bias in the linear layers and layer norm layers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
n_codes_total (int, optional, defaults to 8) —
The total number of audio codebooks predicted. Used in the fine acoustics sub-model.
n_codes_given (int, optional, defaults to 1) —
The number of audio codebooks predicted in the coarse acoustics sub-model. Used in the acoustics
sub-models.
This is the configuration class to store the configuration of a BarkFineModel. It is used to instantiate the model
according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Bark suno/bark
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import BarkFineConfig, BarkFineModel
# Initializing a Bark sub-module style configuration
configuration = BarkFineConfig()
# Initializing a model (with random weights) from the suno/bark style configuration
model = BarkFineModel(configuration)
# Accessing the model configuration
configuration = model.config
BarkSemanticConfig
class transformers.BarkSemanticConfig
(
block_size = 1024
input_vocab_size = 10048
output_vocab_size = 10048
num_layers = 12
num_heads = 12
hidden_size = 768
dropout = 0.0
bias = True
initializer_range = 0.02
use_cache = True
**kwargs
)
Parameters
block_size (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
input_vocab_size (int, optional, defaults to 10_048) —
Vocabulary size of a Bark sub-model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling BarkSemanticModel. Defaults to 10_048 but should be chosen carefully with
regard to the selected sub-model.
output_vocab_size (int, optional, defaults to 10_048) —
Output vocabulary size of a Bark sub-model. Defines the number of different tokens that can be represented
by the output_ids when running the forward pass of a BarkSemanticModel. Defaults to 10_048 but should be
chosen carefully with regard to the selected sub-model.
num_layers (int, optional, defaults to 12) —
Number of hidden layers in the given sub-model.
num_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer architecture.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the architecture.
dropout (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
bias (bool, optional, defaults to True) —
Whether or not to use bias in the linear layers and layer norm layers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a BarkSemanticModel. It is used to instantiate the model
according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the Bark suno/bark
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import BarkSemanticConfig, BarkSemanticModel
# Initializing a Bark sub-module style configuration
configuration = BarkSemanticConfig()
# Initializing a model (with random weights) from the suno/bark style configuration
model = BarkSemanticModel(configuration)
# Accessing the model configuration
configuration = model.config
mT5
Overview
The mT5 model was presented in mT5: A massively multilingual pre-trained text-to-text transformer by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya
Siddhant, Aditya Barua, Colin Raffel.
The abstract from the paper is the following:
The recent “Text-to-Text Transfer Transformer” (T5) leveraged a unified text-to-text format and scale to attain
state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a
multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We detail
the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual
benchmarks. We also describe a simple technique to prevent “accidental translation” in the zero-shot setting, where a
generative model chooses to (partially) translate its prediction into the wrong language. All of the code and model
checkpoints used in this work are publicly available.
Note: mT5 was only pre-trained on mC4 excluding any supervised training.
Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model.
Since mT5 was pre-trained in an unsupervised fashion, there’s no real advantage to using a task prefix during single-task
fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.
Google has released the following variants:
google/mt5-small
google/mt5-base
google/mt5-large
google/mt5-xl
google/mt5-xxl.
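As a minimal sketch of a single-task fine-tuning step (no task prefix is prepended, in line with the note above; the checkpoint and texts are illustrative):
from transformers import AutoTokenizer, MT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
# No task prefix: the raw source text is used directly as input
inputs = tokenizer("Das Haus ist wunderbar.", text_target="The house is wonderful.", return_tensors="pt")
loss = model(**inputs).loss
loss.backward()  # one fine-tuning step; optimizer and training loop omitted for brevity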
This model was contributed by patrickvonplaten. The original code can be
found here.
Documentation resources
Translation task guide
Summarization task guide
MT5Config
class transformers.MT5Config
(
vocab_size = 250112
d_model = 512
d_kv = 64
d_ff = 1024
num_layers = 8
num_decoder_layers = None
num_heads = 6
relative_attention_num_buckets = 32
relative_attention_max_distance = 128
dropout_rate = 0.1
layer_norm_epsilon = 1e-06
initializer_factor = 1.0
feed_forward_proj = 'gated-gelu'
is_encoder_decoder = True
use_cache = True
tokenizer_class = 'T5Tokenizer'
tie_word_embeddings = False
pad_token_id = 0
eos_token_id = 1
decoder_start_token_id = 0
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 250112) —
Vocabulary size of the T5 model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling T5Model or TFT5Model.
d_model (int, optional, defaults to 512) —
Size of the encoder layers and the pooler layer.
d_kv (int, optional, defaults to 64) —
Size of the key, query, value projections per attention head. d_kv has to be equal to d_model // num_heads.
d_ff (int, optional, defaults to 1024) —
Size of the intermediate feed forward layer in each T5Block.
num_layers (int, optional, defaults to 8) —
Number of hidden layers in the Transformer encoder.
num_decoder_layers (int, optional) —
Number of hidden layers in the Transformer decoder. Will use the same value as num_layers if not set.
num_heads (int, optional, defaults to 6) —
Number of attention heads for each attention layer in the Transformer encoder.
relative_attention_num_buckets (int, optional, defaults to 32) —
The number of buckets to use for each attention layer.
relative_attention_max_distance (int, optional, defaults to 128) —
The maximum distance of the longer sequences for the bucket separation.
dropout_rate (float, optional, defaults to 0.1) —
The ratio for all dropout layers.
layer_norm_epsilon (float, optional, defaults to 1e-6) —
The epsilon used by the layer normalization layers.
initializer_factor (float, optional, defaults to 1) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
feed_forward_proj (string, optional, defaults to "gated-gelu") —
Type of feed forward layer to be used. Should be one of "relu" or "gated-gelu".
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of an MT5Model or a TFMT5Model. It is used to
instantiate an mT5 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the mT5
google/mt5-small architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
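A minimal sketch mirroring the examples given for the other configuration classes in this document:
from transformers import MT5Config, MT5Model
# Initializing an mT5 style configuration (default values)
configuration = MT5Config()
# Initializing a model (with random weights) from that configuration
model = MT5Model(configuration)
# Accessing the model configuration
configuration = model.config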
MT5Tokenizer
class transformers.T5Tokenizer
(
vocab_file
eos_token = '</s>'
unk_token = '<unk>'
pad_token = '<pad>'
extra_ids = 100
additional_special_tokens = None
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
legacy = True
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
extra_ids (int, optional, defaults to 100) —
Number of extra ids added to the vocabulary for use as sentinels. These tokens are
accessible as “<extra_id_{%d}>” where “{%d}” is a number between 0 and extra_ids-1. These tokens can be
retrieved by calling the get_sentinel_tokens method, and their token ids by calling the
get_sentinel_token_ids method (see the example after the class description below).
additional_special_tokens (List[str], optional):
Additional special tokens used by the tokenizer.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from the all hypothesis (lattice)
using forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
legacy (bool, optional, defaults to True) —
Whether or not the legacy behaviour of the tokenizer should be used. Legacy behaviour is the behaviour from
before the merge of #24622, which includes fixes to properly handle tokens that appear after special tokens.
Construct a T5 tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
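For example, the sentinel tokens described under extra_ids can be inspected as follows (a minimal sketch; the t5-small checkpoint and its default of 100 extra ids are assumptions):
from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("t5-small")  # assumed checkpoint
sentinels = tokenizer.get_sentinel_tokens()        # the "<extra_id_{%d}>" sentinel tokens
sentinel_ids = tokenizer.get_sentinel_token_ids()  # their corresponding token ids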
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A sequence has the following format:
single sequence: X </s>
pair of sequences: A </s> B </s>
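A minimal sketch of the two formats (the t5-small checkpoint is an assumption):
from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("t5-small")  # assumed checkpoint
ids_a = tokenizer("Hello", add_special_tokens=False).input_ids
ids_b = tokenizer("world", add_special_tokens=False).input_ids
tokenizer.build_inputs_with_special_tokens(ids_a)         # single sequence: ids_a + [eos_token_id]
tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)  # pair: A </s> B </s>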
convert_tokens_to_string
(
tokens
)
Converts a sequence of tokens (strings) into a single string.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. T5 does not make
use of token type ids, therefore a list of zeros is returned.
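For illustration (a minimal sketch with arbitrary token ids; the t5-small checkpoint is an assumption):
from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("t5-small")  # assumed checkpoint
# One zero per token, including the </s> appended after each sequence
tokenizer.create_token_type_ids_from_sequences([8774], [1150])  # -> [0, 0, 0, 0]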
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
See T5Tokenizer for all details.
MT5TokenizerFast
class transformers.T5TokenizerFast
(
vocab_file = None
tokenizer_file = None
eos_token = '</s>'
unk_token = '<unk>'
pad_token = '<pad>'
extra_ids = 100
additional_special_tokens = None
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
extra_ids (int, optional, defaults to 100) —
Number of extra ids added to the vocabulary for use as sentinels. These tokens are accessible as
“<extra_id_{%d}>” where “{%d}” is a number between 0 and extra_ids-1. These tokens can be retrieved by
calling the get_sentinel_tokens method, and their token ids by calling the get_sentinel_token_ids method.
additional_special_tokens (List[str], optional) —
Additional special tokens used by the tokenizer.
Construct a “fast” T5 tokenizer (backed by HuggingFace’s tokenizers library). Based on
Unigram.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A sequence has the following format:
single sequence: X </s>
pair of sequences: A </s> B </s>
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. T5 does not make
use of token type ids, therefore a list of zeros is returned.
See T5TokenizerFast for all details.
MT5Model
class transformers.MT5Model
(
config: MT5Config
)
Parameters
config (MT5Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare MT5 Model transformer outputting raw hidden-states without any specific head on top.
The MT5 model was proposed in Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It’s an encoder decoder transformer pre-trained in a
text-to-text denoising generative setting.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Examples:
from transformers import MT5Model, AutoTokenizer
model = MT5Model.from_pretrained("google/mt5-small")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
summary = "Weiter Verhandlung in Syrien."
inputs = tokenizer(article, return_tensors="pt")
labels = tokenizer(text_target=summary, return_tensors="pt")
outputs = model(input_ids=inputs["input_ids"], decoder_input_ids=labels["input_ids"])
hidden_states = outputs.last_hidden_state
deparallelize
(
)
Moves the model to cpu from a model parallel state.
Example:
# On a 4 GPU machine with mt5-xl:
model = MT5ForConditionalGeneration.from_pretrained("Mt5-xl")
device_map = {
0: [0, 1, 2],
1: [3, 4, 5, 6, 7, 8, 9],
2: [10, 11, 12, 13, 14, 15, 16],
3: [17, 18, 19, 20, 21, 22, 23],
}
model.parallelize(device_map) # Splits the model across several devices
model.deparallelize() # Puts the model back on cpu and cleans memory by calling torch.cuda.empty_cache()
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
decoder_head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
decoder_inputs_embeds: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. MT5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
To know more on how to prepare input_ids for pretraining take a look a MT5 Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
MT5 uses the pad_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
To know more on how to prepare decoder_input_ids for pretraining take a look at MT5
Training.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
[0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MT5Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The MT5Model forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MT5Model
tokenizer = AutoTokenizer.from_pretrained("mt5-small")
model = MT5Model.from_pretrained("mt5-small")
input_ids = tokenizer(
... "Studies have been shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
# preprocess: Prepend decoder_input_ids with start token which is pad token for MT5Model.
# This is not needed for torch's MT5ForConditionalGeneration as it does this internally using labels arg.
decoder_input_ids = model._shift_right(decoder_input_ids)
# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
parallelize
(
device_map = None
)
Parameters
device_map (Dict[int, list], optional, defaults to None) —
A dictionary that maps attention modules to devices. Note that the embedding module and LMHead are always
automatically mapped to the first device (for esoteric reasons). That means that the first device should
have fewer attention modules mapped to it than other devices. For reference, the mt5 models have the
following number of attention modules:
mt5-small: 6
mt5-base: 12
mt5-large: 24
mt5-xl: 24
mt5-xxl: 24
This is an experimental feature and is subject to change at a moment’s notice.
Uses a device map to distribute attention modules of the model across several devices. If no device map is given,
it will evenly distribute blocks across all devices.
Example:
# Here is an example of a device map on a machine with 4 GPUs using mt5-xl, which has a total of 24 attention modules:
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-xl")
device_map = {
0: [0, 1, 2],
1: [3, 4, 5, 6, 7, 8, 9],
2: [10, 11, 12, 13, 14, 15, 16],
3: [17, 18, 19, 20, 21, 22, 23],
}
model.parallelize(device_map)
MT5ForConditionalGeneration
class transformers.MT5ForConditionalGeneration
(
config: MT5Config
)
Parameters
config (MT5Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MT5 Model with a language modeling head on top.
The MT5 model was proposed in Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It’s an encoder-decoder transformer pre-trained in a
text-to-text denoising generative setting.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Examples:
from transformers import MT5ForConditionalGeneration, AutoTokenizer
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
summary = "Weiter Verhandlung in Syrien."
inputs = tokenizer(article, text_target=summary, return_tensors="pt")
outputs = model(**inputs)
loss = outputs.loss
deparallelize
(
)
Moves the model to cpu from a model parallel state.
Example:
# On a 4 GPU machine with mt5-xl:
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-xl")
device_map = {
0: [0, 1, 2],
1: [3, 4, 5, 6, 7, 8, 9],
2: [10, 11, 12, 13, 14, 15, 16],
3: [17, 18, 19, 20, 21, 22, 23],
}
model.parallelize(device_map) # Splits the model across several devices
model.deparallelize() # Puts the model back on the CPU and cleans memory by calling torch.cuda.empty_cache()
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
decoder_head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. MT5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
To know more on how to prepare input_ids for pretraining, take a look at MT5 Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
MT5 uses the pad_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
To know more on how to prepare decoder_input_ids for pretraining take a look at MT5
Training.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
[0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the sequence-to-sequence language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked), the loss is only computed for
labels in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MT5Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The MT5ForConditionalGeneration forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, MT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
# training
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids
outputs = model(input_ids=input_ids, labels=labels)
loss = outputs.loss
logits = outputs.logits
# inference
input_ids = tokenizer(
... "summarize: studies have shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids # Batch size 1
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# studies have shown that owning a dog is good for you.
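The past_key_values and use_cache arguments described above make it possible to decode step by step without recomputing the decoder attention over previously generated tokens. generate() handles this internally; the snippet below is only a minimal manual sketch (the prompt and the greedy argmax choice are illustrative, not part of the library API):
import torch
from transformers import AutoTokenizer, MT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
input_ids = tokenizer("summarize: studies have shown that owning a dog is good for you", return_tensors="pt").input_ids
# MT5 starts decoding from the pad token (its decoder start token).
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
# First step: use_cache=True makes the model return past_key_values alongside the logits.
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, use_cache=True)
next_token = outputs.logits[:, -1].argmax(dim=-1, keepdim=True)
# Subsequent steps: feed only the newly generated token together with the cached key/value states.
outputs = model(
    input_ids=input_ids,
    decoder_input_ids=next_token,
    past_key_values=outputs.past_key_values,
    use_cache=True,
)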
parallelize
(
device_map = None
)
Parameters
device_map (Dict[int, list], optional, defaults to None) —
A dictionary that maps attention modules to devices. Note that the embedding module and LMHead are always
automatically mapped to the first device (for esoteric reasons). That means that the first device should
have fewer attention modules mapped to it than other devices. For reference, the mt5 models have the
following number of attention modules:
mt5-small: 6
mt5-base: 12
mt5-large: 24
mt5-xl: 24
mt5-xxl: 24
This is an experimental feature and is subject to change at a moment’s notice.
Uses a device map to distribute attention modules of the model across several devices. If no device map is given,
it will evenly distribute blocks across all devices.
Example:
# Here is an example of a device map on a machine with 4 GPUs using mt5-xl, which has a total of 24 attention modules:
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-xl")
device_map = {
0: [0, 1, 2],
1: [3, 4, 5, 6, 7, 8, 9],
2: [10, 11, 12, 13, 14, 15, 16],
3: [17, 18, 19, 20, 21, 22, 23],
}
model.parallelize(device_map)
MT5EncoderModel
class transformers.MT5EncoderModel
(
config: MT5Config
)
Parameters
config (MT5Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare MT5 Model transformer outputting encoder’s raw hidden-states without any specific head on top.
The MT5 model was proposed in Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It’s an encoder-decoder transformer pre-trained in a
text-to-text denoising generative setting.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Examples:
from transformers import MT5EncoderModel, AutoTokenizer
model = MT5EncoderModel.from_pretrained("google/mt5-small")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
input_ids = tokenizer(article, return_tensors="pt").input_ids
outputs = model(input_ids)
hidden_state = outputs.last_hidden_state
deparallelize
(
)
Moves the model to cpu from a model parallel state.
Example:
# On a 4 GPU machine with mt5-xl:
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-xl")
device_map = {
0: [0, 1, 2],
1: [3, 4, 5, 6, 7, 8, 9],
2: [10, 11, 12, 13, 14, 15, 16],
3: [17, 18, 19, 20, 21, 22, 23],
}
model.parallelize(device_map) # Splits the model across several devices
model.deparallelize() # Puts the model back on the CPU and cleans memory by calling torch.cuda.empty_cache()
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. MT5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
To know more on how to prepare input_ids for pretraining, take a look at MT5 Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MT5Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MT5EncoderModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, MT5EncoderModel
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5EncoderModel.from_pretrained("google/mt5-small")
input_ids = tokenizer(
... "Studies have been shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids # Batch size 1
outputs = model(input_ids=input_ids)
last_hidden_states = outputs.last_hidden_state
parallelize
(
device_map = None
)
Parameters
device_map (Dict[int, list], optional, defaults to None) —
A dictionary that maps attention modules to devices. Note that the embedding module and LMHead are always
automatically mapped to the first device (for esoteric reasons). That means that the first device should
have fewer attention modules mapped to it than other devices. For reference, the mt5 models have the
following number of attention modules:
mt5-small: 6
mt5-base: 12
mt5-large: 24
mt5-xl: 24
mt5-xxl: 24
This is an experimental feature and is subject to change at a moment’s notice.
Uses a device map to distribute attention modules of the model across several devices. If no device map is given,
it will evenly distribute blocks across all devices.
Example:
# Here is an example of a device map on a machine with 4 GPUs using mt5-xl, which has a total of 24 attention modules:
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-xl")
device_map = {
0: [0, 1, 2],
1: [3, 4, 5, 6, 7, 8, 9],
2: [10, 11, 12, 13, 14, 15, 16],
3: [17, 18, 19, 20, 21, 22, 23],
}
model.parallelize(device_map)
MT5ForQuestionAnswering
class transformers.MT5ForQuestionAnswering
(
config: MT5Config
)
Parameters
config (MT5Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MT5 Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers
on top of the hidden-states output to compute span start logits and span end logits).
The MT5 model was proposed in Exploring the Limits of Transfer Learning with a Unified Text-to-Text
Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It’s an encoder-decoder transformer pre-trained in a
text-to-text denoising generative setting.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
decoder_head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. MT5 is a model with relative position embeddings so you
should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
To know more on how to prepare input_ids for pretraining, take a look at MT5 Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
MT5 uses the pad_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
To know more on how to prepare decoder_input_ids for pretraining take a look at MT5
Training.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
[0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MT5Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The MT5ForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
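Example (a minimal sketch; the question/context pair is illustrative and google/mt5-small is not fine-tuned for extractive QA, so the span logits are only meaningful after fine-tuning; as with the other MT5 heads above, decoder inputs are assumed to be derived from input_ids when they are not passed explicitly):
from transformers import AutoTokenizer, MT5ForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForQuestionAnswering.from_pretrained("google/mt5-small")
question = "Wo muss weiter verhandelt werden?"
context = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
inputs = tokenizer(question, context, return_tensors="pt")
outputs = model(**inputs)
start_logits = outputs.start_logits  # shape (batch_size, sequence_length)
end_logits = outputs.end_logits  # shape (batch_size, sequence_length)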
TFMT5Model
class transformers.TFMT5Model
(
*args
**kwargs
)
This class overrides TFT5Model. Please check the superclass for the appropriate documentation alongside usage
examples.
Examples:
from transformers import TFMT5Model, AutoTokenizer
model = TFMT5Model.from_pretrained("google/mt5-small")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
summary = "Weiter Verhandlung in Syrien."
inputs = tokenizer(article, return_tensors="tf")
labels = tokenizer(text_target=summary, return_tensors="tf")
outputs = model(input_ids=inputs["input_ids"], decoder_input_ids=labels["input_ids"])
hidden_states = outputs.last_hidden_state
TFMT5ForConditionalGeneration
class transformers.TFMT5ForConditionalGeneration
(
*args
**kwargs
)
This class overrides TFT5ForConditionalGeneration. Please check the superclass for the appropriate
documentation alongside usage examples.
Examples:
from transformers import TFMT5ForConditionalGeneration, AutoTokenizer
model = TFMT5ForConditionalGeneration.from_pretrained("google/mt5-small")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
summary = "Weiter Verhandlung in Syrien."
inputs = tokenizer(article, text_target=summary, return_tensors="tf")
outputs = model(**inputs)
loss = outputs.loss
TFMT5EncoderModel
class transformers.TFMT5EncoderModel
(
*args
**kwargs
)
This class overrides TFT5EncoderModel. Please check the superclass for the appropriate documentation alongside
usage examples.
Examples:
from transformers import TFMT5EncoderModel, AutoTokenizer
model = TFMT5EncoderModel.from_pretrained("google/mt5-small")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
input_ids = tokenizer(article, return_tensors="tf").input_ids
outputs = model(input_ids)
hidden_state = outputs.last_hidden_state
FlaxMT5Model
class transformers.FlaxMT5Model
(
config: T5Config
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
This class overrides FlaxT5Model. Please check the superclass for the appropriate documentation alongside usage
examples.
Examples:
from transformers import FlaxMT5Model, AutoTokenizer
model = FlaxMT5Model.from_pretrained("google/mt5-small")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
summary = "Weiter Verhandlung in Syrien."
inputs = tokenizer(article, return_tensors="np")
decoder_input_ids = tokenizer(text_target=summary, return_tensors="np").input_ids
outputs = model(input_ids=inputs["input_ids"], decoder_input_ids=decoder_input_ids)
hidden_states = outputs.last_hidden_state
FlaxMT5ForConditionalGeneration
class transformers.FlaxMT5ForConditionalGeneration
(
config: T5Config
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
This class overrides FlaxT5ForConditionalGeneration. Please check the superclass for the appropriate
documentation alongside usage examples.
Examples:
from transformers import FlaxMT5ForConditionalGeneration, AutoTokenizer
model = FlaxMT5ForConditionalGeneration.from_pretrained("google/mt5-small")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
summary = "Weiter Verhandlung in Syrien."
inputs = tokenizer(article, return_tensors="np")
decoder_input_ids = tokenizer(text_target=summary, return_tensors="np").input_ids
outputs = model(**inputs, decoder_input_ids=decoder_input_ids)
logits = outputs.logits
FlaxMT5EncoderModel
class transformers.FlaxMT5EncoderModel
(
config: T5Config
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
gradient_checkpointing: bool = False
**kwargs
)
This class overrides FlaxT5EncoderModel. Please check the superclass for the appropriate documentation
alongside usage examples.
Examples:
from transformers import FlaxMT5EncoderModel, AutoTokenizer
model = FlaxMT5EncoderModel.from_pretrained("google/mt5-small")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
inputs = tokenizer(article, return_tensors="np")
outputs = model(input_ids=inputs["input_ids"])
hidden_states = outputs.last_hidden_state
RWKV
Overview
The RWKV model was proposed in this repo.
It suggests a tweak in the traditional Transformer attention to make it linear. This way, the model can be used as a recurrent network: passing inputs for timestamp 0 and timestamp 1 together is the same as passing inputs at timestamp 0, then inputs at timestamp 1 along with the state of timestamp 0 (see the example below).
This can be more efficient than a regular Transformer and can deal with sentences of any length (even though the model uses a fixed context length for training).
This model was contributed by sgugger.
The original code can be found here.
Example of use as an RNN:
import torch
from transformers import AutoTokenizer, RwkvConfig, RwkvModel
model = RwkvModel.from_pretrained("sgugger/rwkv-430M-pile")
tokenizer = AutoTokenizer.from_pretrained("sgugger/rwkv-430M-pile")
inputs = tokenizer("This is an example.", return_tensors="pt")
# Feed everything to the model
outputs = model(inputs["input_ids"])
output_whole = outputs.last_hidden_state
outputs = model(inputs["input_ids"][:, :2])
output_one = outputs.last_hidden_state
# Using the state computed on the first inputs, we will get the same output
outputs = model(inputs["input_ids"][:, 2:], state=outputs.state)
output_two = outputs.last_hidden_state
torch.allclose(torch.cat([output_one, output_two], dim=1), output_whole, atol=1e-5)
RwkvConfig
class transformers.RwkvConfig
(
vocab_size = 50277
context_length = 1024
hidden_size = 4096
num_hidden_layers = 32
attention_hidden_size = None
intermediate_size = None
layer_norm_epsilon = 1e-05
bos_token_id = 0
eos_token_id = 0
rescale_every = 6
tie_word_embeddings = False
use_cache = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50277) —
Vocabulary size of the RWKV model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling RwkvModel.
context_length (int, optional, defaults to 1024) —
The maximum sequence length that this model can be used with in a single forward pass (using it in RNN mode
lets you use any sequence length).
hidden_size (int, optional, defaults to 4096) —
Dimensionality of the embeddings and hidden states.
num_hidden_layers (int, optional, defaults to 32) —
Number of hidden layers in the model.
attention_hidden_size (int, optional) —
Dimensionality of the attention hidden states. Will default to hidden_size if unset.
intermediate_size (int, optional) —
Dimensionality of the inner feed-forward layers. Will default to 4 times hidden_size if unset.
layer_norm_epsilon (float, optional, defaults to 1e-5) —
The epsilon to use in the layer normalization layers.
bos_token_id (int, optional, defaults to 0) —
The id of the beginning of sentence token in the vocabulary. Defaults to 0 as RWKV uses the same tokenizer
as GPTNeoX.
eos_token_id (int, optional, defaults to 0) —
The id of the end of sentence token in the vocabulary. Defaults to 0 as RWKV uses the same tokenizer as
GPTNeoX.
rescale_every (int, optional, defaults to 6) —
At inference, the hidden states (and weights of the corresponding output layers) are divided by 2 every
rescale_every layers. If set to 0 or a negative number, no rescaling is done.
tie_word_embeddings (bool, optional, defaults to False) —
Whether or not to tie the word embeddings with the input token embeddings.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last state.
This is the configuration class to store the configuration of a RwkvModel. It is used to instantiate an RWKV
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the RWKV-4
RWKV/rwkv-4-169m-pile architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import RwkvConfig, RwkvModel
# Initializing a Rwkv configuration
configuration = RwkvConfig()
# Initializing a model (with random weights) from the configuration
model = RwkvModel(configuration)
# Accessing the model configuration
configuration = model.config
RwkvModel
class transformers.RwkvModel
(
config
)
Parameters
config (RwkvConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare RWKV Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
state: typing.Optional[typing.List[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.rwkv.modeling_rwkv.RwkvOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.LongTensor of shape (batch_size, input_ids_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
This is currently not used by RwkvModel, but will be supported in the future.
What are attention masks?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
state (tuple of five torch.FloatTensor of shape (batch_size, hidden_size, num_hidden_layers), optional) —
If passed along, the model uses the previous state in all the blocks (which will give the output for the
input_ids provided as if the model had received state_input_ids + input_ids as context).
use_cache (bool, optional) —
If set to True, the last state is returned and can be used to quickly generate the next logits.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.rwkv.modeling_rwkv.RwkvOutput or tuple(torch.FloatTensor)
A transformers.models.rwkv.modeling_rwkv.RwkvOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RwkvConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
state (list of five torch.FloatTensor of shape (batch_size, hidden_size, num_hidden_layers)) — The state of the model at the last time step. Can be used in a forward method with the next input_ids to
avoid providing the old input_ids.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RwkvModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RwkvModel
import torch
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-169m-pile")
model = RwkvModel.from_pretrained("RWKV/rwkv-4-169m-pile")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
RwkvLMHeadModel
class transformers.RwkvForCausalLM
(
config
)
Parameters
config (RwkvConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The RWKV Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
state: typing.Optional[typing.List[torch.FloatTensor]] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.rwkv.modeling_rwkv.RwkvCausalLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else
past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input
sequence tokens in the vocabulary.
If past_key_values is used, only input_ids that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.LongTensor of shape (batch_size, input_ids_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
This is currently not used by RwkvModel, but will be supported in the future.
What are attention masks?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
state (tuple of five torch.FloatTensor of shape (batch_size, hidden_size, num_hidden_layers), optional) —
If passed along, the model uses the previous state in all the blocks (which will give the output for the
input_ids provided as if the model had received state_input_ids + input_ids as context).
use_cache (bool, optional) —
If set to True, the last state is returned and can be used to quickly generate the next logits.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100
are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
Returns
transformers.models.rwkv.modeling_rwkv.RwkvCausalLMOutput or tuple(torch.FloatTensor)
A transformers.models.rwkv.modeling_rwkv.RwkvCausalLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RwkvConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
state (list of five torch.FloatTensor of shape (batch_size, hidden_size, num_hidden_layers)) — The state of the model at the last time step. Can be used in a forward method with the next input_ids to
avoid providing the old input_ids.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RwkvForCausalLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, RwkvForCausalLM
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-169m-pile")
model = RwkvForCausalLM.from_pretrained("RWKV/rwkv-4-169m-pile")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
logits = outputs.logits
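For text generation, the same model can also be used through generate(), which carries the recurrent state forward between steps. A minimal sketch (the prompt and the max_new_tokens value are illustrative):
from transformers import AutoTokenizer, RwkvForCausalLM
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-169m-pile")
model = RwkvForCausalLM.from_pretrained("RWKV/rwkv-4-169m-pile")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
generated_ids = model.generate(inputs["input_ids"], max_new_tokens=20)
print(tokenizer.decode(generated_ids[0]))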
Rwkv attention and the recurrent formulas
In a traditional auto-regressive Transformer, attention is written as
$$O = \hbox{softmax}(QK^{T} / \sqrt{d}) V$$
with $Q$, $K$ and $V$ matrices of shape seq_len x hidden_size named query, key and value (they are actually bigger matrices with a batch dimension and an attention head dimension, but we are only interested in the last two, which is where the matrix product is taken, so for the sake of simplicity we only consider those two). The product $QK^{T}$ then has shape seq_len x seq_len and we can take the matrix product with $V$ to get the output $O$ of the same shape as the others.
Replacing the softmax by its value gives:
$$O_{i} = \frac{\sum_{j=1}^{i} e^{Q_{i} K_{j}^{T} / \sqrt{d}} V_{j}}{\sum_{j=1}^{i} e^{Q_{i} K_{j}^{T} / \sqrt{d}}}$$
Note that the entries in $QK^{T}$ corresponding to $j > i$ are masked (the sum stops at $j = i$) because the attention is not allowed to look at future tokens (only past ones).
In comparison, the RWKV attention is given by
$$O_{i} = \sigma(R_{i}) \frac{\sum_{j=1}^{i} e^{W_{i-j} + K_{j}} V_{j}}{\sum_{j=1}^{i} e^{W_{i-j} + K_{j}}}$$
where $R$ is a new matrix called receptance by the author, $K$ and $V$ are still the key and value ($\sigma$ here is the sigmoid function). $W$ is a new vector that represents the position of the token and is given by
$$W_{0} = u \hbox{ and } W_{k} = (k-1)w \hbox{ for } k \geq 1$$
with $u$ and $w$ learnable parameters called time_first and time_decay in the code. The numerator and denominator can both be expressed recursively. Naming them $N_{i}$ and $D_{i}$ we have:
$$N_{i} = e^{u + K_{i}} V_{i} + \hat{N}_{i} \hbox{ where } \hat{N}_{i} = e^{K_{i-1}} V_{i-1} + e^{w + K_{i-2}} V_{i-2} + \cdots + e^{(i-2)w + K_{1}} V_{1}$$
so $\hat{N}_{i}$ (called numerator_state in the code) satisfies
$$\hat{N}_{0} = 0 \hbox{ and } \hat{N}_{j+1} = e^{K_{j}} V_{j} + e^{w} \hat{N}_{j}$$
and
$$D_{i} = e^{u + K_{i}} + \hat{D}_{i} \hbox{ where } \hat{D}_{i} = e^{K_{i-1}} + e^{w + K_{i-2}} + \cdots + e^{(i-2)w + K_{1}}$$
so $\hat{D}_{i}$ (called denominator_state in the code) satisfies
$$\hat{D}_{0} = 0 \hbox{ and } \hat{D}_{j+1} = e^{K_{j}} + e^{w} \hat{D}_{j}$$
The actual recurrent formulas used are a tiny bit more complex, because for numerical stability we do not want to compute exponentials of big numbers. Usually the softmax is not computed as is: the exponential of the maximum term is divided out of the numerator and denominator:
$$\frac{e^{x_{i}}}{\sum_{j=1}^{n} e^{x_{j}}} = \frac{e^{x_{i} - M}}{\sum_{j=1}^{n} e^{x_{j} - M}}$$
with $M$ the maximum of all $x_{j}$. So here, on top of saving the numerator state ($\hat{N}$) and the denominator state ($\hat{D}$), we also keep track of the maximum of all terms encountered in the exponentials. We therefore actually use
$$\tilde{N}_{i} = e^{-M_{i}} \hat{N}_{i} \hbox{ and } \tilde{D}_{i} = e^{-M_{i}} \hat{D}_{i}$$
defined by the following recurrent formulas:
$$\tilde{N}_{0} = 0 \hbox{ and } \tilde{N}_{j+1} = e^{K_{j} - q} V_{j} + e^{w + M_{j} - q} \tilde{N}_{j} \hbox{ where } q = \max(K_{j}, w + M_{j})$$
and
$$\tilde{D}_{0} = 0 \hbox{ and } \tilde{D}_{j+1} = e^{K_{j} - q} + e^{w + M_{j} - q} \tilde{D}_{j} \hbox{ where } q = \max(K_{j}, w + M_{j})$$
and $M_{j+1} = q$. With those, we can then compute
$$N_{i} = e^{u + K_{i} - q} V_{i} + e^{M_{i} - q} \tilde{N}_{i} \hbox{ where } q = \max(u + K_{i}, M_{i})$$
and
$$D_{i} = e^{u + K_{i} - q} + e^{M_{i} - q} \tilde{D}_{i} \hbox{ where } q = \max(u + K_{i}, M_{i})$$
(both are rescaled by the same factor $e^{-q}$, which cancels in their ratio) which finally gives us
$$O_{i} = \sigma(R_{i}) \frac{N_{i}}{D_{i}}$$
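To make the recurrence concrete, here is a minimal PyTorch sketch of one step of the numerically stable update above. It follows the formulas directly; the argument names (time_first for $u$, time_decay for $w$, num_state, den_state and max_state for the running $\tilde{N}$, $\tilde{D}$ and $M$) mirror the quantities defined above, but the function itself is illustrative and is not the library's actual kernel.
import torch

def rwkv_recurrent_step(key, value, time_first, time_decay, num_state, den_state, max_state):
    # Output for the current token: N_i / D_i (the shared e^{-q} factor cancels in the ratio).
    q = torch.maximum(time_first + key, max_state)
    e1 = torch.exp(max_state - q)               # e^{M_i - q}
    e2 = torch.exp(time_first + key - q)        # e^{u + K_i - q}
    output = (e1 * num_state + e2 * value) / (e1 * den_state + e2)
    # The full layer then multiplies this by sigmoid(R_i), the receptance.

    # State update: compute the next numerator state, denominator state and running maximum.
    q = torch.maximum(key, time_decay + max_state)
    e1 = torch.exp(key - q)                     # e^{K_i - q}
    e2 = torch.exp(time_decay + max_state - q)  # e^{w + M_i - q}
    num_state = e1 * value + e2 * num_state
    den_state = e1 + e2 * den_state
    max_state = q
    return output, num_state, den_state, max_state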
XLM-ProphetNet
DISCLAIMER: If you see something strange, file a GitHub issue and assign @patrickvonplaten.
Overview
The XLM-ProphetNet model was proposed in ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training, by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei
Zhang, Ming Zhou on 13 Jan, 2020.
XLM-ProphetNet is an encoder-decoder model that can predict n future tokens for “ngram” language modeling instead of
just the next token. Its architecture is identical to ProphetNet, but the model was trained on the multi-lingual
“wiki100” Wikipedia dump.
The abstract from the paper is the following:
In this paper, we present a new sequence-to-sequence pretraining model called ProphetNet, which introduces a novel
self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of
the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by
n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time
step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent
overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale
dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for
abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new
state-of-the-art results on all these datasets compared to the models using the same scale pretraining corpus.
The authors’ code can be found here.
Tips:
XLM-ProphetNet’s model architecture and pretraining objective are the same as ProphetNet’s, but XLM-ProphetNet was pre-trained on the cross-lingual dataset XGLUE.
Documentation resources
Causal language modeling task guide
Translation task guide
Summarization task guide
XLMProphetNetConfig
class transformers.XLMProphetNetConfig
(
activation_dropout: typing.Optional[float] = 0.1
activation_function: typing.Union[str, typing.Callable, NoneType] = 'gelu'
vocab_size: typing.Optional[int] = 30522
hidden_size: typing.Optional[int] = 1024
encoder_ffn_dim: typing.Optional[int] = 4096
num_encoder_layers: typing.Optional[int] = 12
num_encoder_attention_heads: typing.Optional[int] = 16
decoder_ffn_dim: typing.Optional[int] = 4096
num_decoder_layers: typing.Optional[int] = 12
num_decoder_attention_heads: typing.Optional[int] = 16
attention_dropout: typing.Optional[float] = 0.1
dropout: typing.Optional[float] = 0.1
max_position_embeddings: typing.Optional[int] = 512
init_std: typing.Optional[float] = 0.02
is_encoder_decoder: typing.Optional[bool] = True
add_cross_attention: typing.Optional[bool] = True
decoder_start_token_id: typing.Optional[int] = 0
ngram: typing.Optional[int] = 2
num_buckets: typing.Optional[int] = 32
relative_max_distance: typing.Optional[int] = 128
disable_ngram_loss: typing.Optional[bool] = False
eps: typing.Optional[float] = 0.0
use_cache: typing.Optional[bool] = True
pad_token_id: typing.Optional[int] = 0
bos_token_id: typing.Optional[int] = 1
eos_token_id: typing.Optional[int] = 2
**kwargs
)
Parameters
activation_dropout (float, optional, defaults to 0.1) —
The dropout ratio for activations inside the fully connected layer.
activation_function (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the ProphetNet model. Defines the number of different tokens that can be represented by
the input_ids passed when calling XLMProphetNetModel.
hidden_size (int, optional, defaults to 1024) —
Dimensionality of the layers and the pooler layer.
encoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the encoder.
num_encoder_layers (int, optional, defaults to 12) —
Number of encoder layers.
num_encoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the intermediate (often named feed-forward) layer in the decoder.
num_decoder_layers (int, optional, defaults to 12) —
Number of decoder layers.
num_decoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer decoder.
attention_dropout (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
add_cross_attention (bool, optional, defaults to True) —
Whether cross-attention layers should be added to the model.
is_encoder_decoder (bool, optional, defaults to True) —
Whether this is an encoder/decoder model.
pad_token_id (int, optional, defaults to 0) —
Padding token id.
bos_token_id (int, optional, defaults to 1) —
Beginning of stream token id.
eos_token_id (int, optional, defaults to 2) —
End of stream token id.
ngram (int, optional, defaults to 2) —
Number of future tokens to predict. Set to 1 to behave like a traditional language model and predict only the
next token.
num_buckets (int, optional, defaults to 32) —
The number of buckets to use for each attention layer. This is for relative position calculation. See the
[T5 paper](https://arxiv.org/abs/1910.10683) for more details.
relative_max_distance (int, optional, defaults to 128) —
Relative distances greater than this number will be put into the last same bucket. This is for relative
position calculation. See the [T5 paper](https://arxiv.org/abs/1910.10683) for more details.
disable_ngram_loss (bool, optional, defaults to False) —
Whether to train the model to predict only the next token (disabling the n-gram loss).
eps (float, optional, defaults to 0.0) —
Controls the epsilon parameter value for label smoothing in the loss calculation. If set to 0, no label
smoothing is performed.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a XLMProphetNetModel. It is used to instantiate a
XLMProphetNet model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the XLMProphetNet
microsoft/xprophetnet-large-wiki100-cased
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
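A short usage sketch, following the usual pattern for configuration classes in the library (the model built this way is randomly initialized, not pretrained):
from transformers import XLMProphetNetConfig, XLMProphetNetModel

configuration = XLMProphetNetConfig()      # defaults similar to microsoft/xprophetnet-large-wiki100-cased
model = XLMProphetNetModel(configuration)  # randomly initialized model from that configuration
configuration = model.config               # access the model configuration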
XLMProphetNetTokenizer
class transformers.XLMProphetNetTokenizer
(
vocab_file
bos_token = '[SEP]'
eos_token = '[SEP]'
sep_token = '[SEP]'
unk_token = '[UNK]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
bos_token (str, optional, defaults to "[SEP]") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "[SEP]") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) —
Additional special tokens used by the tokenizer.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite, samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
sp_model (SentencePieceProcessor) —
The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Adapted from RobertaTokenizer and XLNetTokenizer. Based on
SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
list of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A XLMProphetNet sequence has the following format:
single sequence: X [SEP]
pair of sequences: A [SEP] B [SEP]
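For illustration, a small sketch using the generic tokenizer methods (it assumes the microsoft/xprophetnet-large-wiki100-cased checkpoint referenced above and requires the sentencepiece package):
from transformers import XLMProphetNetTokenizer

tokenizer = XLMProphetNetTokenizer.from_pretrained("microsoft/xprophetnet-large-wiki100-cased")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("How are you?"))
single = tokenizer.build_inputs_with_special_tokens(ids_a)        # X [SEP]
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)   # A [SEP] B [SEP]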
convert_tokens_to_string
(
tokens
)
Converts a sequence of tokens (strings for sub-words) into a single string.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLMProphetNet
does not make use of token type ids, therefore a list of zeros is returned.
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
XLMProphetNetModel
class transformers.XLMProphetNetModel
(
config: XLMProphetNetConfig
)
Parameters
config (XLMProphetNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare XLMProphetNet Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
Original ProphetNet code can be found here. Checkpoints were converted
from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the
file convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.Tensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
decoder_inputs_embeds: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetSeq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
XLMProphetNet uses the eos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetSeq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetSeq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMProphetNetConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, decoder_sequence_length, hidden_size)) — Sequence of main stream hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
last_hidden_state_ngram (torch.FloatTensor of shape (batch_size, ngram * decoder_sequence_length, hidden_size), optional) — Sequence of predict stream hidden-states at the output of the last layer of the decoder of the model.
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_attn_heads, decoder_sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, decoder_sequence_length, hidden_size).
Hidden-states of main stream of the decoder at the output of each layer plus the initial embedding outputs.
decoder_ngram_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, ngram * decoder_sequence_length, hidden_size).
Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding
outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
decoder_ngram_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the predict stream of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, encoder_sequence_length, decoder_sequence_length).
Attentions weights of the cross-attention layer of the decoder, after the attention softmax, used to
compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, encoder_sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, encoder_sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, encoder_sequence_length, encoder_sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The XLMProphetNetModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMProphetNetModel
tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")
model = XLMProphetNetModel.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")
input_ids = tokenizer(
... "Studies have been shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state # main stream hidden states
last_hidden_states_ngram = outputs.last_hidden_state_ngram # predict hidden states
XLMProphetNetEncoder
class transformers.XLMProphetNetEncoder
(
config: XLMProphetNetConfig
word_embeddings: Embedding = None
)
Parameters
config (XLMProphetNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The standalone encoder part of the XLMProphetNetModel.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
Original ProphetNet code can be found here. Checkpoints were converted
from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the
file convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
word_embeddings (torch.nn.Embedding of shape (config.vocab_size, config.hidden_size), optional):
The word embedding parameters. This can be used to initialize XLMProphetNetEncoder with pre-defined word
embeddings instead of randomly initialized word embeddings.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMProphetNetConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The XLMProphetNetEncoder forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMProphetNetEncoder
import torch
tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")
model = XLMProphetNetEncoder.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
XLMProphetNetDecoder
class transformers.XLMProphetNetDecoder
(
config: XLMProphetNetConfig
word_embeddings: typing.Optional[torch.nn.modules.sparse.Embedding] = None
)
Parameters
config (XLMProphetNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The standalone decoder part of the XLMProphetNetModel.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
Original ProphetNet code can be found here. Checkpoints were converted
from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the
file convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
word_embeddings (torch.nn.Embedding of shape (config.vocab_size, config.hidden_size), optional):
The word embedding parameters. This can be used to initialize XLMProphetNetDecoder with pre-defined word
embeddings instead of randomly initialized word embeddings.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetDecoderModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetDecoderModelOutput or tuple(torch.FloatTensor)
A transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetDecoderModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMProphetNetConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, decoder_sequence_length, hidden_size)) — Sequence of main stream hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
last_hidden_state_ngram (torch.FloatTensor of shape (batch_size, ngram * decoder_sequence_length, hidden_size)) — Sequence of predict stream hidden-states at the output of the last layer of the decoder of the model.
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_attn_heads, decoder_sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, decoder_sequence_length, hidden_size).
Hidden-states of main stream of the decoder at the output of each layer plus the initial embedding outputs.
ngram_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, ngram * decoder_sequence_length, hidden_size).
Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding
outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
ngram_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the predict stream of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, encoder_sequence_length, decoder_sequence_length).
Attentions weights of the cross-attention layer of the decoder, after the attention softmax, used to
compute the weighted average in the cross-attention heads.
The XLMProphetNetDecoder forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMProphetNetDecoder
import torch
tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")
model = XLMProphetNetDecoder.from_pretrained(
... "patrickvonplaten/xprophetnet-large-uncased-standalone", add_cross_attention=False
... )
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
XLMProphetNetForConditionalGeneration
class transformers.XLMProphetNetForConditionalGeneration
(
config: XLMProphetNetConfig
)
Parameters
config (XLMProphetNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The XLMProphetNet Model with a language modeling head. Can be used for sequence generation tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
Original ProphetNet code can be found here. Checkpoints were converted
from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the
file convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.Tensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
decoder_inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetSeq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
XLMProphetNet uses the eos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Labels for computing the sequence-to-sequence language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked), the loss is only computed for
labels in [0, ..., config.vocab_size - 1].
Returns
transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetSeq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetSeq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMProphetNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, decoder_sequence_length, config.vocab_size)) — Prediction scores of the main stream language modeling head (scores for each vocabulary token before
SoftMax).
logits_ngram (torch.FloatTensor of shape (batch_size, ngram * decoder_sequence_length, config.vocab_size)) — Prediction scores of the predict stream language modeling head (scores for each vocabulary token before
SoftMax).
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_attn_heads, decoder_sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, decoder_sequence_length, hidden_size).
Hidden-states of main stream of the decoder at the output of each layer plus the initial embedding outputs.
decoder_ngram_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, ngram * decoder_sequence_length, hidden_size).
Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding
outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
decoder_ngram_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the predict stream of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, encoder_sequence_length, decoder_sequence_length).
Attentions weights of the cross-attention layer of the decoder, after the attention softmax, used to
compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, encoder_sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, encoder_sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, encoder_sequence_length, encoder_sequence_length). Attentions weights of the encoder, after the attention
softmax, used to compute the weighted average in the self-attention heads.
The XLMProphetNetForConditionalGeneration forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMProphetNetForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")
model = XLMProphetNetForConditionalGeneration.from_pretrained(
... "patrickvonplaten/xprophetnet-large-uncased-standalone"
... )
input_ids = tokenizer(
... "Studies have been shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
logits_next_token = outputs.logits # logits to predict next token as usual
logits_ngram_next_tokens = outputs.logits_ngram # logits to predict 2nd, 3rd, ... next tokens
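Since the model carries a language modeling head, it also works with the generic generate() API. A minimal sketch continuing the example above (the beam-search settings are arbitrary, and the standalone checkpoint is not fine-tuned for a particular downstream task, so this only illustrates the API):
generated_ids = model.generate(input_ids, num_beams=4, max_length=20, early_stopping=True)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))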
XLMProphetNetForCausalLM
class transformers.XLMProphetNetForCausalLM
(
config: XLMProphetNetConfig
)
Parameters
config (XLMProphetNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The standalone decoder part of the XLMProphetNetModel with a language modeling head on top. The model can be used for causal language modeling.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
Original ProphetNet code can be found here. Checkpoints were converted
from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the
file convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetDecoderLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are
ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetDecoderLMOutput or tuple(torch.FloatTensor)
A transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetDecoderLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (XLMProphetNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, decoder_sequence_length, config.vocab_size)) — Prediction scores of the main stream language modeling head (scores for each vocabulary token before
SoftMax).
logits_ngram (torch.FloatTensor of shape (batch_size, ngram * decoder_sequence_length, config.vocab_size)) — Prediction scores of the predict stream language modeling head (scores for each vocabulary token before
SoftMax).
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_attn_heads, decoder_sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, decoder_sequence_length, hidden_size).
Hidden-states of main stream of the decoder at the output of each layer plus the initial embedding outputs.
ngram_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, ngram * decoder_sequence_length, hidden_size).
Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding
outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
ngram_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the predict stream of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, encoder_sequence_length, decoder_sequence_length).
Attentions weights of the cross-attention layer of the decoder, after the attention softmax, used to
compute the weighted average in the cross-attention heads.
The XLMProphetNetForCausalLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, XLMProphetNetForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")
model = XLMProphetNetForCausalLM.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")
assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# Model can also be used with EncoderDecoder framework
from transformers import BertTokenizer, EncoderDecoderModel, AutoTokenizer
import torch
tokenizer_enc = BertTokenizer.from_pretrained("bert-large-uncased")
tokenizer_dec = AutoTokenizer.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
... "bert-large-uncased", "patrickvonplaten/xprophetnet-large-uncased-standalone"
... )
ARTICLE = (
... "the us state department said wednesday it had received no "
... "formal word from bolivia that it was expelling the us ambassador there "
... "but said the charges made against him are `` baseless ."
... )
input_ids = tokenizer_enc(ARTICLE, return_tensors="pt").input_ids
labels = tokenizer_dec(
... "us rejects charges against its ambassador in bolivia", return_tensors="pt"
... ).input_ids
outputs = model(input_ids=input_ids, decoder_input_ids=labels[:, :-1], labels=labels[:, 1:])
loss = outputs.loss
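For open-ended generation with the standalone decoder, the generate() method inherited from the generation utilities can be used directly; it takes care of past_key_values caching internally. The following is a minimal sketch (not part of the original example), reusing the standalone checkpoint from the first snippet above:
from transformers import AutoTokenizer, XLMProphetNetForCausalLM
tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")
causal_lm = XLMProphetNetForCausalLM.from_pretrained("patrickvonplaten/xprophetnet-large-uncased-standalone")
prompt = tokenizer("Hello, my dog is cute", return_tensors="pt")
# greedy decoding; generate() feeds past_key_values back into the model step by step
generated_ids = causal_lm.generate(prompt.input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))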
MegatronGPT2
Overview
The MegatronGPT2 model was proposed in Megatron-LM: Training Multi-Billion Parameter Language Models Using Model
Parallelism by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley,
Jared Casper and Bryan Catanzaro.
The abstract from the paper is the following:
Recent work in language modeling demonstrates that training large transformer models advances the state of the art in
Natural Language Processing applications. However, very large models can be quite difficult to train due to memory
constraints. In this work, we present our techniques for training very large transformer models and implement a simple,
efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our
approach does not require a new compiler or library changes, is orthogonal and complimentary to pipeline model
parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We
illustrate this approach by converging transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain
15.1 PetaFLOPs across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline
that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To demonstrate that large language models can further advance
the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9
billion parameter model similar to BERT. We show that careful attention to the placement of layer normalization in
BERT-like models is critical to achieving increased performance as the model size grows. Using the GPT-2 model we
achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA
accuracy of 63.2%) datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9% compared to SOTA accuracy
of 89.4%).
Tips:
We have provided pretrained GPT2-345M checkpoints
for evaluation or for fine-tuning on downstream tasks.
To access these checkpoints, first sign up for and set up the NVIDIA GPU Cloud (NGC)
Registry CLI. Further documentation for downloading models can be found in the NGC documentation.
Alternatively, you can directly download the checkpoints using:
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O megatron_gpt2_345m_v0_0.zip
Once you have obtained the checkpoint from NVIDIA GPU Cloud (NGC), you have to convert it to a format that can easily
be loaded by the Hugging Face Transformers GPT2 implementation.
The following command allows you to do the conversion. We assume that the folder models/megatron_gpt2 contains
megatron_gpt2_345m_v0_0.zip and that the command is run from that folder:
python3 $PATH_TO_TRANSFORMERS/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py megatron_gpt2_345m_v0_0.zip
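After the conversion, the checkpoint can be loaded with the regular GPT2 classes. The snippet below is a minimal sketch; it assumes the converted config.json and pytorch_model.bin were written into models/megatron_gpt2 (check the script output for the exact location) and reuses the standard GPT-2 BPE vocabulary:
from transformers import GPT2LMHeadModel, GPT2Tokenizer
converted_dir = "models/megatron_gpt2"  # hypothetical output folder of the conversion step
model = GPT2LMHeadModel.from_pretrained(converted_dir)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # Megatron GPT2-345M uses the GPT-2 BPE vocabulary
inputs = tokenizer("Megatron-LM makes it possible to", return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))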
This model was contributed by jdemouth. The original code can be found here. That repository contains a multi-GPU and multi-node implementation of the
Megatron Language models. In particular, it contains a hybrid model parallel approach using “tensor parallel” and
“pipeline parallel” techniques.
Hubert
Overview
Hubert was proposed in HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan
Salakhutdinov, Abdelrahman Mohamed.
The abstract from the paper is the following:
Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are
multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training
phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we
propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an
offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our
approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined
acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised
clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means
teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the
state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h,
10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER
reduction on the more challenging dev-other and test-other evaluation subsets.
Tips:
Hubert is a speech model that accepts a float array corresponding to the raw waveform of the speech signal (see the sketch below).
The Hubert model was fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded
using Wav2Vec2CTCTokenizer.
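A minimal sketch of what passing a raw float array looks like in practice. The random array below merely stands in for a real recording, and the facebook/hubert-large-ls960-ft checkpoint is used only because it ships a processor (loading the bare HubertModel from it simply drops the CTC head):
import numpy as np
from transformers import AutoProcessor, HubertModel
processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertModel.from_pretrained("facebook/hubert-large-ls960-ft")
waveform = np.random.randn(16000).astype(np.float32)  # one second of synthetic 16 kHz "speech"
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
hidden_states = model(**inputs).last_hidden_state  # shape: (1, frames, hidden_size)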
This model was contributed by patrickvonplaten.
Documentation resources
Audio classification task guide
Automatic speech recognition task guide
HubertConfig
class transformers.HubertConfig
<
source
>
(
vocab_size = 32
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout = 0.1
activation_dropout = 0.1
attention_dropout = 0.1
feat_proj_layer_norm = True
feat_proj_dropout = 0.0
final_dropout = 0.1
layerdrop = 0.1
initializer_range = 0.02
layer_norm_eps = 1e-05
feat_extract_norm = 'group'
feat_extract_activation = 'gelu'
conv_dim = (512, 512, 512, 512, 512, 512, 512)
conv_stride = (5, 2, 2, 2, 2, 2, 2)
conv_kernel = (10, 3, 3, 3, 3, 2, 2)
conv_bias = False
num_conv_pos_embeddings = 128
num_conv_pos_embedding_groups = 16
do_stable_layer_norm = False
apply_spec_augment = True
mask_time_prob = 0.05
mask_time_length = 10
mask_time_min_masks = 2
mask_feature_prob = 0.0
mask_feature_length = 10
mask_feature_min_masks = 0
ctc_loss_reduction = 'sum'
ctc_zero_infinity = False
use_weighted_layer_sum = False
classifier_proj_size = 256
pad_token_id = 0
bos_token_id = 1
eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 32) —
Vocabulary size of the Hubert model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling HubertModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
final_dropout (float, optional, defaults to 0.1) —
The dropout probability for the final projection layer of HubertForCTC.
layerdrop (float, optional, defaults to 0.1) —
The LayerDrop probability. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more
details.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
feat_extract_norm (str, optional, defaults to "group") —
The norm to be applied to 1D convolutional layers in feature encoder. One of "group" for group
normalization of only the first 1D convolutional layer or "layer" for layer normalization of all 1D
convolutional layers.
feat_proj_dropout (float, optional, defaults to 0.0) —
The dropout probability for output of the feature encoder.
feat_proj_layer_norm (bool, optional, defaults to True) —
Whether to apply LayerNorm to the output of the feature encoder.
feat_extract_activation (str, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the 1D convolutional layers of the feature
extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
conv_dim (Tuple[int], optional, defaults to (512, 512, 512, 512, 512, 512, 512)) —
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
feature encoder. The length of conv_dim defines the number of 1D convolutional layers.
conv_stride (Tuple[int], optional, defaults to (5, 2, 2, 2, 2, 2, 2)) —
A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length
of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.
conv_kernel (Tuple[int], optional, defaults to (10, 3, 3, 3, 3, 2, 2)) —
A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
length of conv_kernel defines the number of convolutional layers and has to match the length of
conv_dim.
conv_bias (bool, optional, defaults to False) —
Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (int, optional, defaults to 128) —
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
embeddings layer.
num_conv_pos_embedding_groups (int, optional, defaults to 16) —
Number of groups of 1D convolutional positional embeddings layer.
do_stable_layer_norm (bool, optional, defaults to False) —
Whether to apply the stable layer norm architecture of the Transformer encoder. do_stable_layer_norm is True corresponds to applying layer norm before the attention layer, whereas do_stable_layer_norm is False corresponds to applying layer norm after the attention layer.
apply_spec_augment (bool, optional, defaults to True) —
Whether to apply SpecAugment data augmentation to the outputs of the feature encoder. For reference see
SpecAugment: A Simple Data Augmentation Method for Automatic Speech
Recognition.
mask_time_prob (float, optional, defaults to 0.05) —
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
procedure generates mask_time_prob * len(time_axis) / mask_time_length independent masks over the axis. If
reasoning from the probability of each feature vector to be chosen as the start of the vector span to be
masked, mask_time_prob should be prob_vector_start * mask_time_length. Note that overlap may decrease the
actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_time_length (int, optional, defaults to 10) —
Length of vector span along the time axis.
mask_time_min_masks (int, optional, defaults to 2) —
The minimum number of masks of length mask_time_length generated along the time axis, each time step,
irrespectively of mask_time_prob. Only relevant if mask_time_prob * len(time_axis) / mask_time_length <
mask_time_min_masks.
mask_feature_prob (float, optional, defaults to 0.0) —
Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
masking procedure generates mask_feature_prob * len(feature_axis) / mask_feature_length independent masks over
the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector
span to be masked, mask_feature_prob should be prob_vector_start * mask_feature_length. Note that overlap may
decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_feature_length (int, optional, defaults to 10) —
Length of vector span along the feature axis.
mask_feature_min_masks (int, optional, defaults to 0) —
The minimum number of masks of length mask_feature_length generated along the feature axis, each time
step, irrespectively of mask_feature_prob. Only relevant if
mask_feature_prob * len(feature_axis) / mask_feature_length < mask_feature_min_masks.
ctc_loss_reduction (str, optional, defaults to "sum") —
Specifies the reduction to apply to the output of torch.nn.CTCLoss. Only relevant when training an
instance of HubertForCTC.
ctc_zero_infinity (bool, optional, defaults to False) —
Whether to zero infinite losses and the associated gradients of torch.nn.CTCLoss. Infinite losses mainly
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of HubertForCTC.
use_weighted_layer_sum (bool, optional, defaults to False) —
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
instance of HubertForSequenceClassification.
classifier_proj_size (int, optional, defaults to 256) —
Dimensionality of the projection before token mean-pooling for classification.
This is the configuration class to store the configuration of a HubertModel. It is used to instantiate a
Hubert model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the Hubert
facebook/hubert-base-ls960 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import HubertModel, HubertConfig
# Initializing a Hubert facebook/hubert-base-ls960 style configuration
configuration = HubertConfig()
# Initializing a model from the facebook/hubert-base-ls960 style configuration
model = HubertModel(configuration)
# Accessing the model configuration
configuration = model.config
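The SpecAugment-related parameters described above can be set in the same way. A minimal sketch with purely illustrative values:
from transformers import HubertConfig, HubertModel
# Stronger time masking than the default and feature masking switched on (illustrative values only)
configuration = HubertConfig(
    apply_spec_augment=True,
    mask_time_prob=0.1,
    mask_time_length=10,
    mask_time_min_masks=2,
    mask_feature_prob=0.05,
    mask_feature_length=10,
)
model = HubertModel(configuration)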
HubertModel
class transformers.HubertModel
<
source
>
(
config: HubertConfig
)
Parameters
config (HubertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Hubert Model transformer outputting raw hidden-states without any specific head on top.
Hubert was proposed in HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden
Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia,
Ruslan Salakhutdinov, Abdelrahman Mohamed.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
mask_time_indices: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
hubert-base, attention_mask should not be passed
to avoid degraded performance when doing batched inference. For such models input_values should simply be
padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different
results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (HubertConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The HubertModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, HubertModel
from datasets import load_dataset
import soundfile as sf
processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertModel.from_pretrained("facebook/hubert-large-ls960-ft")
def map_to_array(batch):
... speech, _ = sf.read(batch["file"])
... batch["speech"] = speech
... return batch
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
input_values = processor(ds["speech"][0], return_tensors="pt").input_values # Batch size 1
hidden_states = model(input_values).last_hidden_state
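For batched inference with utterances of different lengths, the processor can pad the batch; whether an attention_mask is then returned (and should be passed to the model) depends on the checkpoint's return_attention_mask setting, as explained in the parameter description above. A minimal sketch reusing ds, processor and model from the example:
speech_batch = [ds["speech"][0], ds["speech"][1]]
inputs = processor(speech_batch, sampling_rate=16000, padding=True, return_tensors="pt")
hidden_states = model(**inputs).last_hidden_state  # shape: (2, padded_frames, hidden_size)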
HubertForCTC
class transformers.HubertForCTC
<
source
>
(
config
target_lang: typing.Optional[str] = None
)
Parameters
config (HubertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Hubert Model with a language modeling head on top for Connectionist Temporal Classification (CTC).
Hubert was proposed in HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden
Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia,
Ruslan Salakhutdinov, Abdelrahman Mohamed.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
hubert-base, attention_mask should not be passed
to avoid degraded performance when doing batched inference. For such models input_values should simply be
padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different
results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, target_length), optional) —
Labels for connectionist temporal classification. Note that target_length has to be smaller or equal to
the sequence length of the output logits. Indices are selected in [-100, 0, ..., config.vocab_size - 1].
All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (HubertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The HubertForCTC forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, HubertForCTC
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe speech
transcription = processor.batch_decode(predicted_ids)
transcription[0]
'MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'
inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids
# compute loss
loss = model(**inputs).loss
round(loss.item(), 2)
22.68
HubertForSequenceClassification
class transformers.HubertForSequenceClassification
<
source
>
(
config
)
Parameters
config (HubertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Hubert Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like
SUPERB Keyword Spotting.
Hubert was proposed in HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden
Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia,
Ruslan Salakhutdinov, Abdelrahman Mohamed.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
hubert-base, attention_mask should not be passed
to avoid degraded performance when doing batched inference. For such models input_values should simply be
padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different
results depending on whether input_values is padded or not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (HubertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The HubertForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, HubertForSequenceClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("superb/hubert-base-superb-ks")
model = HubertForSequenceClassification.from_pretrained("superb/hubert-base-superb-ks")
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.argmax(logits, dim=-1).item()
predicted_label = model.config.id2label[predicted_class_ids]
predicted_label
'_unknown_'
# compute loss - target_label is e.g. "down"
target_label = model.config.id2label[0]
inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
loss = model(**inputs).loss
round(loss.item(), 2)
8.53
TFHubertModel
class transformers.TFHubertModel
<
source
>
(
*args
**kwargs
)
Parameters
config (HubertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare TFHubert Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_values only and nothing else: model(input_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_values, attention_mask]) or model([input_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_values": input_values, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
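A minimal sketch of the equivalent call styles, assuming model is a TFHubertModel instance and input_values and attention_mask were produced by a processor with return_tensors="tf":
outputs = model(input_values)  # a single tensor
outputs = model([input_values, attention_mask])  # a list, in the order given in the docstring
outputs = model({"input_values": input_values, "attention_mask": attention_mask})  # a dict keyed by input names
outputs = model(input_values=input_values, attention_mask=attention_mask)  # keyword arguments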
call
<
source
>
(
input_values: tf.Tensor
attention_mask: tf.Tensor | None = None
token_type_ids: tf.Tensor | None = None
position_ids: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
Parameters
input_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], each example of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type tf.Tensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape ({0}), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape ({0}), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape ({0}, hidden_size), optional) —
Optionally, instead of passing input_values you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert input_values indices into associated vectors
than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (HubertConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFHubertModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, TFHubertModel
from datasets import load_dataset
import soundfile as sf
processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft")
model = TFHubertModel.from_pretrained("facebook/hubert-large-ls960-ft")
def map_to_array(batch):
... speech, _ = sf.read(batch["file"])
... batch["speech"] = speech
... return batch
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
input_values = processor(ds["speech"][0], return_tensors="tf").input_values # Batch size 1
hidden_states = model(input_values).last_hidden_state
TFHubertForCTC
class transformers.TFHubertForCTC
<
source
>
(
*args
**kwargs
)
Parameters
config (HubertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
TFHubert Model with a language modeling head on top for Connectionist Temporal Classification (CTC).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_values only and nothing else: model(input_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_values, attention_mask]) or model([input_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_values": input_values, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
<
source
>
(
input_values: tf.Tensor
attention_mask: tf.Tensor | None = None
token_type_ids: tf.Tensor | None = None
position_ids: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
inputs_embeds: tf.Tensor | None = None
output_attentions: Optional[bool] = None
labels: tf.Tensor | None = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFCausalLMOutput or tuple(tf.Tensor)
Parameters
input_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], each example of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type tf.Tensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape ({0}), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape ({0}), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape ({0}, hidden_size), optional) —
Optionally, instead of passing input_values you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert input_values indices into associated vectors
than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for connectionist temporal classification (CTC). Indices should be in [-100, 0, ..., config.vocab_size] (see input_values docstring). Tokens with indices set to -100 are ignored (masked),
the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFCausalLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFCausalLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (HubertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFHubertForCTC forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import tensorflow as tf
from transformers import AutoProcessor, TFHubertForCTC
from datasets import load_dataset
import soundfile as sf
processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft")
model = TFHubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")
def map_to_array(batch):
... speech, _ = sf.read(batch["file"])
... batch["speech"] = speech
... return batch
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
input_values = processor(ds["speech"][0], return_tensors="tf").input_values # Batch size 1
logits = model(input_values).logits
predicted_ids = tf.argmax(logits, axis=-1)
transcription = processor.decode(predicted_ids[0])
# compute loss
target_transcription = "A MAN SAID TO THE UNIVERSE SIR I EXIST"
# Pass the target transcription as text to encode labels
labels = processor(text=target_transcription, return_tensors="tf").input_ids
loss = model(input_values, labels=labels).loss
RAG
Overview
Retrieval-augmented generation (“RAG”) models combine the powers of pretrained dense retrieval (DPR) and
sequence-to-sequence models. RAG models retrieve documents, pass them to a seq2seq model, then marginalize to generate
outputs. The retriever and seq2seq modules are initialized from pretrained models, and fine-tuned jointly, allowing
both retrieval and generation to adapt to downstream tasks.
It is based on the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks by Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir
Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
The abstract from the paper is the following:
Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve
state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely
manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind
task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge
remain open research problems. Pre-trained models with a differentiable access mechanism to explicit nonparametric
memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a
general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) — models which combine pre-trained
parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a
pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a
pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages
across the whole generated sequence, the other can use different passages per token. We fine-tune and evaluate our
models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks,
outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation
tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art
parametric-only seq2seq baseline.
This model was contributed by ola13.
Tips:
Retrieval-augmented generation (“RAG”) models combine the powers of pretrained dense retrieval (DPR) and Seq2Seq models. RAG models retrieve docs, pass them to a seq2seq model, then marginalize to generate outputs. The retriever and seq2seq modules are initialized from pretrained models, and fine-tuned jointly, allowing both retrieval and generation to adapt to downstream tasks.
RagConfig
class transformers.RagConfig
(
vocab_size = None
is_encoder_decoder = True
prefix = None
bos_token_id = None
pad_token_id = None
eos_token_id = None
decoder_start_token_id = None
title_sep = ' / '
doc_sep = ' // '
n_docs = 5
max_combined_length = 300
retrieval_vector_size = 768
retrieval_batch_size = 8
dataset = 'wiki_dpr'
dataset_split = 'train'
index_name = 'compressed'
index_path = None
passages_path = None
use_dummy_dataset = False
reduce_loss = False
label_smoothing = 0.0
do_deduplication = True
exclude_bos_score = False
do_marginalize = False
output_retrieved = False
use_cache = True
forced_eos_token_id = None
**kwargs
)
Parameters
title_sep (str, optional, defaults to " / ") —
Separator inserted between the title and the text of the retrieved document when calling RagRetriever.
doc_sep (str, optional, defaults to " // ") —
Separator inserted between the text of the retrieved document and the original input when calling
RagRetriever.
n_docs (int, optional, defaults to 5) —
Number of documents to retrieve.
max_combined_length (int, optional, defaults to 300) —
Max length of contextualized input returned by __call__().
retrieval_vector_size (int, optional, defaults to 768) —
Dimensionality of the document embeddings indexed by RagRetriever.
retrieval_batch_size (int, optional, defaults to 8) —
Retrieval batch size, defined as the number of queries issued concurrently to the faiss index encapsulated by
RagRetriever.
dataset (str, optional, defaults to "wiki_dpr") —
A dataset identifier of the indexed dataset in HuggingFace Datasets (list all available datasets and ids
using datasets.list_datasets()).
dataset_split (str, optional, defaults to "train") —
Which split of the dataset to load.
index_name (str, optional, defaults to "compressed") —
The index name of the index associated with the dataset. One can choose between "legacy", "exact" and
"compressed".
index_path (str, optional) —
The path to the serialized faiss index on disk.
passages_path (str, optional) —
A path to text passages compatible with the faiss index. Required if using
LegacyIndex.
use_dummy_dataset (bool, optional, defaults to False) —
Whether to load a “dummy” variant of the dataset specified by dataset.
label_smoothing (float, optional, defaults to 0.0) —
Only relevant if return_loss is set to True. Controls the epsilon parameter value for label smoothing
in the loss calculation. If set to 0, no label smoothing is performed.
do_marginalize (bool, optional, defaults to False) —
If True, the logits are marginalized over all documents by making use of
torch.nn.functional.log_softmax.
reduce_loss (bool, optional, defaults to False) —
Whether or not to reduce the NLL loss using the torch.Tensor.sum operation.
do_deduplication (bool, optional, defaults to True) —
Whether or not to deduplicate the generations from different context documents for a given input. Has to be
set to False if used while training with distributed backend.
exclude_bos_score (bool, optional, defaults to False) —
Whether or not to disregard the BOS token when computing the loss.
output_retrieved (bool, optional, defaults to False) —
If set to True, retrieved_doc_embeds, retrieved_doc_ids, context_input_ids and
context_attention_mask are returned. See returned tensors for more detail.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
forced_eos_token_id (int, optional) —
The id of the token to force as the last generated token when max_length is reached. Usually set to
eos_token_id.
RagConfig stores the configuration of a RagModel. Configuration objects inherit from PretrainedConfig and
can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
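For instance, a minimal sketch of loading and tweaking a RAG configuration (the checkpoint name and the new n_docs value are only illustrative):
from transformers import RagConfig
# Load the configuration stored with a pretrained RAG checkpoint
config = RagConfig.from_pretrained("facebook/rag-token-nq")
# Retrieve more documents per query than the default of 5
config.n_docs = 10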
from_question_encoder_generator_configs
(
question_encoder_config: PretrainedConfig
generator_config: PretrainedConfig
**kwargs
)
→
EncoderDecoderConfig
Returns
EncoderDecoderConfig
An instance of a configuration object
Instantiate a EncoderDecoderConfig (or a derived class) from a pre-trained encoder model configuration and
decoder model configuration.
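A hedged sketch of building a combined configuration from two sub-configurations; the default DPR and BART configurations below are placeholders:
from transformers import BartConfig, DPRConfig, RagConfig
question_encoder_config = DPRConfig()
generator_config = BartConfig()
# Extra keyword arguments (e.g. n_docs) are stored on the resulting configuration
config = RagConfig.from_question_encoder_generator_configs(
...     question_encoder_config, generator_config, n_docs=5
... )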
to_dict
(
)
→
Dict[str, any]
Returns
Dict[str, any]
Dictionary of all the attributes that make up this configuration instance.
Serializes this instance to a Python dictionary. Overrides the default to_dict().
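Continuing the sketch above, serialization is a single call; the nested question_encoder and generator configurations are included in the resulting dictionary:
config_dict = config.to_dict()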
RagTokenizer
class transformers.RagTokenizer
(
question_encoder
generator
)
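RagTokenizer bundles the question encoder tokenizer and the generator tokenizer of a RAG checkpoint. A minimal sketch (the checkpoint and the question are illustrative); by default, calling the tokenizer encodes text with the question encoder tokenizer:
from transformers import RagTokenizer
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
inputs = tokenizer("How many people live in Paris?", return_tensors="pt")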
Rag specific outputs
class transformers.models.rag.modeling_rag.RetrievAugLMMarginOutput
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
doc_scores: FloatTensor = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
retrieved_doc_embeds: typing.Optional[torch.FloatTensor] = None
retrieved_doc_ids: typing.Optional[torch.LongTensor] = None
context_input_ids: typing.Optional[torch.LongTensor] = None
context_attention_mask: typing.Optional[torch.LongTensor] = None
question_encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None
question_enc_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
question_enc_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
generator_enc_last_hidden_state: typing.Optional[torch.FloatTensor] = None
generator_enc_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
generator_enc_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
generator_dec_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
generator_dec_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
generator_cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head. The score is possibly marginalized over all documents for
each vocabulary token.
doc_scores (torch.FloatTensor of shape (batch_size, config.n_docs)) —
Score between each retrieved document embeddings (see retrieved_doc_embeds) and
question_encoder_last_hidden_state.
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) —
List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains precomputed hidden-states (key and values in the attention blocks) of the decoder that can be used
(see past_key_values input) to speed up sequential decoding.
retrieved_doc_embeds (torch.FloatTensor of shape (batch_size, config.n_docs, hidden_size), optional, returned when output_retrieved=True) —
Embedded documents retrieved by the retriever. Is used with question_encoder_last_hidden_state to compute
the doc_scores.
retrieved_doc_ids (torch.LongTensor of shape (batch_size, config.n_docs), optional, returned when output_retrieved=True) —
The indexes of the embedded documents retrieved by the retriever.
context_input_ids (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Input ids post-processed from the retrieved documents and the question encoder input_ids by the retriever.
context_attention_mask (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Attention mask post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
question_encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden states at the output of the last layer of the question encoder (pooled output) of the
model.
question_enc_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings and one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden states of the question encoder at the output of each layer plus the initial embedding outputs.
question_enc_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the question encoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_enc_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the generator encoder of the model.
generator_enc_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings and one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden states of the generator encoder at the output of each layer plus the initial embedding outputs.
generator_enc_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the generator encoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_dec_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings and one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden states of the generator decoder at the output of each layer plus the initial embedding outputs.
generator_dec_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the generator decoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross-attention weights of the generator decoder, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
Base class for retriever augmented marginalized models outputs.
class transformers.models.rag.modeling_rag.RetrievAugLMOutput
(
logits: FloatTensor = None
doc_scores: FloatTensor = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
retrieved_doc_embeds: typing.Optional[torch.FloatTensor] = None
retrieved_doc_ids: typing.Optional[torch.LongTensor] = None
context_input_ids: typing.Optional[torch.LongTensor] = None
context_attention_mask: typing.Optional[torch.LongTensor] = None
question_encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None
question_enc_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
question_enc_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
generator_enc_last_hidden_state: typing.Optional[torch.FloatTensor] = None
generator_enc_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
generator_enc_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
generator_dec_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
generator_dec_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
generator_cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head. The score is possibly marginalized over all documents for
each vocabulary token.
doc_scores (torch.FloatTensor of shape (batch_size, config.n_docs)) —
Score between each retrieved document embeddings (see retrieved_doc_embeds) and
question_encoder_last_hidden_state.
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) —
List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains precomputed hidden-states (key and values in the attention blocks) of the decoder that can be used
(see past_key_values input) to speed up sequential decoding.
retrieved_doc_embeds (torch.FloatTensor of shape (batch_size, config.n_docs, hidden_size), optional, returned when output_retrieved=True) —
Embedded documents retrieved by the retriever. Is used with question_encoder_last_hidden_state to compute
the doc_scores.
retrieved_doc_ids (torch.LongTensor of shape (batch_size, config.n_docs), optional, returned when output_retrieved=True) —
The indexes of the embedded documents retrieved by the retriever.
context_input_ids (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Input ids post-processed from the retrieved documents and the question encoder input_ids by the retriever.
context_attention_mask (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Attention mask post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
question_encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden states at the output of the last layer of the question encoder (pooled output) of the
model.
question_enc_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings and one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden states of the question encoder at the output of each layer plus the initial embedding outputs.
question_enc_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the question encoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_enc_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the generator encoder of the model.
generator_enc_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings and one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden states of the generator encoder at the output of each layer plus the initial embedding outputs.
generator_enc_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the generator encoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_dec_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings and one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden states of the generator decoder at the output of each layer plus the initial embedding outputs.
generator_dec_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the generator decoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross-attention weights of the generator decoder, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
RagRetriever
class transformers.RagRetriever
(
config
question_encoder_tokenizer
generator_tokenizer
index = None
init_retrieval = True
)
Parameters
config (RagConfig) —
The configuration of the RAG model this Retriever is used with. Contains parameters indicating which
Index to build. You can load your own custom dataset with config.index_name="custom" or use a canonical
one (default) from the datasets library with config.index_name="wiki_dpr" for example.
question_encoder_tokenizer (PreTrainedTokenizer) —
The tokenizer that was used to tokenize the question. It is used to decode the question and then use the
generator_tokenizer.
generator_tokenizer (PreTrainedTokenizer) —
The tokenizer used for the generator part of the RagModel.
index (Index, optional, defaults to the one defined by the configuration) —
If specified, use this index instead of the one built using the configuration
Retriever used to get documents from vector queries. It retrieves the document embeddings as well as the document
contents, and it formats them to be used with a RagModel.
Examples:
# To load the default "wiki_dpr" dataset with 21M passages from wikipedia (index name is 'compressed' or 'exact')
from transformers import RagRetriever
retriever = RagRetriever.from_pretrained(
... "facebook/dpr-ctx_encoder-single-nq-base", dataset="wiki_dpr", index_name="compressed"
... )
# To load your own indexed dataset built with the datasets library. More info on how to build the indexed dataset in examples/rag/use_own_knowledge_dataset.py
from transformers import RagRetriever
dataset = (
... ...
... ) # dataset must be a datasets.Datasets object with columns "title", "text" and "embeddings", and it must have a faiss index
retriever = RagRetriever.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base", indexed_dataset=dataset)
# To load your own indexed dataset built with the datasets library that was saved on disk. More info in examples/rag/use_own_knowledge_dataset.py
from transformers import RagRetriever
dataset_path = "path/to/my/dataset" # dataset saved via *dataset.save_to_disk(...)*
index_path = "path/to/my/index.faiss" # faiss index saved via *dataset.get_index("embeddings").save(...)*
retriever = RagRetriever.from_pretrained(
... "facebook/dpr-ctx_encoder-single-nq-base",
... index_name="custom",
... passages_path=dataset_path,
... index_path=index_path,
... )
# To load the legacy index built originally for Rag's paper
from transformers import RagRetriever
retriever = RagRetriever.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base", index_name="legacy")
init_retrieval
(
)
Retriever initialization function. It loads the index into memory.
postprocess_docs
(
docs
input_strings
prefix
n_docs
return_tensors = None
)
→
tuple(tensors)
Parameters
docs (dict) —
Retrieved documents.
input_strings (str) —
Input strings decoded by preprocess_query.
prefix (str) —
Prefix added at the beginning of each input, typically used with T5-based models.
Returns
tuple(tensors)
a tuple consisting of two elements: contextualized input_ids and a compatible
attention_mask.
Postprocessing retrieved docs and combining them with input_strings.
retrieve
(
question_hidden_states: ndarray
n_docs: int
)
→
Tuple[np.ndarray, np.ndarray, List[dict]]
Parameters
question_hidden_states (np.ndarray of shape (batch_size, vector_size)) —
A batch of query vectors to retrieve with.
n_docs (int) —
The number of docs retrieved per query.
Returns
Tuple[np.ndarray, np.ndarray, List[dict]]
A tuple with the following objects:
retrieved_doc_embeds (np.ndarray of shape (batch_size, n_docs, dim)) — The retrieval embeddings
of the retrieved docs per query.
doc_ids (np.ndarray of shape (batch_size, n_docs)) — The ids of the documents in the index.
doc_dicts (List[dict]) — The retrieved_doc_embeds examples per query.
Retrieves documents for specified question_hidden_states.
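A rough sketch of calling retrieve() directly; the checkpoints, the dummy wiki_dpr index and the question are illustrative assumptions:
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer, RagRetriever
retriever = RagRetriever.from_pretrained(
...     "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
... )
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
inputs = tokenizer("How many people live in Paris?", return_tensors="pt")
# DPR returns the pooled question embedding used for retrieval
question_hidden_states = encoder(**inputs).pooler_output
retrieved_doc_embeds, doc_ids, doc_dicts = retriever.retrieve(
...     question_hidden_states.detach().numpy(), n_docs=5
... )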
RagModel
class transformers.RagModel
(
config: typing.Optional[transformers.configuration_utils.PretrainedConfig] = None
question_encoder: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None
generator: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None
retriever: typing.Optional[transformers.models.rag.retrieval_rag.RagRetriever] = None
**kwargs
)
Parameters
config (RagConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
question_encoder (PreTrainedModel) —
An encoder model compatible with the faiss index encapsulated by the retriever.
generator (PreTrainedModel) —
A seq2seq model used as the generator in the RAG architecture.
retriever (RagRetriever) —
A retriever class encapsulating a faiss index queried to obtain context documents for current inputs.
The RagModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
RAG is a seq2seq model which encapsulates two core components: a question encoder and a generator. During a forward
pass, we encode the input with the question encoder and pass it to the retriever to extract relevant context
documents. The documents are then prepended to the input. Such contextualized inputs are passed to the generator.
The question encoder can be any autoencoding model, preferably DPRQuestionEncoder, and the generator can be
any seq2seq model, preferably BartForConditionalGeneration.
The model can be initialized with a RagRetriever for end-to-end generation or used in combination with the
outputs of a retriever in multiple steps (see examples for more details). The model is compatible with any
autoencoding model as the question_encoder and any seq2seq model with a language model head as the generator.
It has been tested with DPRQuestionEncoder as the question_encoder and BartForConditionalGeneration or
T5ForConditionalGeneration as the generator.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
doc_scores: typing.Optional[torch.FloatTensor] = None
context_input_ids: typing.Optional[torch.LongTensor] = None
context_attention_mask = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
output_retrieved: typing.Optional[bool] = None
n_docs: typing.Optional[int] = None
)
→
transformers.models.rag.modeling_rag.RetrievAugLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. RagConfig, used to initialize the model, specifies
which generator to use; it also specifies a compatible generator tokenizer. Use that tokenizer class to
obtain the indices.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (generator_enc_last_hidden_state, optional: generator_enc_hidden_states,
optional: generator_enc_attentions). generator_enc_last_hidden_state of shape (batch_size, n_docs * sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the
generator’s encoder.
Used by the (RagModel) model during decoding.
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Provide for generation tasks. None by default, construct as per instructions for the generator model
you’re using with your RAG instance.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
past_key_values (tuple(tuple(torch.FloatTensor))) —
Tuple consists of two elements: encoder_outputs of the RAG model (see encoder_outputs) and
past_key_values of the underlying generator. Can be used to speed up decoding. past_key_values are used
in the (RagTokenForGeneration) model during decoding.
doc_scores (torch.FloatTensor of shape (batch_size, config.n_docs)) —
Score between each retrieved document embedding (see retrieved_doc_embeds) and
question_encoder_last_hidden_state. If the model is not initialized with a retriever, doc_scores
has to be provided to the forward pass. doc_scores can be computed via
question_encoder_last_hidden_state and retrieved_doc_embeds; see examples for more information.
context_input_ids (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Input IDs post-processed from the retrieved documents and the question encoder input_ids by the
retriever. If the model is not initialized with a retriever, context_input_ids has to be provided to the
forward pass. context_input_ids are returned by __call__().
context_attention_mask (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Attention mask post-processed from the retrieved documents and the question encoder input_ids by the
retriever. If the model is not initialized with a retriever, context_attention_mask has to be provided to the
forward pass. context_attention_mask are returned by __call__().
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
output_retrieved (bool, optional) —
Whether or not to return the retrieved_doc_embeds, retrieved_doc_ids, context_input_ids and
context_attention_mask. See returned tensors for more detail.
n_docs (int, optional, defaults to config.n_docs) —
Number of documents to retrieve and/or number of documents for which to generate an answer.
Returns
transformers.models.rag.modeling_rag.RetrievAugLMOutput or tuple(torch.FloatTensor)
A transformers.models.rag.modeling_rag.RetrievAugLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RagConfig) and inputs.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head. The score is possibly marginalized over all documents for
each vocabulary token.
doc_scores (torch.FloatTensor of shape (batch_size, config.n_docs)) — Score between each retrieved document embeddings (see retrieved_doc_embeds) and
question_encoder_last_hidden_state.
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains precomputed hidden-states (key and values in the attention blocks) of the decoder that can be used
(see past_key_values input) to speed up sequential decoding.
retrieved_doc_embeds (torch.FloatTensor of shape (batch_size, config.n_docs, hidden_size), optional, returned when output_retrieved=True) — Embedded documents retrieved by the retriever. Is used with question_encoder_last_hidden_state to compute
the doc_scores.
retrieved_doc_ids (torch.LongTensor of shape (batch_size, config.n_docs), optional, returned when output_retrieved=True) — The indexes of the embedded documents retrieved by the retriever.
context_input_ids (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Input ids post-processed from the retrieved documents and the question encoder input_ids by the retriever.
context_attention_mask (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Attention mask post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
question_encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden states at the output of the last layer of the question encoder (pooled output) of the
model.
question_enc_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings and one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden states of the question encoder at the output of each layer plus the initial embedding outputs.
question_enc_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the question encoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_enc_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the generator encoder of the model.
generator_enc_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings and one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden states of the generator encoder at the output of each layer plus the initial embedding outputs.
generator_enc_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the generator encoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_dec_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings and one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden states of the generator decoder at the output of each layer plus the initial embedding outputs.
generator_dec_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the generator decoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross-attention weights of the generator decoder, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The RagModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RagRetriever, RagModel
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-base")
retriever = RagRetriever.from_pretrained(
... "facebook/rag-token-base", index_name="exact", use_dummy_dataset=True
... )
# initialize with RagRetriever to do everything in one forward call
model = RagModel.from_pretrained("facebook/rag-token-base", retriever=retriever)
inputs = tokenizer("How many people live in Paris?", return_tensors="pt")
outputs = model(input_ids=inputs["input_ids"])
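A hedged continuation of the example above: re-running the forward pass with output_retrieved=True populates the retrieval tensors documented in RetrievAugLMOutput:
outputs = model(input_ids=inputs["input_ids"], output_retrieved=True)
doc_scores = outputs.doc_scores  # shape (batch_size, config.n_docs)
retrieved_doc_ids = outputs.retrieved_doc_ids  # ids of the retrieved passages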
RagSequenceForGeneration
class transformers.RagSequenceForGeneration
(
config: typing.Optional[transformers.configuration_utils.PretrainedConfig] = None
question_encoder: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None
generator: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None
retriever: typing.Optional[transformers.models.rag.retrieval_rag.RagRetriever] = None
**kwargs
)
Parameters
config (RagConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
question_encoder (PreTrainedModel) —
An encoder model compatible with the faiss index encapsulated by the retriever.
generator (PreTrainedModel) —
A seq2seq model used as the generator in the RAG architecture.
retriever (RagRetriever) —
A retriever class encapsulating a faiss index queried to obtain context documents for current inputs.
The RagSequenceForGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
A RAG-sequence model implementation. It performs RAG-sequence specific marginalization in the forward pass.
RAG is a seq2seq model which encapsulates two core components: a question encoder and a generator. During a forward
pass, we encode the input with the question encoder and pass it to the retriever to extract relevant context
documents. The documents are then prepended to the input. Such contextualized inputs are passed to the generator.
The question encoder can be any autoencoding model, preferably DPRQuestionEncoder, and the generator can be
any seq2seq model, preferably BartForConditionalGeneration.
The model can be initialized with a RagRetriever for end-to-end generation or used in combination with the
outputs of a retriever in multiple steps (see examples for more details). The model is compatible with any
autoencoding model as the question_encoder and any seq2seq model with a language model head as the generator.
It has been tested with DPRQuestionEncoder as the question_encoder and BartForConditionalGeneration or
T5ForConditionalGeneration as the generator.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
context_input_ids: typing.Optional[torch.LongTensor] = None
context_attention_mask: typing.Optional[torch.LongTensor] = None
doc_scores: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
output_retrieved: typing.Optional[bool] = None
exclude_bos_score: typing.Optional[bool] = None
reduce_loss: typing.Optional[bool] = None
labels: typing.Optional[torch.LongTensor] = None
n_docs: typing.Optional[int] = None
**kwargs
)
→
transformers.models.rag.modeling_rag.RetrievAugLMMarginOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. RagConfig, used to initialize the model, specifies
which generator to use; it also specifies a compatible generator tokenizer. Use that tokenizer class to
obtain the indices.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (generator_enc_last_hidden_state, optional: generator_enc_hidden_states,
optional: generator_enc_attentions). generator_enc_last_hidden_state of shape (batch_size, n_docs * sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the
generator’s encoder.
Used by the (RagModel) model during decoding.
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Provide for generation tasks. None by default, construct as per instructions for the generator model
you’re using with your RAG instance.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
past_key_values (tuple(tuple(torch.FloatTensor))) —
Tuple consists of two elements: encoder_outputs of the RAG model (see encoder_outputs) and
past_key_values of the underlying generator. Can be used to speed up decoding. past_key_values are used
in the (RagTokenForGeneration) model during decoding.
doc_scores (torch.FloatTensor of shape (batch_size, config.n_docs)) —
Score between each retrieved document embedding (see retrieved_doc_embeds) and
question_encoder_last_hidden_state. If the model is not initialized with a retriever, doc_scores
has to be provided to the forward pass. doc_scores can be computed via
question_encoder_last_hidden_state and retrieved_doc_embeds; see examples for more information.
context_input_ids (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Input IDs post-processed from the retrieved documents and the question encoder input_ids by the
retriever. If the model is not initialized with a retriever, context_input_ids has to be provided to the
forward pass. context_input_ids are returned by __call__().
context_attention_mask (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Attention mask post-processed from the retrieved documents and the question encoder input_ids by the
retriever. If the model is not initialized with a retriever, context_attention_mask has to be provided to the
forward pass. context_attention_mask are returned by __call__().
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
output_retrieved (bool, optional) —
Whether or not to return the retrieved_doc_embeds, retrieved_doc_ids, context_input_ids and
context_attention_mask. See returned tensors for more detail.
n_docs (int, optional, defaults to config.n_docs) —
Number of documents to retrieve and/or number of documents for which to generate an answer.
exclude_bos_score (bool, optional) —
Only relevant if labels is passed. If True, the score of the BOS token is disregarded when computing
the loss.
reduce_loss (bool, optional) —
Only relevant if labels is passed. If True, the NLL loss is reduced using the torch.Tensor.sum
operation.
kwargs (Dict[str, any], optional, defaults to {}) —
Legacy dictionary, which is required so that the model can use the generate() function.
Returns
transformers.models.rag.modeling_rag.RetrievAugLMMarginOutput or tuple(torch.FloatTensor)
A transformers.models.rag.modeling_rag.RetrievAugLMMarginOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RagConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head. The score is possibly marginalized over all documents for
each vocabulary token.
doc_scores (torch.FloatTensor of shape (batch_size, config.n_docs)) — Score between each retrieved document embeddings (see retrieved_doc_embeds) and
question_encoder_last_hidden_state.
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains precomputed hidden-states (key and values in the attention blocks) of the decoder that can be used
(see past_key_values input) to speed up sequential decoding.
retrieved_doc_embeds (torch.FloatTensor of shape (batch_size, config.n_docs, hidden_size), optional, returned when output_retrieved=True) — Embedded documents retrieved by the retriever. Is used with question_encoder_last_hidden_state to compute
the doc_scores.
retrieved_doc_ids (torch.LongTensor of shape (batch_size, config.n_docs), optional, returned when output_retrieved=True) — The indexes of the embedded documents retrieved by the retriever.
context_input_ids (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Input ids post-processed from the retrieved documents and the question encoder input_ids by the retriever.
context_attention_mask (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Attention mask post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
question_encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden states at the output of the last layer of the question encoder (pooled output) of the
model.
question_enc_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings and one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden states of the question encoder at the output of each layer plus the initial embedding outputs.
question_enc_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the question encoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_enc_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the generator encoder of the model.
generator_enc_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings and one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden states of the generator encoder at the output of each layer plus the initial embedding outputs.
generator_enc_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the generator encoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_dec_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings and one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden states of the generator decoder at the output of each layer plus the initial embedding outputs.
generator_dec_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the generator decoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross-attention weights of the generator decoder, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The RagSequenceForGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RagRetriever, RagSequenceForGeneration
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
... "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
... )
# initialize with RagRetriever to do everything in one forward call
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)
inputs = tokenizer("How many people live in Paris?", return_tensors="pt")
targets = tokenizer(text_target="In Paris, there are 10 million people.", return_tensors="pt")
input_ids = inputs["input_ids"]
labels = targets["input_ids"]
outputs = model(input_ids=input_ids, labels=labels)
# or use retriever separately
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", use_dummy_dataset=True)
# 1. Encode
question_hidden_states = model.question_encoder(input_ids)[0]
# 2. Retrieve
docs_dict = retriever(input_ids.numpy(), question_hidden_states.detach().numpy(), return_tensors="pt")
doc_scores = torch.bmm(
... question_hidden_states.unsqueeze(1), docs_dict["retrieved_doc_embeds"].float().transpose(1, 2)
... ).squeeze(1)
# 3. Forward to generator
outputs = model(
... context_input_ids=docs_dict["context_input_ids"],
... context_attention_mask=docs_dict["context_attention_mask"],
... doc_scores=doc_scores,
... decoder_input_ids=labels,
... )
generate
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
context_input_ids: typing.Optional[torch.LongTensor] = None
context_attention_mask: typing.Optional[torch.LongTensor] = None
doc_scores: typing.Optional[torch.FloatTensor] = None
do_deduplication: typing.Optional[bool] = None
num_return_sequences: typing.Optional[int] = None
num_beams: typing.Optional[int] = None
n_docs: typing.Optional[int] = None
**model_kwargs
)
→
torch.LongTensor of shape (batch_size * num_return_sequences, sequence_length)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
The sequence used as a prompt for the generation. If input_ids is not passed, then
context_input_ids has to be provided.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
context_input_ids (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Input IDs post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
context_attention_mask (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Attention mask post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
If the model is not initialized with a retriever or input_ids is not given, context_input_ids and
context_attention_mask have to be provided to the forward pass. They are returned by
__call__().
doc_scores (torch.FloatTensor of shape (batch_size, config.n_docs)) —
Score between each retrieved document embeddings (see retrieved_doc_embeds) and
question_encoder_last_hidden_state.
If the model is not initialized with a retriever or input_ids is not given, doc_scores has to be
provided to the forward pass. doc_scores are returned by __call__().
do_deduplication (bool, optional) —
Whether or not to deduplicate the generations from different context documents for a given input. Has
to be set to False if used while training with distributed backend.
num_return_sequences (int, optional, defaults to 1) —
The number of independently computed returned sequences for each element in the batch. Note that this
is not the value we pass to the generator’s generate() function,
where we set num_return_sequences to num_beams.
num_beams (int, optional, defaults to 1) —
Number of beams for beam search. 1 means no beam search.
n_docs (int, optional, defaults to config.n_docs) —
Number of documents to retrieve and/or number of documents for which to generate an answer.
kwargs (Dict[str, Any], optional) —
Additional kwargs will be passed to generate().
Returns
torch.LongTensor of shape (batch_size * num_return_sequences, sequence_length)
The generated
sequences. The second dimension (sequence length) is either equal to max_length or shorter if all batches
finished early due to the eos_token_id.
Implements RAG sequence “thorough” decoding. Read the generate() documentation
for more information on how to set other generate input parameters.
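A minimal decoding sketch, assuming the tokenizer and retriever created in the forward example above; the question and the generation settings are illustrative:
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)
inputs = tokenizer("How many people live in Paris?", return_tensors="pt")
# RAG-sequence "thorough" decoding: candidates are re-scored against each retrieved document
generated_ids = model.generate(input_ids=inputs["input_ids"], num_beams=4, num_return_sequences=1)
answers = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)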
RagTokenForGeneration
class transformers.RagTokenForGeneration
(
config: typing.Optional[transformers.configuration_utils.PretrainedConfig] = None
question_encoder: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None
generator: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None
retriever: typing.Optional[transformers.models.rag.retrieval_rag.RagRetriever] = None
**kwargs
)
Parameters
config (RagConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
question_encoder (PreTrainedModel) —
An encoder model compatible with the faiss index encapsulated by the retriever.
generator (PreTrainedModel) —
A seq2seq model used as the generator in the RAG architecture.
retriever (RagRetriever) —
A retriever class encapsulating a faiss index queried to obtain context documents for current inputs.
The RagTokenForGeneration forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
A RAG-token model implementation. It performs RAG-token specific marginalization in the forward pass.
RAG is a seq2seq model which encapsulates two core components: a question encoder and a generator. During a forward
pass, we encode the input with the question encoder and pass it to the retriever to extract relevant context
documents. The documents are then prepended to the input. Such contextualized inputs are passed to the generator.
The question encoder can be any autoencoding model, preferably DPRQuestionEncoder, and the generator can be
any seq2seq model, preferably BartForConditionalGeneration.
The model can be initialized with a RagRetriever for end-to-end generation or used in combination with the
outputs of a retriever in multiple steps (see examples for more details). The model is compatible with any
autoencoding model as the question_encoder and any seq2seq model with a language model head as the generator.
It has been tested with DPRQuestionEncoder as the question_encoder and BartForConditionalGeneration or
T5ForConditionalGeneration as the generator.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
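A RAG-token model can also be assembled from separately pretrained components. The following is a minimal sketch, assuming the DPR question encoder and BART generator checkpoints named below (illustrative choices, not prescribed by this page):
from transformers import RagRetriever, RagTokenForGeneration
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
# compose a question encoder and a generator into a single RAG-token model
model = RagTokenForGeneration.from_pretrained_question_encoder_generator(
    "facebook/dpr-question_encoder-single-nq-base",
    "facebook/bart-large",
    retriever=retriever,
)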
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
context_input_ids: typing.Optional[torch.LongTensor] = None
context_attention_mask: typing.Optional[torch.LongTensor] = None
doc_scores: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
output_retrieved: typing.Optional[bool] = None
do_marginalize: typing.Optional[bool] = None
reduce_loss: typing.Optional[bool] = None
labels: typing.Optional[torch.LongTensor] = None
n_docs: typing.Optional[int] = None
**kwargs
)
→
transformers.models.rag.modeling_rag.RetrievAugLMMarginOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. RagConfig, used to initialize the model, specifies
which generator to use; it also specifies a compatible generator tokenizer. Use that tokenizer class to
obtain the indices.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (generator_enc_last_hidden_state, optional: generator_enc_hidden_states,
optional: generator_enc_attentions). generator_enc_last_hidden_state of shape (batch_size, n_docs * sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the
generator’s encoder.
Used by the (RagModel) model during decoding.
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Provide for generation tasks. None by default, construct as per instructions for the generator model
you’re using with your RAG instance.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
past_key_values (tuple(tuple(torch.FloatTensor))) —
Tuple consists of two elements: encoder_outputs of the RAG model (see encoder_outputs) and
past_key_values of the underlying generator. Can be used to speed up decoding. past_key_values are used
in the (RagTokenForGeneration) model during decoding.
doc_scores (torch.FloatTensor of shape (batch_size, config.n_docs)) —
Score between each retrieved document embeddings (see retrieved_doc_embeds) and
question_encoder_last_hidden_state. If the model is not initialized with a retriever, doc_scores
has to be provided to the forward pass. doc_scores can be computed via
question_encoder_last_hidden_state and retrieved_doc_embeds, see examples for more information.
context_input_ids (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Input IDs post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
If the model is not initialized with a retriever, context_input_ids has to be provided to the
forward pass. context_input_ids are returned by __call__().
context_attention_mask (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Attention mask post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
If the model is not initialized with a retriever, context_attention_mask has to be provided to the
forward pass. context_attention_mask is returned by __call__().
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
output_retrieved (bool, optional) —
Whether or not to return the retrieved_doc_embeds, retrieved_doc_ids, context_input_ids and
context_attention_mask. See returned tensors for more detail.
n_docs (int, optional, defaults to config.n_docs) —
Number of documents to retrieve and/or number of documents for which to generate an answer.
do_marginalize (bool, optional) —
If True, the logits are marginalized over all documents by making use of
torch.nn.functional.log_softmax (a sketch of this marginalization follows the example below).
reduce_loss (bool, optional) —
Only relevant if labels is passed. If True, the NLL loss is reduced using the torch.Tensor.sum
operation.
kwargs (Dict[str, any], optional, defaults to {}) —
Legacy dictionary, which is required so that the model can use the generate() function.
Returns
transformers.models.rag.modeling_rag.RetrievAugLMMarginOutput or tuple(torch.FloatTensor)
A transformers.models.rag.modeling_rag.RetrievAugLMMarginOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RagConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head. The score is possibly marginalized over all documents for
each vocabulary token.
doc_scores (torch.FloatTensor of shape (batch_size, config.n_docs)) — Score between each retrieved document embeddings (see retrieved_doc_embeds) and
question_encoder_last_hidden_state.
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains precomputed hidden-states (key and values in the attention blocks) of the decoder that can be used
(see past_key_values input) to speed up sequential decoding.
retrieved_doc_embeds (torch.FloatTensor of shape (batch_size, config.n_docs, hidden_size), optional, returned when output_retrieved=True) — Embedded documents retrieved by the retriever. Is used with question_encoder_last_hidden_state to compute
the doc_scores.
retrieved_doc_ids (torch.LongTensor of shape (batch_size, config.n_docs), optional, returned when output_retrieved=True) — The indexes of the embedded documents retrieved by the retriever.
context_input_ids (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Input ids post-processed from the retrieved documents and the question encoder input_ids by the retriever.
context_attention_mask (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Attention mask post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
question_encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden states at the output of the last layer (pooled output) of the question encoder of the
model.
question_enc_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings and one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden states of the question encoder at the output of each layer plus the initial embedding outputs.
question_enc_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the question encoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_enc_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the generator encoder of the model.
generator_enc_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings and one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden states of the generator encoder at the output of each layer plus the initial embedding outputs.
generator_enc_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the generator encoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_dec_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings and one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden states of the generator decoder at the output of each layer plus the initial embedding outputs.
generator_dec_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the generator decoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross-attentions weights of the generator decoder, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The RagTokenForGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RagRetriever, RagTokenForGeneration
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained(
... "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
... )
# initialize with RagRetriever to do everything in one forward call
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
inputs = tokenizer("How many people live in Paris?", return_tensors="pt")
targets = tokenizer(text_target="In Paris, there are 10 million people.", return_tensors="pt")
input_ids = inputs["input_ids"]
labels = targets["input_ids"]
outputs = model(input_ids=input_ids, labels=labels)
# or use retriever separately
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", use_dummy_dataset=True)
# 1. Encode
question_hidden_states = model.question_encoder(input_ids)[0]
# 2. Retrieve
docs_dict = retriever(input_ids.numpy(), question_hidden_states.detach().numpy(), return_tensors="pt")
doc_scores = torch.bmm(
... question_hidden_states.unsqueeze(1), docs_dict["retrieved_doc_embeds"].float().transpose(1, 2)
... ).squeeze(1)
# 3. Forward to generator
outputs = model(
... context_input_ids=docs_dict["context_input_ids"],
... context_attention_mask=docs_dict["context_attention_mask"],
... doc_scores=doc_scores,
... decoder_input_ids=labels,
... )
# or directly generate
generated = model.generate(
... context_input_ids=docs_dict["context_input_ids"],
... context_attention_mask=docs_dict["context_attention_mask"],
... doc_scores=doc_scores,
... )
generated_string = tokenizer.batch_decode(generated, skip_special_tokens=True)
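The do_marginalize argument documented above corresponds to RAG-token marginalization over the retrieved documents. A minimal sketch of that computation, written as a standalone helper rather than the library's internal code, assuming the shapes used throughout this section:
import torch

def marginalize(seq_logits, doc_scores, n_docs):
    # seq_logits: (batch_size * n_docs, seq_len, vocab_size) generator logits
    # doc_scores: (batch_size, n_docs) retrieval scores
    seq_logprobs = torch.nn.functional.log_softmax(seq_logits, dim=-1).view(
        seq_logits.shape[0] // n_docs, n_docs, -1, seq_logits.size(-1)
    )
    doc_logprobs = torch.log_softmax(doc_scores, dim=1).unsqueeze(-1).unsqueeze(-1)
    # log-sum-exp over the document dimension yields per-token marginal log-probabilities
    return torch.logsumexp(seq_logprobs + doc_logprobs, dim=1)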
generate
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
context_input_ids: typing.Optional[torch.LongTensor] = None
context_attention_mask: typing.Optional[torch.LongTensor] = None
doc_scores: typing.Optional[torch.FloatTensor] = None
n_docs: typing.Optional[int] = None
generation_config: typing.Optional[transformers.generation.configuration_utils.GenerationConfig] = None
prefix_allowed_tokens_fn: typing.Callable[[int, torch.Tensor], typing.List[int]] = None
logits_processor: typing.Optional[transformers.generation.logits_process.LogitsProcessorList] = []
stopping_criteria: typing.Optional[transformers.generation.stopping_criteria.StoppingCriteriaList] = []
**kwargs
)
→
torch.LongTensor of shape (batch_size * num_return_sequences, sequence_length)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
The sequence used as a prompt for the generation. If input_ids is not passed, then
context_input_ids has to be provided.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
context_input_ids (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Input IDs post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
If the model is not initialized with a retriever, context_input_ids has to be provided to the
forward pass. context_input_ids are returned by __call__().
context_attention_mask (torch.LongTensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Attention mask post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
If the model is not initialized with a retriever, context_attention_mask has to be provided to the
forward pass. context_attention_mask is returned by __call__().
doc_scores (torch.FloatTensor of shape (batch_size, config.n_docs)) —
Score between each retrieved document embeddings (see retrieved_doc_embeds) and
question_encoder_last_hidden_state.
If the model is not initialized with a retriever, doc_scores has to be provided to the
forward pass. doc_scores are returned by __call__().
n_docs (int, optional, defaults to config.n_docs) —
Number of documents to retrieve and/or number of documents for which to generate an answer.
generation_config (~generation.GenerationConfig, optional) —
The generation configuration to be used as base parametrization for the generation call. **kwargs
passed to generate matching the attributes of generation_config will override them. If
generation_config is not provided, the default will be used, which has the following loading
priority: 1) from the generation_config.json model file, if it exists; 2) from the model
configuration. Please note that unspecified parameters will inherit GenerationConfig’s
default values, whose documentation should be checked to parameterize generation.
prefix_allowed_tokens_fn (Callable[[int, torch.Tensor], List[int]], optional) —
If provided, this function constrains the beam search to allowed tokens only at each step. If not
provided, no constraint is applied. This function takes 2 arguments: the batch ID batch_id and
input_ids. It has to return a list with the allowed tokens for the next generation step, conditioned on
the batch ID batch_id and the previously generated tokens input_ids. This argument is useful for
constrained generation conditioned on the prefix, as described in Autoregressive Entity
Retrieval (see the sketch below).
logits_processor (LogitsProcessorList, optional) —
Custom logits processors that complement the default logits processors built from arguments and a
model’s config. If a logits processor is passed that is already created with the arguments or a model’s
config, an error is thrown.
stopping_criteria (StoppingCriteriaList, optional) —
Custom stopping criteria that complement the default stopping criteria built from arguments and a
model’s config. If a stopping criterion is passed that is already created with the arguments or a
model’s config, an error is thrown.
kwargs (Dict[str, Any], optional) —
Ad hoc parametrization of generate_config and/or additional model-specific kwargs that will be
forwarded to the forward function of the model.
Returns
torch.LongTensor of shape (batch_size * num_return_sequences, sequence_length)
The generated
sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches
finished early due to the eos_token_id.
Implements RAG token decoding.
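A rough sketch of the prefix_allowed_tokens_fn argument mentioned above, reusing the tokenizer, model, docs_dict and doc_scores names from the example earlier in this section; the allowed-token rule is a placeholder assumption, since real constraints are application specific:
# placeholder: in practice, compute the allowed generator token ids for your constraint;
# returning every id is equivalent to applying no constraint at all
allowed_ids = list(range(model.config.generator.vocab_size))

def prefix_allowed_tokens_fn(batch_id, input_ids):
    # called at each decoding step with the ids generated so far for this batch element;
    # must return the token ids the beam search may emit next
    return allowed_ids

generated = model.generate(
    context_input_ids=docs_dict["context_input_ids"],
    context_attention_mask=docs_dict["context_attention_mask"],
    doc_scores=doc_scores,
    prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
)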
TFRagModel
class transformers.TFRagModel
(
*args
**kwargs
)
Parameters
config (RagConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
question_encoder (TFPreTrainedModel) —
An encoder model compatible with the faiss index encapsulated by the retriever.
generator (TFPreTrainedModel) —
A seq2seq model used as the generator in the RAG architecture.
retriever (RagRetriever) —
A retriever class encapsulating a faiss index queried to obtain context documents for current inputs.
The TFRagModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
RAG is a sequence-to-sequence model which encapsulates two core components: a question encoder and a generator.
During a forward pass, we encode the input with the question encoder and pass it to the retriever to extract
relevant context documents. The documents are then prepended to the input. Such contextualized inputs are passed to
the generator.
The question encoder can be any autoencoding model, preferably TFDPRQuestionEncoder, and the generator can be
any seq2seq model, preferably TFBartForConditionalGeneration.
The model can be initialized with a RagRetriever for end-to-end generation or used in combination with the
outputs of a retriever in multiple steps (see examples for more details). The model is compatible with any
autoencoding model as the question_encoder and any seq2seq model with a language model head as the generator.
It has been tested with TFDPRQuestionEncoder as the question_encoder and TFBartForConditionalGeneration
as the generator.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a TensorFlow tf.keras.Model
subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to
general usage and behavior.
The model is in a developing state, as it is currently fully supported in eager mode only and may not be exportable
in SavedModel format.
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
encoder_outputs: np.ndarray | tf.Tensor | None = None
decoder_input_ids: np.ndarray | tf.Tensor | None = None
decoder_attention_mask: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
doc_scores: np.ndarray | tf.Tensor | None = None
context_input_ids: np.ndarray | tf.Tensor | None = None
context_attention_mask: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
output_retrieved: Optional[bool] = None
n_docs: Optional[int] = None
return_dict: Optional[bool] = None
training: bool = False
**kwargs
)
→
transformers.models.rag.modeling_tf_rag.TFRetrievAugLMOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. RagConfig, used to initialize the model, specifies
which generator to use; it also specifies a compatible generator tokenizer. Use that tokenizer class to
obtain the indices.
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_outputs (tuple(tuple(tf.Tensor)), optional) —
Tuple consists of (generator_enc_last_hidden_state, optional: generator_enc_hidden_states,
optional: generator_enc_attentions). generator_enc_last_hidden_state of shape (batch_size, n_docs * sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the
generator’s encoder.
Used by the (TFRagModel) model during decoding.
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Provide for generation tasks. None by default, construct as per instructions for the generator model
you’re using with your RAG instance.
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
past_key_values (tuple(tuple(tf.Tensor))) —
Tuple consists of two elements: encoder_outputs of the RAG model (see encoder_outputs) and
past_key_values of the underlying generator. Can be used to speed up decoding. past_key_values are used
in the (RagTokenForGeneration) model during decoding.
doc_scores (tf.Tensor of shape (batch_size, config.n_docs)) —
Score between each retrieved document embeddings (see retrieved_doc_embeds) and
question_encoder_last_hidden_state. If the model is not initialized with a retriever, doc_scores
has to be provided to the forward pass. doc_scores can be computed via
question_encoder_last_hidden_state and retrieved_doc_embeds, see examples for more information.
context_input_ids (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Input IDs post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
If the model is not initialized with a retriever, context_input_ids has to be provided to the
forward pass. context_input_ids are returned by __call__().
context_attention_mask (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Attention mask post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
If the model is not initialized with a retriever, context_attention_mask has to be provided to the
forward pass. context_attention_mask is returned by __call__().
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
output_retrieved (bool, optional) —
Whether or not to return the retrieved_doc_embeds, retrieved_doc_ids, context_input_ids and
context_attention_mask. See returned tensors for more detail.
return_dict (bool, optional) —
Whether or not to return a TFRetrievAugLMOutput instead of a plain tuple.
n_docs (int, optional, defaults to config.n_docs) —
Number of documents to retrieve and/or number of documents for which to generate an answer.
Returns
transformers.models.rag.modeling_tf_rag.TFRetrievAugLMOutput or tuple(tf.Tensor)
A transformers.models.rag.modeling_tf_rag.TFRetrievAugLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RagConfig) and inputs.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head. The score is possibly marginalized over all documents for
each vocabulary token.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains precomputed hidden-states (key and values in the attention blocks) of the decoder that can be used
(see past_key_values input) to speed up sequential decoding.
doc_scores (tf.Tensor of shape (batch_size, config.n_docs)) — Score between each retrieved document embeddings (see retrieved_doc_embeds) and
question_encoder_last_hidden_state.
retrieved_doc_embeds (tf.Tensor of shape (batch_size, config.n_docs, hidden_size), optional, returned when output_retrieved=True) — Embedded documents retrieved by the retriever. Is used with question_encoder_last_hidden_state to compute
the doc_scores.
retrieved_doc_ids (tf.Tensor of shape (batch_size, config.n_docs), optional, returned when output_retrieved=True) — The indexes of the embedded documents retrieved by the retriever.
context_input_ids (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Input ids post-processed from the retrieved documents and the question encoder input_ids by the retriever.
context_attention_mask (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Attention mask post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
question_encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden states at the output of the last layer (pooled output) of the question encoder of the
model.
question_enc_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings and one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden states of the question encoder at the output of each layer plus the initial embedding outputs.
question_enc_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the question encoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_enc_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the generator encoder of the model.
generator_enc_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings and one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden states of the generator encoder at the output of each layer plus the initial embedding outputs.
generator_enc_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the generator encoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_dec_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings and one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden states of the generator decoder at the output of each layer plus the initial embedding outputs.
generator_dec_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the generator decoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
The TFRagModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RagRetriever, TFRagModel
tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-base")
retriever = RagRetriever.from_pretrained(
... "facebook/rag-token-base", index_name="exact", use_dummy_dataset=True
... )
# initialize with RagRetriever to do everything in one forward call
model = TFRagModel.from_pretrained("facebook/rag-token-base", retriever=retriever, from_pt=True)
input_dict = tokenizer.prepare_seq2seq_batch(
... "How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="tf"
... )
input_ids = input_dict["input_ids"]
outputs = model(input_ids)
TFRagSequenceForGeneration
class transformers.TFRagSequenceForGeneration
(
*args
**kwargs
)
Parameters
config (RagConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
question_encoder (TFPreTrainedModel) —
An encoder model compatible with the faiss index encapsulated by the retriever.
generator (TFPreTrainedModel) —
A seq2seq model used as the generator in the RAG architecture.
retriever (RagRetriever) —
A retriever class encapsulating a faiss index queried to obtain context documents for current inputs.
The TFRagSequenceForGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
A TF RAG-sequence model implementation. It performs RAG-sequence specific marginalization in the forward pass.
RAG is a sequence-to-sequence model which encapsulates two core components: a question encoder and a generator.
During a forward pass, we encode the input with the question encoder and pass it to the retriever to extract
relevant context documents. The documents are then prepended to the input. Such contextualized inputs are passed to
the generator.
The question encoder can be any autoencoding model, preferably TFDPRQuestionEncoder, and the generator can be
any seq2seq model, preferably TFBartForConditionalGeneration.
The model can be initialized with a RagRetriever for end-to-end generation or used in combination with the
outputs of a retriever in multiple steps (see examples for more details). The model is compatible with any
autoencoding model as the question_encoder and any seq2seq model with a language model head as the generator.
It has been tested with TFDPRQuestionEncoder as the question_encoder and TFBartForConditionalGeneration
as the generator.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a TensorFlow tf.keras.Model
subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to
general usage and behavior.
The model is in a developing state, as it is currently fully supported in eager mode only and may not be exportable
in SavedModel format.
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
decoder_input_ids: np.ndarray | tf.Tensor | None = None
decoder_attention_mask: np.ndarray | tf.Tensor | None = None
encoder_outputs: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
doc_scores: np.ndarray | tf.Tensor | None = None
context_input_ids: np.ndarray | tf.Tensor | None = None
context_attention_mask: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
output_retrieved: Optional[bool] = None
n_docs: Optional[int] = None
exclude_bos_score: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
reduce_loss: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
**kwargs
)
→
transformers.models.rag.modeling_tf_rag.TFRetrievAugLMMarginOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. RagConfig, used to initialize the model, specifies
which generator to use; it also specifies a compatible generator tokenizer. Use that tokenizer class to
obtain the indices.
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_outputs (tuple(tuple(tf.Tensor)), optional) —
Tuple consists of (generator_enc_last_hidden_state, optional: generator_enc_hidden_states,
optional: generator_enc_attentions). generator_enc_last_hidden_state of shape (batch_size, n_docs * sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the
generator’s encoder.
Used by the (TFRagModel) model during decoding.
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Provide for generation tasks. None by default, construct as per instructions for the generator model
you’re using with your RAG instance.
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
past_key_values (tuple(tuple(tf.Tensor))) —
Tuple consists of two elements: encoder_outputs of the RAG model (see encoder_outputs) and
past_key_values of the underlying generator. Can be used to speed up decoding. past_key_values are used
in the (RagTokenForGeneration) model during decoding.
doc_scores (tf.Tensor of shape (batch_size, config.n_docs)) —
Score between each retrieved document embeddings (see retrieved_doc_embeds) and
question_encoder_last_hidden_state. If the model is not initialized with a retriever, doc_scores
has to be provided to the forward pass. doc_scores can be computed via
question_encoder_last_hidden_state and retrieved_doc_embeds, see examples for more information.
context_input_ids (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Input IDs post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
If the model is not initialized with a retriever, context_input_ids has to be provided to the
forward pass. context_input_ids are returned by __call__().
context_attention_mask (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Attention mask post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
If the model is not initialized with a retriever, context_attention_mask has to be provided to the
forward pass. context_attention_mask is returned by __call__().
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
output_retrieved (bool, optional) —
Whether or not to return the retrieved_doc_embeds, retrieved_doc_ids, context_input_ids and
context_attention_mask. See returned tensors for more detail.
return_dict (bool, optional) —
Whether or not to return a TFRetrievAugLMOutput instead of a plain tuple.
n_docs (int, optional, defaults to config.n_docs) —
Number of documents to retrieve and/or number of documents for which to generate an answer.
exclude_bos_score (bool, optional) —
Only relevant if labels is passed. If True, the score of the BOS token is disregarded when computing
the loss.
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the cross entropy classification loss according to the Rag-Sequence model formulation. See
https://arxiv.org/pdf/2005.11401.pdf Section 2.1 for details about the Rag-Sequence formulation. Indices should
be in [0, ..., config.vocab_size - 1].
reduce_loss (bool, optional) —
Only relevant if labels is passed. If True, the NLL loss is reduced using the tf.Tensor.sum
operation.
kwargs (Dict[str, any], optional, defaults to {}) —
Legacy dictionary, which is required so that the model can use the generate() function.
Returns
transformers.models.rag.modeling_tf_rag.TFRetrievAugLMMarginOutput or tuple(tf.Tensor)
A transformers.models.rag.modeling_tf_rag.TFRetrievAugLMMarginOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RagConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head. The score is possibly marginalized over all documents for
each vocabulary token.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains precomputed hidden-states (key and values in the attention blocks) of the decoder that can be used
(see past_key_values input) to speed up sequential decoding.
doc_scores (tf.Tensor of shape (batch_size, config.n_docs)) — Score between each retrieved document embeddings (see retrieved_doc_embeds) and
question_encoder_last_hidden_state.
retrieved_doc_embeds (tf.Tensor of shape (batch_size, config.n_docs, hidden_size), optional, returned when output_retrieved=True) — Embedded documents retrieved by the retriever. Is used with question_encoder_last_hidden_state to compute
the doc_scores.
retrieved_doc_ids (tf.Tensor (int32) of shape (batch_size, config.n_docs), optional, returned when output_retrieved=True) — The indexes of the embedded documents retrieved by the retriever.
context_input_ids (tf.Tensor(int32) of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Input ids post-processed from the retrieved documents and the question encoder input_ids by the retriever.
context_attention_mask (tf.Tensor (int32) of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Attention mask post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
question_encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden states at the output of the last layer (pooled output) of the question encoder of the
model.
question_enc_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings and one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden states of the question encoder at the output of each layer plus the initial embedding outputs.
question_enc_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the question encoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_enc_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the generator encoder of the model.
generator_enc_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings and one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden states of the generator encoder at the output of each layer plus the initial embedding outputs.
generator_enc_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the generator encoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_dec_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings and one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden states of the generator decoder at the output of each layer plus the initial embedding outputs.
generator_dec_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the generator decoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
The TFRagSequenceForGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RagRetriever, TFRagSequenceForGeneration
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
... "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
... )
# initialize with RagRetriever to do everything in one forward call
model = TFRagSequenceForGeneration.from_pretrained(
... "facebook/rag-sequence-nq", retriever=retriever, from_pt=True
... )
input_dict = tokenizer.prepare_seq2seq_batch(
... "How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="tf"
... )
outputs = model(input_dict, output_retrieved=True)
# or use retriever separately
# 1. Encode
input_ids = input_dict["input_ids"]
question_hidden_states = model.question_encoder(input_ids)[0]
# 2. Retrieve
docs_dict = retriever(input_ids.numpy(), question_hidden_states.numpy(), return_tensors="tf")
doc_scores = tf.squeeze(
... tf.matmul(
... tf.expand_dims(question_hidden_states, axis=1), docs_dict["retrieved_doc_embeds"], transpose_b=True
... ),
... axis=1,
... )
# 3. Forward to generator
outputs = model(
... inputs=None,
... context_input_ids=docs_dict["context_input_ids"],
... context_attention_mask=docs_dict["context_attention_mask"],
... doc_scores=doc_scores,
... decoder_input_ids=input_dict["labels"],
... )
# or directly generate
generated = model.generate(
... context_input_ids=docs_dict["context_input_ids"],
... context_attention_mask=docs_dict["context_attention_mask"],
... doc_scores=doc_scores,
... )
generated_string = tokenizer.batch_decode(generated, skip_special_tokens=True)
generate
(
input_ids: TFModelInputType | None = None
attention_mask: tf.Tensor | None = None
context_input_ids = None
context_attention_mask = None
doc_scores = None
do_deduplication = None
num_return_sequences = None
num_beams = None
n_docs = None
**model_kwargs
)
→
tf.Tensor of shape (batch_size * num_return_sequences, sequence_length)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
The sequence used as a prompt for the generation. If input_ids is not passed, then
context_input_ids has to be provided.
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: - 1 for
tokens that are not masked, - 0 for tokens that are masked. What are attention
masks?
context_input_ids (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Input IDs post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
context_attention_mask (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Attention mask post-processed from the retrieved documents and the question encoder input_ids by the
retriever. If the model is not initialized with a retriever or input_ids is not given,
context_input_ids and context_attention_mask have to be provided to the forward pass. They are
returned by __call__().
doc_scores (tf.Tensor of shape (batch_size, config.n_docs)) —
Score between each retrieved document embeddings (see retrieved_doc_embeds) and
question_encoder_last_hidden_state. If the model is not initialized with a retriever or
input_ids is not given, doc_scores has to be provided to the forward pass. doc_scores are
returned by __call__().
do_deduplication (bool, optional) —
Whether or not to deduplicate the generations from different context documents for a given input. Has
to be set to False if used while training with a distributed backend.
num_return_sequences (int, optional, defaults to 1) —
The number of independently computed returned sequences for each element in the batch. Note that this
is not the value we pass to the generator’s [generate()](/docs/transformers/v4.31.0/en/main_classes/text_generation#transformers.GenerationMixin.generate) function,
where we set num_return_sequences to num_beams.
num_beams (int, optional, defaults to 1) —
Number of beams for beam search. 1 means no beam search.
n_docs (int, optional, defaults to config.n_docs) —
Number of documents to retrieve and/or number of documents for which to generate an answer.
kwargs (Dict[str, Any], optional) —
Additional kwargs will be passed to generate().
Returns
tf.Tensor of shape (batch_size * num_return_sequences, sequence_length)
The generated sequences. The
second dimension (sequence length) is either equal to max_length or shorter if all batches finished early
due to the eos_token_id.
Implements RAG sequence “thorough” decoding. Read the generate() documentation
for more information on how to set other generate input parameters.
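Assuming the retriever-initialized model and tokenizer from the TFRagSequenceForGeneration example above, a minimal usage sketch follows; the question and beam settings are illustrative assumptions:
input_ids = tokenizer("How many people live in Paris?", return_tensors="tf")["input_ids"]
# num_return_sequences should not exceed num_beams for "thorough" decoding
generated = model.generate(
    input_ids=input_ids,
    num_beams=4,
    num_return_sequences=2,
    do_deduplication=True,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))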
TFRagTokenForGeneration
class transformers.TFRagTokenForGeneration
(
*args
**kwargs
)
Parameters
config (RagConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
question_encoder (TFPreTrainedModel) —
An encoder model compatible with the faiss index encapsulated by the retriever.
generator (TFPreTrainedModel) —
A seq2seq model used as the generator in the RAG architecture.
retriever (RagRetriever) —
A retriever class encapsulating a faiss index queried to obtain context documents for current inputs.
The TFRagTokenForGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
A TF RAG-token model implementation. It performs RAG-token specific marginalization in the forward pass.
RAG is a sequence-to-sequence model which encapsulates two core components: a question encoder and a generator.
During a forward pass, we encode the input with the question encoder and pass it to the retriever to extract
relevant context documents. The documents are then prepended to the input. Such contextualized inputs are passed to
the generator.
The question encoder can be any autoencoding model, preferably TFDPRQuestionEncoder, and the generator can be
any seq2seq model, preferably TFBartForConditionalGeneration.
The model can be initialized with a RagRetriever for end-to-end generation or used in combination with the
outputs of a retriever in multiple steps (see examples for more details). The model is compatible with any
autoencoding model as the question_encoder and any seq2seq model with a language model head as the generator.
It has been tested with TFDPRQuestionEncoder as the question_encoder and TFBartForConditionalGeneration
as the generator.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a TensorFlow tf.keras.Model
subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to
general usage and behavior.
The model is in a developing state, as it is currently fully supported in eager mode only and may not be exportable
in SavedModel format.
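For reference, a minimal end-to-end sketch mirroring the PyTorch examples earlier on this page (the checkpoint names are reused from those examples; treat the snippet as illustrative rather than canonical):
from transformers import AutoTokenizer, RagRetriever, TFRagTokenForGeneration
tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
# load the PyTorch checkpoint into the TF model with from_pt=True
model = TFRagTokenForGeneration.from_pretrained(
    "facebook/rag-token-nq", retriever=retriever, from_pt=True
)
input_ids = tokenizer("How many people live in Paris?", return_tensors="tf")["input_ids"]
generated = model.generate(input_ids)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))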
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
decoder_input_ids: np.ndarray | tf.Tensor | None = None
decoder_attention_mask: np.ndarray | tf.Tensor | None = None
encoder_outputs: np.ndarray | tf.Tensor | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
doc_scores: np.ndarray | tf.Tensor | None = None
context_input_ids: np.ndarray | tf.Tensor | None = None
context_attention_mask: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
output_retrieved: Optional[bool] = None
n_docs: Optional[int] = None
do_marginalize: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
reduce_loss: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
**kwargs
)
→
transformers.models.rag.modeling_tf_rag.TFRetrievAugLMMarginOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. RagConfig, used to initialize the model, specifies
which generator to use; it also specifies a compatible generator tokenizer. Use that tokenizer class to
obtain the indices.
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_outputs (tuple(tuple(tf.Tensor)), optional) —
Tuple consists of (generator_enc_last_hidden_state, optional: generator_enc_hidden_states,
optional: generator_enc_attentions). generator_enc_last_hidden_state of shape (batch_size, n_docs * sequence_length, hidden_size) is a sequence of hidden-states at the output of the last layer of the
generator’s encoder.
Used by the (TFRagModel) model during decoding.
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Provide for generation tasks. None by default, construct as per instructions for the generator model
you’re using with your RAG instance.
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
past_key_values (tuple(tuple(tf.Tensor))) —
Tuple consists of two elements: encoder_outputs of the RAG model (see encoder_outputs) and
past_key_values of the underlying generator. Can be used to speed up decoding. past_key_values are used
in the (RagTokenForGeneration) model during decoding.
doc_scores (tf.Tensor of shape (batch_size, config.n_docs)) —
Score between each retrieved document embeddings (see retrieved_doc_embeds) and
question_encoder_last_hidden_state. If the model is not initialized with a retriever, doc_scores
has to be provided to the forward pass. doc_scores can be computed via
question_encoder_last_hidden_state and retrieved_doc_embeds, see examples for more information.
context_input_ids (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Input IDs post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
If the model is not initialized with a retriever, context_input_ids has to be provided to the
forward pass. context_input_ids are returned by __call__().
context_attention_mask (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Attention mask post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
If the model is not initialized with a retriever, context_attention_mask has to be provided to the
forward pass. context_attention_mask is returned by __call__().
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
output_retrieved (bool, optional) —
Whether or not to return the retrieved_doc_embeds, retrieved_doc_ids, context_input_ids and
context_attention_mask. See returned tensors for more detail.
return_dict (bool, optional) —
Whether or not to return a TFRetrievAugLMOutput instead of a plain tuple.
n_docs (int, optional, defaults to config.n_docs) —
Number of documents to retrieve and/or number of documents for which to generate an answer.
do_marginalize (bool, optional) —
If True, the logits are marginalized over all documents by making use of
tf.nn.log_softmax.
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the cross entropy classification loss according to the Rag-Token model formulation. See
https://arxiv.org/pdf/2005.11401.pdf Section 2.1 for details about the Rag-Token formulation. Indices should be
in [0, ..., config.vocab_size - 1].
reduce_loss (bool, optional) —
Only relevant if labels is passed. If True, the NLL loss is reduced using the tf.Tensor.sum
operation.
kwargs (Dict[str, Any], optional, defaults to {}) —
Legacy dictionary, required so that the model can use the generate() function.
Returns
transformers.models.rag.modeling_tf_rag.TFRetrievAugLMMarginOutput or tuple(tf.Tensor)
A transformers.models.rag.modeling_tf_rag.TFRetrievAugLMMarginOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RagConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head. The score is possibly marginalized over all documents for
each vocabulary token.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains precomputed hidden-states (key and values in the attention blocks) of the decoder that can be used
(see past_key_values input) to speed up sequential decoding.
doc_scores (tf.Tensor of shape (batch_size, config.n_docs)) — Score between each retrieved document embedding (see retrieved_doc_embeds) and
question_encoder_last_hidden_state.
retrieved_doc_embeds (tf.Tensor of shape (batch_size, config.n_docs, hidden_size), optional, returned when output_retrieved=True) — Embedded documents retrieved by the retriever. Is used with question_encoder_last_hidden_state to compute
the doc_scores.
retrieved_doc_ids (tf.Tensor (int32) of shape (batch_size, config.n_docs), optional, returned when output_retrieved=True) — The indexes of the embedded documents retrieved by the retriever.
context_input_ids (tf.Tensor(int32) of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Input ids post-processed from the retrieved documents and the question encoder input_ids by the retriever.
context_attention_mask (tf.Tensor (int32) of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) — Attention mask post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
question_encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden states at the output of the last layer (pooled output) of the question encoder of the
model.
question_enc_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings and one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden states of the question encoder at the output of each layer plus the initial embedding outputs.
question_enc_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the question encoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_enc_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the generator encoder of the model.
generator_enc_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings and one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden states of the generator encoder at the output of each layer plus the initial embedding outputs.
generator_enc_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the generator encoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
generator_dec_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings and one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden states of the generator decoder at the output of each layer plus the initial embedding outputs.
generator_dec_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the generator decoder, after the attention softmax, used to compute the weighted
average in the self-attention heads.
The TFRagTokenForGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
import tensorflow as tf
from transformers import AutoTokenizer, RagRetriever, TFRagTokenForGeneration
tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained(
... "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
... )
# initialize with RagRetriever to do everything in one forward call
model = TFRagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever, from_pt=True)
input_dict = tokenizer.prepare_seq2seq_batch(
... "How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="tf"
... )
outputs = model(input_dict, output_retrieved=True)
# or use retriever separately
# 1. Encode
input_ids = input_dict["input_ids"]
question_hidden_states = model.question_encoder(input_ids)[0]
# 2. Retrieve
docs_dict = retriever(input_ids.numpy(), question_hidden_states.numpy(), return_tensors="tf")
doc_scores = tf.squeeze(
... tf.matmul(
... tf.expand_dims(question_hidden_states, axis=1), docs_dict["retrieved_doc_embeds"], transpose_b=True
... ),
... axis=1,
... )
# 3. Forward to generator
outputs = model(
... inputs=None,
... context_input_ids=docs_dict["context_input_ids"],
... context_attention_mask=docs_dict["context_attention_mask"],
... doc_scores=doc_scores,
... decoder_input_ids=input_dict["labels"],
... )
# or directly generate
generated = model.generate(
... context_input_ids=docs_dict["context_input_ids"],
... context_attention_mask=docs_dict["context_attention_mask"],
... doc_scores=doc_scores,
... )
generated_string = tokenizer.batch_decode(generated, skip_special_tokens=True)
generate
<
source
>
(
input_ids: TFModelInputType | None = None
attention_mask: tf.Tensor | None = None
context_input_ids = None
context_attention_mask = None
doc_scores = None
n_docs = None
generation_config = None
logits_processor = []
**kwargs
)
→
tf.Tensor of shape (batch_size * num_return_sequences, sequence_length)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
The sequence used as a prompt for the generation. If input_ids is not passed, then
context_input_ids has to be provided.
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
context_input_ids (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Input IDs post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
If the model is not initialized with a retriever, context_input_ids has to be provided to the
forward pass. context_input_ids are returned by __call__().
context_attention_mask (tf.Tensor of shape (batch_size * config.n_docs, config.max_combined_length), optional, returned when output_retrieved=True) —
Attention mask post-processed from the retrieved documents and the question encoder input_ids by the
retriever.
If the model is not initialized with a retriever, context_attention_mask has to be provided to the
forward pass. context_attention_mask is returned by __call__().
doc_scores (tf.Tensor of shape (batch_size, config.n_docs)) —
Score between each retrieved document embedding (see retrieved_doc_embeds) and
question_encoder_last_hidden_state.
If the model is not initialized with a retriever, doc_scores has to be provided to the forward
pass. doc_scores can be computed via question_encoder_last_hidden_state and retrieved_doc_embeds.
n_docs (int, optional, defaults to config.n_docs) —
Number of documents to retrieve and/or number of documents for which to generate an answer.
generation_config (~generation.GenerationConfig, optional) —
The generation configuration to be used as base parametrization for the generation call. **kwargs
passed to generate matching the attributes of generation_config will override them. If
generation_config is not provided, the default will be used, which has the following loading
priority: 1) from the generation_config.json model file, if it exists; 2) from the model
configuration. Please note that unspecified parameters will inherit GenerationConfig’s
default values, whose documentation should be checked to parameterize generation.
logits_processor (TFLogitsProcessorList, optional) —
Custom logits processors that complement the default logits processors built from arguments and a
model’s config. If a logits processor is passed that is already created with the arguments or a model’s
config, an error is thrown.
kwargs (Dict[str, Any], optional) —
Ad hoc parametrization of generation_config and/or additional model-specific kwargs that will be
forwarded to the forward function of the model.
Returns
tf.Tensor of shape (batch_size * num_return_sequences, sequence_length)
The generated sequences. The
second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early
due to the eos_token_id.
Implements TFRAG token decoding.
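For completeness, here is a minimal sketch of calling generate() with a retriever attached, mirroring the __call__ example above; the question string is only illustrative.
from transformers import AutoTokenizer, RagRetriever, TFRagTokenForGeneration
tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True)
model = TFRagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever, from_pt=True)
# Encode the question; the attached retriever fetches documents inside generate()
input_ids = tokenizer("How many people live in Paris?", return_tensors="tf").input_ids
generated = model.generate(input_ids)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))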
Splinter
Overview
The Splinter model was proposed in Few-Shot Question Answering by Pretraining Span Selection by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. Splinter
is an encoder-only transformer (similar to BERT) pretrained using the recurring span selection task on a large corpus
comprising Wikipedia and the Toronto Book Corpus.
The abstract from the paper is the following:
In several question answering benchmarks, pretrained models have reached human parity through fine-tuning on an order
of 100,000 annotated questions and answers. We explore the more realistic few-shot setting, where only a few hundred
training examples are available, and observe that standard models perform poorly, highlighting the discrepancy between
current pretraining objectives and question answering. We propose a new pretraining scheme tailored for question
answering: recurring span selection. Given a passage with multiple sets of recurring spans, we mask in each set all
recurring spans but one, and ask the model to select the correct span in the passage for each masked span. Masked spans
are replaced with a special token, viewed as a question representation, that is later used during fine-tuning to select
the answer span. The resulting model obtains surprisingly good results on multiple benchmarks (e.g., 72.7 F1 on SQuAD
with only 128 training examples), while maintaining competitive performance in the high-resource setting.
Tips:
Splinter was trained to predict answer spans conditioned on a special [QUESTION] token. These tokens are contextualized
into question representations, which are used to predict the answers. This layer is called QASS, and is the default
behaviour in the SplinterForQuestionAnswering class. Therefore:
Use SplinterTokenizer (rather than BertTokenizer), as it already
contains this special token. Also, its default behavior is to use this token when two sequences are given (for
example, in the run_qa.py script).
If you plan on using Splinter outside run_qa.py, please keep in mind the question token - it might be important for
the success of your model, especially in a few-shot setting.
Please note there are two different checkpoints for each size of Splinter. Both are basically the same, except that
one also has the pretrained weights of the QASS layer (tau/splinter-base-qass and tau/splinter-large-qass) and one
doesn’t (tau/splinter-base and tau/splinter-large). This is done to support randomly initializing this layer at
fine-tuning, as this is shown to yield better results in some cases in the paper; see the loading sketch below.
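A minimal sketch of loading either variant with SplinterForQuestionAnswering (loading the checkpoint without QASS weights is expected to log a warning about newly initialized weights):
from transformers import SplinterForQuestionAnswering
# Checkpoint that ships the pretrained QASS head weights
model = SplinterForQuestionAnswering.from_pretrained("tau/splinter-base-qass")
# Checkpoint without QASS weights; the QASS layer is randomly initialized for fine-tuning
model = SplinterForQuestionAnswering.from_pretrained("tau/splinter-base")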
This model was contributed by yuvalkirstain and oriram. The original code can be found here.
Documentation resources
Question answering task guide
SplinterConfig
class transformers.SplinterConfig
<
source
>
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
use_cache = True
pad_token_id = 0
question_token_id = 104
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the Splinter model. Defines the number of different tokens that can be represented by
the inputs_ids passed when calling SplinterModel.
hidden_size (int, optional, defaults to 768) —
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling SplinterModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
question_token_id (int, optional, defaults to 104) —
The id of the [QUESTION] token.
This is the configuration class to store the configuration of a SplinterModel. It is used to instantiate a
Splinter model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the Splinter
tau/splinter-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import SplinterModel, SplinterConfig
# Initializing a Splinter tau/splinter-base style configuration
configuration = SplinterConfig()
# Initializing a model from the tau/splinter-base style configuration
model = SplinterModel(configuration)
# Accessing the model configuration
configuration = model.config
SplinterTokenizer
class transformers.SplinterTokenizer
<
source
>
(
vocab_file
do_lower_case = True
do_basic_tokenize = True
never_split = None
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
question_token = '[QUESTION]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (bool, optional, defaults to True) —
Whether or not to do basic tokenization before WordPiece.
never_split (Iterable, optional) —
Collection of tokens which will never be split during tokenization. Only has an effect when
do_basic_tokenize=True
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
question_token (str, optional, defaults to "[QUESTION]") —
The token used for constructing question representations.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original BERT).
Construct a Splinter tokenizer. Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
The question token IDs if pad_on_right, else the context token IDs
token_ids_1 (List[int], optional) —
The context token IDs if pad_on_right, else question token IDs
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a pair of sequences for question answering tasks by concatenating and adding special
tokens. A Splinter sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences for question answering: [CLS] question_tokens [QUESTION] . [SEP] context_tokens [SEP]
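A minimal sketch of this method; the question and context strings are illustrative.
from transformers import SplinterTokenizer
tokenizer = SplinterTokenizer.from_pretrained("tau/splinter-base")
question_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Who founded ACME?"))
context_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("ACME was founded by Jane Doe."))
# Builds: [CLS] question_tokens [QUESTION] . [SEP] context_tokens [SEP]
input_ids = tokenizer.build_inputs_with_special_tokens(question_ids, context_ids)
print(tokenizer.decode(input_ids))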
get_special_tokens_mask
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) — The first tokenized sequence.
token_ids_1 (List[int], optional) — The second tokenized sequence.
Returns
List[int]
The token type ids.
Create the token type IDs corresponding to the sequences passed. What are token type
IDs?
Should be overridden in a subclass if the model has a special way of building those.
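A minimal sketch, reusing the same illustrative question/context pair as above:
from transformers import SplinterTokenizer
tokenizer = SplinterTokenizer.from_pretrained("tau/splinter-base")
question_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Who founded ACME?"))
context_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("ACME was founded by Jane Doe."))
# 0s cover the question segment (with its special tokens), 1s cover the context segment
token_type_ids = tokenizer.create_token_type_ids_from_sequences(question_ids, context_ids)
print(token_type_ids)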
save_vocabulary
<
source
>
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
SplinterTokenizerFast
class transformers.SplinterTokenizerFast
<
source
>
(
vocab_file = None
tokenizer_file = None
do_lower_case = True
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
question_token = '[QUESTION]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
question_token (str, optional, defaults to "[QUESTION]") —
The token used for constructing question representations.
clean_text (bool, optional, defaults to True) —
Whether or not to clean the text before tokenization by removing any control characters and replacing all
whitespaces by the classic one.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original BERT).
wordpieces_prefix (str, optional, defaults to "##") —
The prefix for subwords.
Construct a “fast” Splinter tokenizer (backed by HuggingFace’s tokenizers library). Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
The question token IDs if pad_on_right, else the context token IDs
token_ids_1 (List[int], optional) —
The context token IDs if pad_on_right, else question token IDs
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a pair of sequences for question answering tasks by concatenating and adding special
tokens. A Splinter sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences for question answering: [CLS] question_tokens [QUESTION] . [SEP] context_tokens [SEP]
SplinterModel
class transformers.SplinterModel
<
source
>
(
config
)
Parameters
config (SplinterConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Splinter Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
The model is an encoder (with only self-attention) following the architecture described in Attention is all you
need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,
Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SplinterConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The SplinterModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, SplinterModel
import torch
tokenizer = AutoTokenizer.from_pretrained("tau/splinter-base")
model = SplinterModel.from_pretrained("tau/splinter-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
SplinterForQuestionAnswering
class transformers.SplinterForQuestionAnswering
<
source
>
(
config
)
Parameters
config (SplinterConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Splinter Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
question_positions: typing.Optional[torch.LongTensor] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
question_positions (torch.LongTensor of shape (batch_size, num_questions), optional) —
The positions of all question tokens. If given, start_logits and end_logits will be of shape (batch_size, num_questions, sequence_length). If None, the first question token in each sequence in the batch will be
the only one for which start_logits and end_logits are calculated and they will be of shape (batch_size, sequence_length).
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SplinterConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The SplinterForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, SplinterForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("tau/splinter-base")
model = SplinterForQuestionAnswering.from_pretrained("tau/splinter-base")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
SplinterForPreTraining
class transformers.SplinterForPreTraining
<
source
>
(
config
)
Parameters
config (SplinterConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Splinter Model for the recurring span selection task as done during the pretraining. The difference from the QA task
is that we do not have a single question; instead, multiple question tokens replace the occurrences of the recurring
spans.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
question_positions: typing.Optional[torch.LongTensor] = None
)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_questions, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_questions, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_questions, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_questions, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_questions, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size, num_questions), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size, num_questions), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
question_positions (torch.LongTensor of shape (batch_size, num_questions), optional) —
The positions of all question tokens. If given, start_logits and end_logits will be of shape (batch_size, num_questions, sequence_length). If None, the first question token in each sequence in the batch will be
the only one for which start_logits and end_logits are calculated and they will be of shape (batch_size, sequence_length).
The SplinterForPreTraining forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
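Since this reference does not include an example for the pretraining head, here is a minimal, hedged sketch; the passage, the checkpoint choice, and the way question_positions is derived are illustrative assumptions.
import torch
from transformers import SplinterForPreTraining, SplinterTokenizer
tokenizer = SplinterTokenizer.from_pretrained("tau/splinter-base-qass")
model = SplinterForPreTraining.from_pretrained("tau/splinter-base-qass")
# Illustrative passage where one occurrence of a recurring span was replaced by the [QUESTION] token
text = "The Eiffel Tower is in Paris. The [QUESTION] was completed in 1889."
inputs = tokenizer(text, return_tensors="pt")
# Locate the [QUESTION] token(s); shape (batch_size, num_questions)
question_token_id = tokenizer.convert_tokens_to_ids("[QUESTION]")
question_positions = (inputs.input_ids == question_token_id).nonzero()[:, 1].unsqueeze(0)
with torch.no_grad():
    outputs = model(**inputs, question_positions=question_positions)
# Per-question start/end logits over the passage (assuming the output exposes start_logits/end_logits)
start_logits, end_logits = outputs.start_logits, outputs.end_logits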
LUKE
Overview
The LUKE model was proposed in LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda and Yuji Matsumoto.
It is based on RoBERTa and adds entity embeddings as well as an entity-aware self-attention mechanism, which helps
improve performance on various downstream tasks involving reasoning about entities such as named entity recognition,
extractive and cloze-style question answering, entity typing, and relation classification.
The abstract from the paper is the following:
Entity representations are useful in natural language tasks involving entities. In this paper, we propose new
pretrained contextualized representations of words and entities based on the bidirectional transformer. The proposed
model treats words and entities in a given text as independent tokens, and outputs contextualized representations of
them. Our model is trained using a new pretraining task based on the masked language model of BERT. The task involves
predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. We also
propose an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the
transformer, and considers the types of tokens (words or entities) when computing attention scores. The proposed model
achieves impressive empirical performance on a wide range of entity-related tasks. In particular, it obtains
state-of-the-art results on five well-known datasets: Open Entity (entity typing), TACRED (relation classification),
CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), and SQuAD 1.1 (extractive question
answering).
Tips:
This implementation is the same as RobertaModel with the addition of entity embeddings as well
as an entity-aware self-attention mechanism, which improves performance on tasks involving reasoning about entities.
LUKE treats entities as input tokens; therefore, it takes entity_ids, entity_attention_mask,
entity_token_type_ids and entity_position_ids as extra input. You can obtain those using
LukeTokenizer.
LukeTokenizer takes entities and entity_spans (character-based start and end
positions of the entities in the input text) as extra input. entities typically consist of [MASK] entities or
Wikipedia entities. A brief description of when to input these entities follows:
Inputting [MASK] entities to compute entity representations: The [MASK] entity is used to mask entities to be
predicted during pretraining. When LUKE receives the [MASK] entity, it tries to predict the original entity by
gathering the information about the entity from the input text. Therefore, the [MASK] entity can be used to address
downstream tasks requiring the information of entities in text such as entity typing, relation classification, and
named entity recognition.
Inputting Wikipedia entities to compute knowledge-enhanced token representations: LUKE learns rich information
(or knowledge) about Wikipedia entities during pretraining and stores the information in its entity embedding. By
using Wikipedia entities as input tokens, LUKE outputs token representations enriched by the information stored in
the embeddings of these entities. This is particularly effective for tasks requiring real-world knowledge, such as
question answering.
There are three head models for the former use case:
LukeForEntityClassification, for tasks to classify a single entity in an input text such as
entity typing, e.g. the Open Entity dataset.
This model places a linear head on top of the output entity representation.
LukeForEntityPairClassification, for tasks to classify the relationship between two entities
such as relation classification, e.g. the TACRED dataset. This
model places a linear head on top of the concatenated output representation of the pair of given entities.
LukeForEntitySpanClassification, for tasks to classify the sequence of entity spans, such as
named entity recognition (NER). This model places a linear head on top of the output entity representations. You
can address NER using this model by inputting all possible entity spans in the text to the model.
LukeTokenizer has a task argument, which enables you to easily create an input to these
head models by specifying task="entity_classification", task="entity_pair_classification", or
task="entity_span_classification". Please refer to the example code of each head models.
A demo notebook on how to fine-tune LukeForEntityPairClassification for relation
classification can be found here.
There are also 3 notebooks available, which showcase how you can reproduce the results as reported in the paper with
the HuggingFace implementation of LUKE. They can be found here.
Example:
from transformers import LukeTokenizer, LukeModel, LukeForEntityPairClassification
model = LukeModel.from_pretrained("studio-ousia/luke-base")
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
# Example 1: Computing the contextualized entity representation corresponding to the entity mention "Beyoncé"
text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7)] # character-based entity span corresponding to "Beyoncé"
inputs = tokenizer(text, entity_spans=entity_spans, add_prefix_space=True, return_tensors="pt")
outputs = model(**inputs)
word_last_hidden_state = outputs.last_hidden_state
entity_last_hidden_state = outputs.entity_last_hidden_state
# Example 2: Inputting Wikipedia entities to obtain enriched contextualized representations
entities = [
... "Beyoncé",
... "Los Angeles",
... ] # Wikipedia entity titles corresponding to the entity mentions "Beyoncé" and "Los Angeles"
entity_spans = [(0, 7), (17, 28)] # character-based entity spans corresponding to "Beyoncé" and "Los Angeles"
inputs = tokenizer(text, entities=entities, entity_spans=entity_spans, add_prefix_space=True, return_tensors="pt")
outputs = model(**inputs)
word_last_hidden_state = outputs.last_hidden_state
entity_last_hidden_state = outputs.entity_last_hidden_state
# Example 3: Classifying the relationship between two entities using LukeForEntityPairClassification head model
model = LukeForEntityPairClassification.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
entity_spans = [(0, 7), (17, 28)] # character-based entity spans corresponding to "Beyoncé" and "Los Angeles"
inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = int(logits[0].argmax())
print("Predicted class:", model.config.id2label[predicted_class_idx])
This model was contributed by ikuyamada and nielsr. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
LukeConfig
class transformers.LukeConfig
<
source
>
(
vocab_size = 50267
entity_vocab_size = 500000
hidden_size = 768
entity_emb_size = 256
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
use_entity_aware_attention = True
classifier_dropout = None
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50267) —
Vocabulary size of the LUKE model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling LukeModel.
entity_vocab_size (int, optional, defaults to 500000) —
Entity vocabulary size of the LUKE model. Defines the number of different entities that can be represented
by the entity_ids passed when calling LukeModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
entity_emb_size (int, optional, defaults to 256) —
The number of dimensions of the entity embedding.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling LukeModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
use_entity_aware_attention (bool, optional, defaults to True) —
Whether or not the model should use the entity-aware self-attention mechanism proposed in LUKE: Deep
Contextualized Entity Representations with Entity-aware Self-attention (Yamada et
al.).
classifier_dropout (float, optional) —
The dropout ratio for the classification head.
This is the configuration class to store the configuration of a LukeModel. It is used to instantiate a LUKE
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the LUKE
studio-ousia/luke-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import LukeConfig, LukeModel
# Initializing a LUKE configuration
configuration = LukeConfig()
# Initializing a model from the configuration
model = LukeModel(configuration)
# Accessing the model configuration
configuration = model.config
LukeTokenizer
class transformers.LukeTokenizer
(
vocab_file
merges_file
entity_vocab_file
task = None
max_entity_length = 32
max_mention_length = 30
entity_token_1 = '<ent>'
entity_token_2 = '<ent2>'
entity_unk_token = '[UNK]'
entity_pad_token = '[PAD]'
entity_mask_token = '[MASK]'
entity_mask2_token = '[MASK2]'
errors = 'replace'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
add_prefix_space = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
entity_vocab_file (str) —
Path to the entity vocabulary file.
task (str, optional) —
Task for which you want to prepare sequences. One of "entity_classification",
"entity_pair_classification", or "entity_span_classification". If you specify this argument, the entity
sequence is automatically created based on the given entity span(s).
max_entity_length (int, optional, defaults to 32) —
The maximum length of entity_ids.
max_mention_length (int, optional, defaults to 30) —
The maximum number of tokens inside an entity span.
entity_token_1 (str, optional, defaults to <ent>) —
The special token used to represent an entity span in a word token sequence. This token is only used when
task is set to "entity_classification" or "entity_pair_classification".
entity_token_2 (str, optional, defaults to <ent2>) —
The special token used to represent an entity span in a word token sequence. This token is only used when
task is set to "entity_pair_classification".
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The LUKE tokenizer detects the beginning of words by the preceding space.)
Constructs a LUKE tokenizer, derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like SentencePiece), so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without a preceding space) or not:
from transformers import LukeTokenizer
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]
tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one).
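For instance, passing add_prefix_space=True makes a sentence-initial word encode the same way as when it is preceded by a space (a minimal sketch; the ids below assume the studio-ousia/luke-base vocabulary shown above):
from transformers import LukeTokenizer
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
# with the prefix space added, "Hello" is encoded like " Hello" in the example above
tokenizer("Hello world", add_prefix_space=True)["input_ids"]
[0, 20920, 232, 2]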
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods. It also creates entity sequences, namely
entity_ids, entity_attention_mask, entity_token_type_ids, and entity_position_ids to be used by the LUKE
model.
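For example, when entity_spans are passed, the returned encoding contains these entity fields alongside the regular word-token fields (a minimal sketch, assuming the studio-ousia/luke-base checkpoint):
from transformers import LukeTokenizer
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7), (17, 28)]  # character-based spans for "Beyoncé" and "Los Angeles"
encoding = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
# one entity id per provided span (filled with the [MASK] entity when no entities are given)
encoding["entity_ids"].shape
torch.Size([1, 2])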
__call__
(
text: typing.Union[str, typing.List[str]]
text_pair: typing.Union[str, typing.List[str], NoneType] = None
entity_spans: typing.Union[typing.List[typing.Tuple[int, int]], typing.List[typing.List[typing.Tuple[int, int]]], NoneType] = None
entity_spans_pair: typing.Union[typing.List[typing.Tuple[int, int]], typing.List[typing.List[typing.Tuple[int, int]]], NoneType] = None
entities: typing.Union[typing.List[str], typing.List[typing.List[str]], NoneType] = None
entities_pair: typing.Union[typing.List[str], typing.List[typing.List[str]], NoneType] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
max_entity_length: typing.Optional[int] = None
stride: int = 0
is_split_into_words: typing.Optional[bool] = False
pad_to_multiple_of: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
**kwargs
)
→
BatchEncoding
Parameters
text (str, List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence must be a string. Note that this
tokenizer does not support tokenization based on pretokenized strings.
text_pair (str, List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence must be a string. Note that this
tokenizer does not support tokenization based on pretokenized strings.
entity_spans (List[Tuple[int, int]], List[List[Tuple[int, int]]], optional) —
The sequence or batch of sequences of entity spans to be encoded. Each sequence consists of tuples each
with two integers denoting character-based start and end positions of entities. If you specify
"entity_classification" or "entity_pair_classification" as the task argument in the constructor,
the length of each sequence must be 1 or 2, respectively. If you specify entities, the length of each
sequence must be equal to the length of each sequence of entities.
entity_spans_pair (List[Tuple[int, int]], List[List[Tuple[int, int]]], optional) —
The sequence or batch of sequences of entity spans to be encoded. Each sequence consists of tuples each
with two integers denoting character-based start and end positions of entities. If you specify the
task argument in the constructor, this argument is ignored. If you specify entities_pair, the
length of each sequence must be equal to the length of each sequence of entities_pair.
entities (List[str], List[List[str]], optional) —
The sequence or batch of sequences of entities to be encoded. Each sequence consists of strings
representing entities, i.e., special entities (e.g., [MASK]) or entity titles of Wikipedia (e.g., Los
Angeles). This argument is ignored if you specify the task argument in the constructor. The length of
each sequence must be equal to the length of each sequence of entity_spans. If you specify
entity_spans without specifying this argument, the entity sequence or the batch of entity sequences
is automatically constructed by filling it with the [MASK] entity.
entities_pair (List[str], List[List[str]], optional) —
The sequence or batch of sequences of entities to be encoded. Each sequence consists of strings
representing entities, i.e., special entities (e.g., [MASK]) or entity titles of Wikipedia (e.g., Los
Angeles). This argument is ignored if you specify the task argument in the constructor. The length of
each sequence must be equal to the length of each sequence of entity_spans_pair. If you specify
entity_spans_pair without specifying this argument, the entity sequence or the batch of entity
sequences is automatically constructed by filling it with the [MASK] entity.
max_entity_length (int, optional) —
The maximum length of entity_ids.
add_special_tokens (bool, optional, defaults to True) —
Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) —
Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or
to the maximum acceptable input length for the model if that argument is not provided. This will
truncate token by token, removing a token from the longest sequence in the pair if a pair of
sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional) —
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length
is required by one of the truncation/padding parameters. If the model has no specific maximum input
length (like XLNet), truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) —
If set to a number along with max_length, the overflowing tokens returned when
return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence
returned to provide some overlap between truncated and overflowing sequences. The value of this
argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) —
Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the
tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace)
which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value. Requires padding to be activated.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
>= 7.5 (Volta).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) —
Whether to return token type IDs. If left to the default, will return the token type IDs according to
the specific tokenizer’s default, defined by the return_outputs attribute.
What are token type IDs?
return_attention_mask (bool, optional) —
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
return_overflowing_tokens (bool, optional, defaults to False) —
Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch
of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead
of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) —
Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) —
Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast, if using
Python’s tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) —
Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) —
Whether or not to print more information and warnings.
**kwargs — passed to the self.tokenize() method
Returns
BatchEncoding
A BatchEncoding with the following fields:
input_ids — List of token ids to be fed to a model.
What are input IDs?
token_type_ids — List of token type ids to be fed to a model (when return_token_type_ids=True or
if “token_type_ids” is in self.model_input_names).
What are token type IDs?
attention_mask — List of indices specifying which tokens should be attended to by the model (when
return_attention_mask=True or if “attention_mask” is in self.model_input_names).
What are attention masks?
entity_ids — List of entity ids to be fed to a model.
What are input IDs?
entity_position_ids — List of entity positions in the input sequence to be fed to a model.
entity_token_type_ids — List of entity token type ids to be fed to a model (when
return_token_type_ids=True or if “entity_token_type_ids” is in self.model_input_names).
What are token type IDs?
entity_attention_mask — List of indices specifying which entities should be attended to by the model
(when return_attention_mask=True or if “entity_attention_mask” is in self.model_input_names).
What are attention masks?
entity_start_positions — List of the start positions of entities in the word token sequence (when
task="entity_span_classification").
entity_end_positions — List of the end positions of entities in the word token sequence (when
task="entity_span_classification").
overflowing_tokens — List of overflowing token sequences (when a max_length is specified and
return_overflowing_tokens=True).
num_truncated_tokens — Number of tokens truncated (when a max_length is specified and
return_overflowing_tokens=True).
special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying
regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length — The length of the inputs (when return_length=True)
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
sequences, depending on the task you want to prepare them for.
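As a minimal sketch of the task argument described above (using the studio-ousia/luke-large-finetuned-open-entity checkpoint that appears later on this page), constructing the tokenizer with a task lets this method build the entity sequence automatically from the provided span(s):
from transformers import LukeTokenizer
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-open-entity", task="entity_classification")
text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7)]  # exactly one span is required for "entity_classification"
inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
# the [MASK] entity is inserted for the span, so entities does not need to be passed
inputs["entity_ids"].shape
torch.Size([1, 1])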
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
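A minimal sketch of saving the vocabulary files to a local directory (the directory name is illustrative and must already exist):
import os
from transformers import LukeTokenizer
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
os.makedirs("./luke-tokenizer", exist_ok=True)
# writes the vocabulary, merges, and entity vocabulary files and returns their paths
tokenizer.save_vocabulary("./luke-tokenizer")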
LukeModel
class transformers.LukeModel
(
config: LukeConfig
add_pooling_layer: bool = True
)
Parameters
config (LukeConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare LUKE model transformer outputting raw hidden-states for both word tokens and entities without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
entity_ids: typing.Optional[torch.LongTensor] = None
entity_attention_mask: typing.Optional[torch.FloatTensor] = None
entity_token_type_ids: typing.Optional[torch.LongTensor] = None
entity_position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.luke.modeling_luke.BaseLukeModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
entity_ids (torch.LongTensor of shape (batch_size, entity_length)) —
Indices of entity tokens in the entity vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
entity_attention_mask (torch.FloatTensor of shape (batch_size, entity_length), optional) —
Mask to avoid performing attention on padding entity token indices. Mask values selected in [0, 1]:
1 for entity tokens that are not masked,
0 for entity tokens that are masked.
entity_token_type_ids (torch.LongTensor of shape (batch_size, entity_length), optional) —
Segment token indices to indicate first and second portions of the entity token inputs. Indices are
selected in [0, 1]:
0 corresponds to a portion A entity token,
1 corresponds to a portion B entity token.
entity_position_ids (torch.LongTensor of shape (batch_size, entity_length, max_mention_length), optional) —
Indices of positions of each input entity in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.luke.modeling_luke.BaseLukeModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.models.luke.modeling_luke.BaseLukeModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LukeConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
entity_last_hidden_state (torch.FloatTensor of shape (batch_size, entity_length, hidden_size)) — Sequence of entity hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
entity_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, entity_length, hidden_size). Entity hidden-states of the model at the output of each
layer plus the initial entity embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length + entity_length, sequence_length + entity_length). Attentions weights after the attention softmax, used to
compute the weighted average in the self-attention heads.
The LukeModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, LukeModel
tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-base")
model = LukeModel.from_pretrained("studio-ousia/luke-base")
# Compute the contextualized entity representation corresponding to the entity mention "Beyoncé"
text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7)] # character-based entity span corresponding to "Beyoncé"
encoding = tokenizer(text, entity_spans=entity_spans, add_prefix_space=True, return_tensors="pt")
outputs = model(**encoding)
word_last_hidden_state = outputs.last_hidden_state
entity_last_hidden_state = outputs.entity_last_hidden_state
# Input Wikipedia entities to obtain enriched contextualized representations of word tokens
text = "Beyoncé lives in Los Angeles."
entities = [
... "Beyoncé",
... "Los Angeles",
... ] # Wikipedia entity titles corresponding to the entity mentions "Beyoncé" and "Los Angeles"
entity_spans = [
... (0, 7),
... (17, 28),
... ] # character-based entity spans corresponding to "Beyoncé" and "Los Angeles"
encoding = tokenizer(
... text, entities=entities, entity_spans=entity_spans, add_prefix_space=True, return_tensors="pt"
... )
outputs = model(**encoding)
word_last_hidden_state = outputs.last_hidden_state
entity_last_hidden_state = outputs.entity_last_hidden_state
LukeForMaskedLM
class transformers.LukeForMaskedLM
(
config
)
Parameters
config (LukeConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The LUKE model with a language modeling head and entity prediction head on top for masked language modeling and
masked entity prediction.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
entity_ids: typing.Optional[torch.LongTensor] = None
entity_attention_mask: typing.Optional[torch.LongTensor] = None
entity_token_type_ids: typing.Optional[torch.LongTensor] = None
entity_position_ids: typing.Optional[torch.LongTensor] = None
labels: typing.Optional[torch.LongTensor] = None
entity_labels: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.luke.modeling_luke.LukeMaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
entity_ids (torch.LongTensor of shape (batch_size, entity_length)) —
Indices of entity tokens in the entity vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
entity_attention_mask (torch.FloatTensor of shape (batch_size, entity_length), optional) —
Mask to avoid performing attention on padding entity token indices. Mask values selected in [0, 1]:
1 for entity tokens that are not masked,
0 for entity tokens that are masked.
entity_token_type_ids (torch.LongTensor of shape (batch_size, entity_length), optional) —
Segment token indices to indicate first and second portions of the entity token inputs. Indices are
selected in [0, 1]:
0 corresponds to a portion A entity token,
1 corresponds to a portion B entity token.
entity_position_ids (torch.LongTensor of shape (batch_size, entity_length, max_mention_length), optional) —
Indices of positions of each input entity in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
entity_labels (torch.LongTensor of shape (batch_size, entity_length), optional) —
Labels for computing the masked entity prediction loss. Indices should be in [-100, 0, ..., config.entity_vocab_size] (see the entity_ids docstring). Entities with indices set to -100 are ignored (masked); the
loss is only computed for the entities with labels in [0, ..., config.entity_vocab_size].
Returns
transformers.models.luke.modeling_luke.LukeMaskedLMOutput or tuple(torch.FloatTensor)
A transformers.models.luke.modeling_luke.LukeMaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LukeConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — The sum of masked language modeling (MLM) loss and entity prediction loss.
mlm_loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
mep_loss (torch.FloatTensor of shape (1,), optional, returned when entity_labels is provided) — Masked entity prediction (MEP) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
entity_logits (torch.FloatTensor of shape (batch_size, entity_length, config.entity_vocab_size)) — Prediction scores of the entity prediction head (scores for each entity vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
entity_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, entity_length, hidden_size). Entity hidden-states of the model at the output of each
layer plus the initial entity embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LukeForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
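Example (a minimal sketch, assuming the studio-ousia/luke-base checkpoint; whether the prediction is meaningful depends on the heads shipped with that checkpoint):
import torch
from transformers import LukeTokenizer, LukeForMaskedLM
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
model = LukeForMaskedLM.from_pretrained("studio-ousia/luke-base")
text = "Beyoncé lives in <mask>."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
...     outputs = model(**inputs)
# logits has shape (batch_size, sequence_length, config.vocab_size)
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = outputs.logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))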
LukeForEntityClassification
class transformers.LukeForEntityClassification
(
config
)
Parameters
config (LukeConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The LUKE model with a classification head on top (a linear layer on top of the hidden state of the first entity
token) for entity classification tasks, such as Open Entity.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
entity_ids: typing.Optional[torch.LongTensor] = None
entity_attention_mask: typing.Optional[torch.FloatTensor] = None
entity_token_type_ids: typing.Optional[torch.LongTensor] = None
entity_position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.luke.modeling_luke.EntityClassificationOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
entity_ids (torch.LongTensor of shape (batch_size, entity_length)) —
Indices of entity tokens in the entity vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
entity_attention_mask (torch.FloatTensor of shape (batch_size, entity_length), optional) —
Mask to avoid performing attention on padding entity token indices. Mask values selected in [0, 1]:
1 for entity tokens that are not masked,
0 for entity tokens that are masked.
entity_token_type_ids (torch.LongTensor of shape (batch_size, entity_length), optional) —
Segment token indices to indicate first and second portions of the entity token inputs. Indices are
selected in [0, 1]:
0 corresponds to a portion A entity token,
1 corresponds to a portion B entity token.
entity_position_ids (torch.LongTensor of shape (batch_size, entity_length, max_mention_length), optional) —
Indices of positions of each input entity in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,) or (batch_size, num_labels), optional) —
Labels for computing the classification loss. If the shape is (batch_size,), the cross entropy loss is
used for the single-label classification. In this case, labels should contain the indices that should be in
[0, ..., config.num_labels - 1]. If the shape is (batch_size, num_labels), the binary cross entropy
loss is used for the multi-label classification. In this case, labels should only contain [0, 1], where 0
and 1 indicate false and true, respectively.
Returns
transformers.models.luke.modeling_luke.EntityClassificationOutput or tuple(torch.FloatTensor)
A transformers.models.luke.modeling_luke.EntityClassificationOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LukeConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
entity_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, entity_length, hidden_size). Entity hidden-states of the model at the output of each
layer plus the initial entity embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The LukeForEntityClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, LukeForEntityClassification
tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-open-entity")
model = LukeForEntityClassification.from_pretrained("studio-ousia/luke-large-finetuned-open-entity")
text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7)] # character-based entity span corresponding to "Beyoncé"
inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
Predicted class: person
LukeForEntityPairClassification
class transformers.LukeForEntityPairClassification
(
config
)
Parameters
config (LukeConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The LUKE model with a classification head on top (a linear layer on top of the hidden states of the two entity
tokens) for entity pair classification tasks, such as TACRED.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
entity_ids: typing.Optional[torch.LongTensor] = None
entity_attention_mask: typing.Optional[torch.FloatTensor] = None
entity_token_type_ids: typing.Optional[torch.LongTensor] = None
entity_position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.luke.modeling_luke.EntityPairClassificationOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
entity_ids (torch.LongTensor of shape (batch_size, entity_length)) —
Indices of entity tokens in the entity vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
entity_attention_mask (torch.FloatTensor of shape (batch_size, entity_length), optional) —
Mask to avoid performing attention on padding entity token indices. Mask values selected in [0, 1]:
1 for entity tokens that are not masked,
0 for entity tokens that are masked.
entity_token_type_ids (torch.LongTensor of shape (batch_size, entity_length), optional) —
Segment token indices to indicate first and second portions of the entity token inputs. Indices are
selected in [0, 1]:
0 corresponds to a portion A entity token,
1 corresponds to a portion B entity token.
entity_position_ids (torch.LongTensor of shape (batch_size, entity_length, max_mention_length), optional) —
Indices of positions of each input entity in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,) or (batch_size, num_labels), optional) —
Labels for computing the classification loss. If the shape is (batch_size,), the cross entropy loss is
used for the single-label classification. In this case, labels should contain the indices that should be in
[0, ..., config.num_labels - 1]. If the shape is (batch_size, num_labels), the binary cross entropy
loss is used for the multi-label classification. In this case, labels should only contain [0, 1], where 0
and 1 indicate false and true, respectively.
Returns
transformers.models.luke.modeling_luke.EntityPairClassificationOutput or tuple(torch.FloatTensor)
A transformers.models.luke.modeling_luke.EntityPairClassificationOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LukeConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
entity_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, entity_length, hidden_size). Entity hidden-states of the model at the output of each
layer plus the initial entity embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The LukeForEntityPairClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, LukeForEntityPairClassification
tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
model = LukeForEntityPairClassification.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
text = "Beyoncé lives in Los Angeles."
entity_spans = [
... (0, 7),
... (17, 28),
... ] # character-based entity spans corresponding to "Beyoncé" and "Los Angeles"
inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
Predicted class: per:cities_of_residence
LukeForEntitySpanClassification
class transformers.LukeForEntitySpanClassification
(
config
)
Parameters
config (LukeConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The LUKE model with a span classification head on top (a linear layer on top of the hidden states output) for tasks
such as named entity recognition.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
entity_ids: typing.Optional[torch.LongTensor] = None
entity_attention_mask: typing.Optional[torch.LongTensor] = None
entity_token_type_ids: typing.Optional[torch.LongTensor] = None
entity_position_ids: typing.Optional[torch.LongTensor] = None
entity_start_positions: typing.Optional[torch.LongTensor] = None
entity_end_positions: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.luke.modeling_luke.EntitySpanClassificationOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
entity_ids (torch.LongTensor of shape (batch_size, entity_length)) —
Indices of entity tokens in the entity vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
entity_attention_mask (torch.FloatTensor of shape (batch_size, entity_length), optional) —
Mask to avoid performing attention on padding entity token indices. Mask values selected in [0, 1]:
1 for entity tokens that are not masked,
0 for entity tokens that are masked.
entity_token_type_ids (torch.LongTensor of shape (batch_size, entity_length), optional) —
Segment token indices to indicate first and second portions of the entity token inputs. Indices are
selected in [0, 1]:
0 corresponds to a portion A entity token,
1 corresponds to a portion B entity token.
entity_position_ids (torch.LongTensor of shape (batch_size, entity_length, max_mention_length), optional) —
Indices of positions of each input entity in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
entity_start_positions (torch.LongTensor) —
The start positions of entities in the word token sequence.
entity_end_positions (torch.LongTensor) —
The end positions of entities in the word token sequence.
labels (torch.LongTensor of shape (batch_size, entity_length) or (batch_size, entity_length, num_labels), optional) —
Labels for computing the classification loss. If the shape is (batch_size, entity_length), the cross
entropy loss is used for the single-label classification. In this case, labels should contain the indices
that should be in [0, ..., config.num_labels - 1]. If the shape is (batch_size, entity_length, num_labels), the binary cross entropy loss is used for the multi-label classification. In this case,
labels should only contain [0, 1], where 0 and 1 indicate false and true, respectively.
Returns
transformers.models.luke.modeling_luke.EntitySpanClassificationOutput or tuple(torch.FloatTensor)
A transformers.models.luke.modeling_luke.EntitySpanClassificationOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LukeConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, entity_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
entity_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, entity_length, hidden_size). Entity hidden-states of the model at the output of each
layer plus the initial entity embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The LukeForEntitySpanClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, LukeForEntitySpanClassification
tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003")
model = LukeForEntitySpanClassification.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003")
text = "Beyoncé lives in Los Angeles"
# List all possible entity spans in the text
word_start_positions = [0, 8, 14, 17, 21] # character-based start positions of word tokens
word_end_positions = [7, 13, 16, 20, 28] # character-based end positions of word tokens
entity_spans = []
for i, start_pos in enumerate(word_start_positions):
... for end_pos in word_end_positions[i:]:
... entity_spans.append((start_pos, end_pos))
inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_class_indices = logits.argmax(-1).squeeze().tolist()
for span, predicted_class_idx in zip(entity_spans, predicted_class_indices):
... if predicted_class_idx != 0:
... print(text[span[0] : span[1]], model.config.id2label[predicted_class_idx])
Beyoncé PER
Los Angeles LOC
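For training, the labels argument described above can be passed with the same inputs. Below is a minimal single-label sketch that reuses entity_spans, inputs, and model from the example above; the gold labels and the "PER" label name are illustrative assumptions about this checkpoint's label set, not values taken from it.
import torch
# Hypothetical gold labels: one class index per span, with 0 as the "no entity" class
# (consistent with the id2label lookup in the example above).
gold_labels = torch.zeros(1, len(entity_spans), dtype=torch.long)
gold_labels[0, 0] = model.config.label2id["PER"]  # entity_spans[0] == (0, 7) covers "Beyoncé"
outputs = model(**inputs, labels=gold_labels)
loss = outputs.loss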
LukeForSequenceClassification
class transformers.LukeForSequenceClassification
( config )
Parameters
config (LukeConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The LUKE Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
entity_ids: typing.Optional[torch.LongTensor] = None
entity_attention_mask: typing.Optional[torch.FloatTensor] = None
entity_token_type_ids: typing.Optional[torch.LongTensor] = None
entity_position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.luke.modeling_luke.LukeSequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
entity_ids (torch.LongTensor of shape (batch_size, entity_length)) —
Indices of entity tokens in the entity vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
entity_attention_mask (torch.FloatTensor of shape (batch_size, entity_length), optional) —
Mask to avoid performing attention on padding entity token indices. Mask values selected in [0, 1]:
1 for entity tokens that are not masked,
0 for entity tokens that are masked.
entity_token_type_ids (torch.LongTensor of shape (batch_size, entity_length), optional) —
Segment token indices to indicate first and second portions of the entity token inputs. Indices are
selected in [0, 1]:
0 corresponds to a portion A entity token,
1 corresponds to a portion B entity token.
entity_position_ids (torch.LongTensor of shape (batch_size, entity_length, max_mention_length), optional) —
Indices of positions of each input entity in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (mean squared error); if
config.num_labels > 1, a classification loss is computed (cross-entropy).
Returns
transformers.models.luke.modeling_luke.LukeSequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.models.luke.modeling_luke.LukeSequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LukeConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
entity_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, entity_length, hidden_size). Entity hidden-states of the model at the output of each
layer plus the initial entity embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LukeForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, LukeForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-base")
model = LukeForSequenceClassification.from_pretrained("studio-ousia/luke-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = LukeForSequenceClassification.from_pretrained("studio-ousia/luke-base", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, LukeForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-base")
model = LukeForSequenceClassification.from_pretrained("studio-ousia/luke-base", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = LukeForSequenceClassification.from_pretrained(
... "studio-ousia/luke-base", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
LukeForMultipleChoice
class transformers.LukeForMultipleChoice
( config )
Parameters
config (LukeConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The LUKE Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
entity_ids: typing.Optional[torch.LongTensor] = None
entity_attention_mask: typing.Optional[torch.FloatTensor] = None
entity_token_type_ids: typing.Optional[torch.LongTensor] = None
entity_position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.luke.modeling_luke.LukeMultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
entity_ids (torch.LongTensor of shape (batch_size, entity_length)) —
Indices of entity tokens in the entity vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
entity_attention_mask (torch.FloatTensor of shape (batch_size, entity_length), optional) —
Mask to avoid performing attention on padding entity token indices. Mask values selected in [0, 1]:
1 for entity tokens that are not masked,
0 for entity tokens that are masked.
entity_token_type_ids (torch.LongTensor of shape (batch_size, entity_length), optional) —
Segment token indices to indicate first and second portions of the entity token inputs. Indices are
selected in [0, 1]:
0 corresponds to a portion A entity token,
1 corresponds to a portion B entity token.
entity_position_ids (torch.LongTensor of shape (batch_size, entity_length, max_mention_length), optional) —
Indices of positions of each input entity in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.models.luke.modeling_luke.LukeMultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.models.luke.modeling_luke.LukeMultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LukeConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
entity_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, entity_length, hidden_size). Entity hidden-states of the model at the output of each
layer plus the initial entity embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LukeForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, LukeForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-base")
model = LukeForMultipleChoice.from_pretrained("studio-ousia/luke-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
LukeForTokenClassification
class transformers.LukeForTokenClassification
( config )
Parameters
config (LukeConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The LUKE Model with a token classification head on top (a linear layer on top of the hidden-states output). To
solve the Named-Entity Recognition (NER) task with LUKE, LukeForEntitySpanClassification is more suitable than this
class.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
entity_ids: typing.Optional[torch.LongTensor] = None
entity_attention_mask: typing.Optional[torch.FloatTensor] = None
entity_token_type_ids: typing.Optional[torch.LongTensor] = None
entity_position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.luke.modeling_luke.LukeTokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
entity_ids (torch.LongTensor of shape (batch_size, entity_length)) —
Indices of entity tokens in the entity vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
entity_attention_mask (torch.FloatTensor of shape (batch_size, entity_length), optional) —
Mask to avoid performing attention on padding entity token indices. Mask values selected in [0, 1]:
1 for entity tokens that are not masked,
0 for entity tokens that are masked.
entity_token_type_ids (torch.LongTensor of shape (batch_size, entity_length), optional) —
Segment token indices to indicate first and second portions of the entity token inputs. Indices are
selected in [0, 1]:
0 corresponds to a portion A entity token,
1 corresponds to a portion B entity token.
entity_position_ids (torch.LongTensor of shape (batch_size, entity_length, max_mention_length), optional) —
Indices of positions of each input entity in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.models.luke.modeling_luke.LukeTokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.models.luke.modeling_luke.LukeTokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LukeConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
entity_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, entity_length, hidden_size). Entity hidden-states of the model at the output of each
layer plus the initial entity embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LukeForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, LukeForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-base")
model = LukeForTokenClassification.from_pretrained("studio-ousia/luke-base")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
LukeForQuestionAnswering
class transformers.LukeForQuestionAnswering
( config )
Parameters
config (LukeConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The LUKE Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
entity_ids: typing.Optional[torch.LongTensor] = None
entity_attention_mask: typing.Optional[torch.FloatTensor] = None
entity_token_type_ids: typing.Optional[torch.LongTensor] = None
entity_position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.luke.modeling_luke.LukeQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
entity_ids (torch.LongTensor of shape (batch_size, entity_length)) —
Indices of entity tokens in the entity vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
entity_attention_mask (torch.FloatTensor of shape (batch_size, entity_length), optional) —
Mask to avoid performing attention on padding entity token indices. Mask values selected in [0, 1]:
1 for entity tokens that are not masked,
0 for entity tokens that are masked.
entity_token_type_ids (torch.LongTensor of shape (batch_size, entity_length), optional) —
Segment token indices to indicate first and second portions of the entity token inputs. Indices are
selected in [0, 1]:
0 corresponds to a portion A entity token,
1 corresponds to a portion B entity token.
entity_position_ids (torch.LongTensor of shape (batch_size, entity_length, max_mention_length), optional) —
Indices of positions of each input entity in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for the position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for the position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.models.luke.modeling_luke.LukeQuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.models.luke.modeling_luke.LukeQuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (LukeConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
entity_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, entity_length, hidden_size). Entity hidden-states of the model at the output of each
layer plus the initial entity embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The LukeForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, LukeForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-base")
model = LukeForQuestionAnswering.from_pretrained("studio-ousia/luke-base")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
GPT-J
Overview
The GPT-J model was released in the kingoflolz/mesh-transformer-jax repository by Ben Wang and Aran Komatsuzaki. It is a GPT-2-like
causal language model trained on the Pile dataset.
This model was contributed by Stella Biderman.
Tips:
To load GPT-J in float32 one would need at least 2x the model size in
RAM: 1x for the initial weights and another 1x to load the checkpoint. So for GPT-J it would take at least 48GB of
RAM just to load the model. To reduce the RAM usage there are a few options. The torch_dtype argument can be
used to initialize the model in half-precision (on a CUDA device only). There is also a "float16" branch of the
checkpoint that stores the fp16 weights, which can be used to further minimize the RAM usage:
from transformers import GPTJForCausalLM
import torch
device = "cuda"
model = GPTJForCausalLM.from_pretrained(
... "EleutherAI/gpt-j-6B",
... revision="float16",
... torch_dtype=torch.float16,
... ).to(device)
The model should fit on a 16GB GPU for inference. For training/fine-tuning it would take much more GPU RAM. The Adam
optimizer, for example, keeps four copies of the model: the weights, the gradients, and the running average and squared average of the gradients.
So it would need at least 4x the model size in GPU memory (a back-of-the-envelope estimate follows below), even with mixed precision, since gradient updates are kept in fp32. This
does not include the activations and data batches, which would again require some more GPU RAM. So one should explore
solutions such as DeepSpeed to train/fine-tune the model. Another option is to use the original codebase to
train/fine-tune the model on TPU and then convert the model to the Transformers format for inference. Instructions for
that can be found here.
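As a rough sanity check of the 4x estimate above, here is a back-of-the-envelope calculation; the 6 billion parameter count is an approximation for GPT-J-6B:
num_params = 6e9          # approximate parameter count of GPT-J-6B
bytes_per_param = 4       # fp32
optimizer_copies = 4      # weights + gradients + Adam running average and squared average
print(f"~{num_params * bytes_per_param * optimizer_copies / 1e9:.0f} GB, excluding activations and data batches")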
Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer. These extra
tokens are added for the sake of efficiency on TPUs. To avoid a mismatch between the embedding matrix size and the vocab
size, the tokenizer for GPT-J contains 143 extra tokens
<|extratoken_1|>... <|extratoken_143|>, so the vocab_size of the tokenizer also becomes 50400 (see the quick check below).
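A quick way to verify the sizes described in this tip; a minimal sketch where the commented values assume the EleutherAI/gpt-j-6B checkpoint behaves as described above:
from transformers import AutoTokenizer, GPTJConfig
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
config = GPTJConfig.from_pretrained("EleutherAI/gpt-j-6B")
print(len(tokenizer))     # 50400: 50257 GPT-2 tokens + 143 <|extratoken_*|> tokens
print(config.vocab_size)  # 50400: matches the embedding matrix size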
Generation
The generate() method can be used to generate text using a GPT-J model.
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
prompt = (
... "In a shocking finding, scientists discovered a herd of unicorns living in a remote, "
... "previously unexplored valley, in the Andes Mountains. Even more surprising to the "
... "researchers was the fact that the unicorns spoke perfect English."
... )
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(
... input_ids,
... do_sample=True,
... temperature=0.9,
... max_length=100,
... )
gen_text = tokenizer.batch_decode(gen_tokens)[0]
…or in float16 precision:
from transformers import GPTJForCausalLM, AutoTokenizer
import torch
device = "cuda"
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16).to(device)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
prompt = (
... "In a shocking finding, scientists discovered a herd of unicorns living in a remote, "
... "previously unexplored valley, in the Andes Mountains. Even more surprising to the "
... "researchers was the fact that the unicorns spoke perfect English."
... )
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
gen_tokens = model.generate(
... input_ids,
... do_sample=True,
... temperature=0.9,
... max_length=100,
... )
gen_text = tokenizer.batch_decode(gen_tokens)[0]
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT-J. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Text Generation
Description of GPT-J.
A blog on how to Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker.
A blog on how to Accelerate GPT-J inference with DeepSpeed-Inference on GPUs.
A blog post introducing GPT-J-6B: 6B JAX-Based Transformer. 🌎
A notebook for GPT-J-6B Inference Demo. 🌎
Another notebook demonstrating Inference with GPT-J-6B.
Causal language modeling chapter of the 🤗 Hugging Face Course.
GPTJForCausalLM is supported by this causal language modeling example script, text generation example script, and notebook.
TFGPTJForCausalLM is supported by this causal language modeling example script and notebook.
FlaxGPTJForCausalLM is supported by this causal language modeling example script and notebook.
Documentation resources
Text classification task guide
Question answering task guide
Causal language modeling task guide
GPTJConfig
class transformers.GPTJConfig
(
vocab_size = 50400
n_positions = 2048
n_embd = 4096
n_layer = 28
n_head = 16
rotary_dim = 64
n_inner = None
activation_function = 'gelu_new'
resid_pdrop = 0.0
embd_pdrop = 0.0
attn_pdrop = 0.0
layer_norm_epsilon = 1e-05
initializer_range = 0.02
use_cache = True
bos_token_id = 50256
eos_token_id = 50256
tie_word_embeddings = False
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50400) —
Vocabulary size of the GPT-J model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling GPTJModel.
n_positions (int, optional, defaults to 2048) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
n_embd (int, optional, defaults to 4096) —
Dimensionality of the embeddings and hidden states.
n_layer (int, optional, defaults to 28) —
Number of hidden layers in the Transformer encoder.
n_head (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
rotary_dim (int, optional, defaults to 64) —
Number of dimensions in the embedding that Rotary Position Embedding is applied to.
n_inner (int, optional, defaults to None) —
Dimensionality of the inner feed-forward layers. None will set it to 4 times n_embd.
activation_function (str, optional, defaults to "gelu_new") —
Activation function, to be selected in the list ["relu", "silu", "gelu", "tanh", "gelu_new"].
resid_pdrop (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
embd_pdrop (float, optional, defaults to 0.0) —
The dropout ratio for the embeddings.
attn_pdrop (float, optional, defaults to 0.0) —
The dropout ratio for the attention.
layer_norm_epsilon (float, optional, defaults to 1e-5) —
The epsilon to use in the layer normalization layers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a GPTJModel. It is used to instantiate a GPT-J
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the GPT-J
EleutherAI/gpt-j-6B architecture. Configuration objects inherit from
PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig
for more information.
Example:
from transformers import GPTJModel, GPTJConfig
# Initializing a GPT-J 6B configuration
configuration = GPTJConfig()
# Initializing a model from the configuration
model = GPTJModel(configuration)
# Accessing the model configuration
configuration = model.config
GPTJModel
class transformers.GPTJModel
( config )
Parameters
config (GPTJConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare GPT-J Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_attention_heads,) or (n_layer, num_attention_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_dim), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTJConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The GPTJModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
This example uses a random model because the real ones are all very large. To get proper results, you should use
EleutherAI/gpt-j-6B instead of hf-internal-testing/tiny-random-gptj. If you run out of memory when loading that checkpoint, you can try
adding device_map="auto" in the from_pretrained call.
Example:
from transformers import AutoTokenizer, GPTJModel
import torch
tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-gptj")
model = GPTJModel.from_pretrained("hf-internal-testing/tiny-random-gptj")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
GPTJForCausalLM
class transformers.GPTJForCausalLM
( config )
Parameters
config (GPTJConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPT-J Model transformer with a language modeling head on top.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_attention_heads,) or (n_layer, num_attention_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_dim), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100
are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTJConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The GPTJForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
This example uses a random model, as the real checkpoints are all very big. To get proper results, you should use
EleutherAI/gpt-j-6B instead of hf-internal-testing/tiny-random-gptj. If you run out of memory when loading that
checkpoint, you can try adding device_map="auto" to the from_pretrained call.
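For reference, a minimal sketch of loading the full checkpoint that way (this assumes the accelerate package is installed; the torch_dtype=torch.float16 argument is an optional extra that roughly halves the memory footprint):
import torch
from transformers import AutoTokenizer, GPTJForCausalLM

# Accelerate decides which layers go on GPU, CPU or disk.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = GPTJForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    device_map="auto",
    torch_dtype=torch.float16,
)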
Example:
import torch
from transformers import AutoTokenizer, GPTJForCausalLM
tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-gptj")
model = GPTJForCausalLM.from_pretrained("hf-internal-testing/tiny-random-gptj")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
logits = outputs.logits
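Building on the labels description above, here is a hedged sketch of how the -100 convention keeps padding positions out of the loss; reusing the EOS token as padding is an assumption, since GPT-J tokenizers ship without a dedicated pad token:
import torch
from transformers import AutoTokenizer, GPTJForCausalLM

tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-gptj")
model = GPTJForCausalLM.from_pretrained("hf-internal-testing/tiny-random-gptj")
tokenizer.pad_token = tokenizer.eos_token  # assumed: reuse EOS as the padding token

batch = tokenizer(["Hello, my dog is cute", "Hi"], padding=True, return_tensors="pt")
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # positions set to -100 are ignored by the loss
loss = model(**batch, labels=labels).loss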
GPTJForSequenceClassification
class transformers.GPTJForSequenceClassification
( config )
Parameters
config (GPTJConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPT-J Model transformer with a sequence classification head on top (linear layer).
GPTJForSequenceClassification uses the last token in order to do the classification, as other causal models
(e.g. GPT, GPT-2, GPT-Neo) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If
no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when inputs_embeds are passed instead of input_ids, it does the same (takes the last value in
each row of the batch).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
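Because of this padding-dependent behavior, a padded batch is only classified sensibly if the tokenizer and the model configuration agree on a pad token. A minimal sketch, assuming the checkpoint used in the examples below and reusing the EOS token as padding (an assumption, since GPT-J ships without one):
import torch
from transformers import AutoTokenizer, GPTJForSequenceClassification

checkpoint = "ydshieh/tiny-random-gptj-for-sequence-classification"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = GPTJForSequenceClassification.from_pretrained(checkpoint)

# Assumed: make the pad token explicit so the model can locate the last real token in each row.
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id

batch = tokenizer(["short text", "a somewhat longer piece of text"], padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits  # (batch_size, num_labels), taken at the last non-padding token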
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_attention_heads,) or (n_layer, num_attention_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_dim), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTJConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The GPTJForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
This example uses a random model, as the real checkpoints are all very big. To get proper results, you should use
EleutherAI/gpt-j-6B instead of ydshieh/tiny-random-gptj-for-sequence-classification. If you run out of memory when
loading that checkpoint, you can try adding device_map="auto" to the from_pretrained call.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, GPTJForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("ydshieh/tiny-random-gptj-for-sequence-classification")
model = GPTJForSequenceClassification.from_pretrained("ydshieh/tiny-random-gptj-for-sequence-classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = GPTJForSequenceClassification.from_pretrained("ydshieh/tiny-random-gptj-for-sequence-classification", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, GPTJForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("ydshieh/tiny-random-gptj-for-sequence-classification")
model = GPTJForSequenceClassification.from_pretrained("ydshieh/tiny-random-gptj-for-sequence-classification", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = GPTJForSequenceClassification.from_pretrained(
    "ydshieh/tiny-random-gptj-for-sequence-classification", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
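To read off the predicted labels above, the indices can be mapped back to names via the configuration's id2label dictionary (illustrative only; a random tiny model yields arbitrary generic names such as LABEL_0):
predicted_labels = [model.config.id2label[i] for i in predicted_class_ids.tolist()]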
GPTJForQuestionAnswering
class transformers.GPTJForQuestionAnswering
( config )
Parameters
config (GPTJConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPT-J Model transformer with a span classification head on top for extractive question-answering tasks like
SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_attention_heads,) or (n_layer, num_attention_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_dim), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTJConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The GPTJForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
This example uses a random model, as the real checkpoints are all very big. To get proper results, you should use
EleutherAI/gpt-j-6B instead of hf-internal-testing/tiny-random-gptj. If you run out of memory when loading that
checkpoint, you can try adding device_map="auto" to the from_pretrained call.
Example:
from transformers import AutoTokenizer, GPTJForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-gptj")
model = GPTJForQuestionAnswering.from_pretrained("hf-internal-testing/tiny-random-gptj")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
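To turn the predicted span back into text, the selected tokens can be decoded with the tokenizer (illustrative only; the tiny random checkpoint will not return a meaningful answer):
answer = tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)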
TFGPTJModel
class transformers.TFGPTJModel
( *args, **kwargs )
Parameters
config (GPTJConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare GPT-J Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
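As a hedged illustration of the three formats just listed (reusing the checkpoint from the example further below; loading the 6B checkpoint requires a correspondingly large amount of memory):
from transformers import AutoTokenizer, TFGPTJModel

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = TFGPTJModel.from_pretrained("EleutherAI/gpt-j-6B")
enc = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1. keyword arguments, like a PyTorch model
out = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])
# 2. a list with the tensors in the documented order
out = model([enc["input_ids"], enc["attention_mask"]])
# 3. a dictionary keyed by input name
out = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})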
call
(
input_ids: TFModelInputType | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPast or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past is None else past[0].shape[-2] (sequence_length of
input past key value states). Indices of input sequence tokens in the vocabulary.
If past is used, only input IDs that do not have their past calculated should be passed as input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
past_key_values (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see
past output below). Can be used to speed up sequential decoding. The token ids which have their past
given to this model should not be passed as input ids as they have already been computed.
attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used
in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past). Set to False during training, True during generation
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPast or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPast or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (GPTJConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFGPTJModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFGPTJModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = TFGPTJModel.from_pretrained("EleutherAI/gpt-j-6B")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
TFGPTJForCausalLM
class transformers.TFGPTJForCausalLM
( *args, **kwargs )
Parameters
config (GPTJConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPT-J Model transformer with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
labels: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFCausalLMOutputWithPast or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past is None else past[0].shape[-2] (sequence_length of
input past key value states). Indices of input sequence tokens in the vocabulary.
If past is used, only input IDs that do not have their past calculated should be passed as input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
past_key_values (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see
past output below). Can be used to speed up sequential decoding. The token ids which have their past
given to this model should not be passed as input ids as they have already been computed.
attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used
in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100
are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFCausalLMOutputWithPast or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFCausalLMOutputWithPast or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (GPTJConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFGPTJForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFGPTJForCausalLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = TFGPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
logits = outputs.logits
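Beyond a single forward pass, TFGPTJForCausalLM also inherits the text generation utilities (see TFGenerationMixin), which reuse past_key_values between steps when use_cache is enabled. A minimal sketch, assuming the same checkpoint as above:
from transformers import AutoTokenizer, TFGPTJForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = TFGPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

inputs = tokenizer("Hello, my dog is", return_tensors="tf")
# greedy decoding; the key/value cache is reused internally between steps
generated_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))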
TFGPTJForSequenceClassification
class transformers.TFGPTJForSequenceClassification
( *args, **kwargs )
Parameters
config (GPTJConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPT-J Model transformer with a sequence classification head on top (linear layer).
TFGPTJForSequenceClassification uses the last token in order to do the classification, as other causal models
(e.g. GPT, GPT-2, GPT-Neo) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If
no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when inputs_embeds are passed instead of input_ids, it does the same (takes the last value in
each row of the batch).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
labels: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFSequenceClassifierOutputWithPast or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past is None else past[0].shape[-2] (sequence_length of
input past key value states). Indices of input sequence tokens in the vocabulary.
If past is used, only input IDs that do not have their past calculated should be passed as input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
past_key_values (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see
past output below). Can be used to speed up sequential decoding. The token ids which have their past
given to this model should not be passed as input ids as they have already been computed.
attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used
in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (np.ndarray or tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutputWithPast or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutputWithPast or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (GPTJConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFGPTJForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFGPTJForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = TFGPTJForSequenceClassification.from_pretrained("EleutherAI/gpt-j-6B")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFGPTJForSequenceClassification.from_pretrained("EleutherAI/gpt-j-6B", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
TFGPTJForQuestionAnswering
class transformers.TFGPTJForQuestionAnswering
( *args, **kwargs )
Parameters
config (GPTJConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The GPT-J Model transformer with a span classification head on top for extractive question-answering tasks like
SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
start_positions: np.ndarray | tf.Tensor | None = None
end_positions: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past is None else past[0].shape[-2] (sequence_length of
input past key value states). Indices of input sequence tokens in the vocabulary.
If past is used, only input IDs that do not have their past calculated should be passed as input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
past_key_values (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see
past output below). Can be used to speed up sequential decoding. The token ids which have their past
given to this model should not be passed as input ids as they have already been computed.
attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used
in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (np.ndarray or tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (np.ndarray or tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (GPTJConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFGPTJForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFGPTJForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = TFGPTJForQuestionAnswering.from_pretrained("EleutherAI/gpt-j-6B")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = tf.constant([14])
target_end_index = tf.constant([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
FlaxGPTJModel
class transformers.FlaxGPTJModel
(
config: GPTJConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (GPTJConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The bare GPTJ Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
<
source
>
(
input_ids
attention_mask = None
position_ids = None
params: dict = None
past_key_values: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length. Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTJConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxGPTJPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxGPTJModel
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = FlaxGPTJModel.from_pretrained("EleutherAI/gpt-j-6B")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
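The dtype argument described above only changes the computation dtype; as a rough sketch (assuming enough host memory for the checkpoint), the parameters themselves can also be cast to half precision with to_bf16():
import jax.numpy as jnp
from transformers import FlaxGPTJModel

model = FlaxGPTJModel.from_pretrained("EleutherAI/gpt-j-6B", dtype=jnp.bfloat16)
# cast the stored parameters to bfloat16 as well
model.params = model.to_bf16(model.params)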
FlaxGPTJForCausalLM
class transformers.FlaxGPTJForCausalLM
<
source
>
(
config: GPTJConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (GPTJConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The GPTJ Model transformer with a language modeling head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a Flax Linen
flax.nn.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
<
source
>
(
input_ids
attention_mask = None
position_ids = None
params: dict = None
past_key_values: dict = None
dropout_rng: PRNGKey = None
train: bool = False
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length. Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (GPTJConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxGPTJPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxGPTJForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = FlaxGPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
# retrieve logits for next token
next_token_logits = outputs.logits[:, -1]
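As a rough sketch building on the example above, the single most likely next token can be picked greedily and decoded (for full decoding one would normally use the generate() method instead):
import jax.numpy as jnp

# greedy choice of the next token id for the first (and only) sequence in the batch
next_token_id = int(jnp.argmax(next_token_logits, axis=-1)[0])
print(tokenizer.decode([next_token_id]))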
Graphormer
Overview
The Graphormer model was proposed in Do Transformers Really Perform Bad for Graph Representation? by
Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen and Tie-Yan Liu. It is a Graph Transformer model, modified to allow computations on graphs instead of text sequences by generating embeddings and features of interest during preprocessing and collation, then using a modified attention.
The abstract from the paper is the following:
The Transformer architecture has become a dominant choice in many domains, such as natural language processing and computer vision. Yet, it has not achieved competitive performance on popular leaderboards of graph-level prediction compared to mainstream GNN variants. Therefore, it remains a mystery how Transformers could perform well for graph representation learning. In this paper, we solve this mystery by presenting Graphormer, which is built upon the standard Transformer architecture, and could attain excellent results on a broad range of graph representation learning tasks, especially on the recent OGB Large-Scale Challenge. Our key insight to utilizing Transformer in the graph is the necessity of effectively encoding the structural information of a graph into the model. To this end, we propose several simple yet effective structural encoding methods to help Graphormer better model graph-structured data. Besides, we mathematically characterize the expressive power of Graphormer and exhibit that with our ways of encoding the structural information of graphs, many popular GNN variants could be covered as the special cases of Graphormer.
Tips:
This model will not work well on large graphs (more than 100 nodes/edges), as memory usage will explode.
You can reduce the batch size, increase your RAM, or decrease the UNREACHABLE_NODE_DISTANCE parameter in algos_graphormer.pyx, but it will be hard to go above 700 nodes/edges.
This model does not use a tokenizer, but instead a special collator during training.
This model was contributed by clefourrier. The original code can be found here.
GraphormerConfig
class transformers.GraphormerConfig
<
source
>
(
num_classes: int = 1
num_atoms: int = 4608
num_edges: int = 1536
num_in_degree: int = 512
num_out_degree: int = 512
num_spatial: int = 512
num_edge_dis: int = 128
multi_hop_max_dist: int = 5
spatial_pos_max: int = 1024
edge_type: str = 'multi_hop'
max_nodes: int = 512
share_input_output_embed: bool = False
num_hidden_layers: int = 12
embedding_dim: int = 768
ffn_embedding_dim: int = 768
num_attention_heads: int = 32
dropout: float = 0.1
attention_dropout: float = 0.1
layerdrop: float = 0.0
encoder_normalize_before: bool = False
pre_layernorm: bool = False
apply_graphormer_init: bool = False
activation_fn: str = 'gelu'
embed_scale: float = None
freeze_embeddings: bool = False
num_trans_layers_to_freeze: int = 0
traceable: bool = False
q_noise: float = 0.0
qn_block_size: int = 8
kdim: int = None
vdim: int = None
bias: bool = True
self_attention: bool = True
pad_token_id = 0
bos_token_id = 1
eos_token_id = 2
**kwargs
)
Parameters
num_classes (int, optional, defaults to 1) —
Number of target classes or labels, set to n for binary classification of n tasks.
num_atoms (int, optional, defaults to 512*9) —
Number of node types in the graphs.
num_edges (int, optional, defaults to 512*3) —
Number of edge types in the graph.
num_in_degree (int, optional, defaults to 512) —
Number of in-degree types in the input graphs.
num_out_degree (int, optional, defaults to 512) —
Number of out-degree types in the input graphs.
num_edge_dis (int, optional, defaults to 128) —
Number of edge distances in the input graphs.
multi_hop_max_dist (int, optional, defaults to 5) —
Maximum distance of multi hop edges between two nodes.
spatial_pos_max (int, optional, defaults to 1024) —
Maximum distance between nodes in the graph attention bias matrices, used during preprocessing and
collation.
edge_type (str, optional, defaults to "multi_hop") —
Type of edge relation chosen.
max_nodes (int, optional, defaults to 512) —
Maximum number of nodes which can be parsed for the input graphs.
share_input_output_embed (bool, optional, defaults to False) —
Shares the embedding layer between encoder and decoder - careful, True is not implemented.
num_hidden_layers (int, optional, defaults to 12) —
Number of layers.
embedding_dim (int, optional, defaults to 768) —
Dimension of the embedding layer in encoder.
ffn_embedding_dim (int, optional, defaults to 768) —
Dimension of the “intermediate” (often named feed-forward) layer in encoder.
num_attention_heads (int, optional, defaults to 32) —
Number of attention heads in the encoder.
self_attention (bool, optional, defaults to True) —
Model is self attentive (False not implemented).
activation_fn (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout probability for the attention weights.
layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
bias (bool, optional, defaults to True) —
Uses bias in the attention module - unsupported at the moment.
embed_scale(float, optional, defaults to None) —
Scaling factor for the node embeddings.
num_trans_layers_to_freeze (int, optional, defaults to 0) —
Number of transformer layers to freeze.
encoder_normalize_before (bool, optional, defaults to False) —
Normalize features before encoding the graph.
pre_layernorm (bool, optional, defaults to False) —
Apply layernorm before self attention and the feed forward network. Without this, post layernorm will be
used.
apply_graphormer_init (bool, optional, defaults to False) —
Apply a custom graphormer initialisation to the model before training.
freeze_embeddings (bool, optional, defaults to False) —
Freeze the embedding layer, or train it along the model.
q_noise (float, optional, defaults to 0.0) —
Amount of quantization noise (see “Training with Quantization Noise for Extreme Model Compression”). (For
more detail, see fairseq’s documentation on quant_noise).
qn_block_size (int, optional, defaults to 8) —
Size of the blocks for subsequent quantization with iPQ (see q_noise).
kdim (int, optional, defaults to None) —
Dimension of the key in the attention, if different from the other values.
vdim (int, optional, defaults to None) —
Dimension of the value in the attention, if different from the other values.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
traceable (bool, optional, defaults to False) —
Changes return value of the encoder’s inner_state to stacked tensors.
This is the configuration class to store the configuration of a GraphormerModel. It is used to instantiate a
Graphormer model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Graphormer
graphormer-base-pcqm4mv1 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
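Example (a minimal sketch, following the pattern of the other configuration examples in these docs):
from transformers import GraphormerConfig, GraphormerModel
# Initializing a default Graphormer configuration
configuration = GraphormerConfig()
# Initializing a model (with random weights) from that configuration
model = GraphormerModel(configuration)
# Accessing the model configuration
configuration = model.config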
GraphormerModel
class transformers.GraphormerModel
<
source
>
(
config: GraphormerConfig
)
The Graphormer model is a graph-encoder model.
It goes from a graph to its representation. If you want to use the model for a downstream classification task, use
GraphormerForGraphClassification instead. For any other downstream task, feel free to add a new class, or combine
this model with a downstream model of your choice, following the example in GraphormerForGraphClassification.
forward
<
source
>
(
input_nodes: LongTensor
input_edges: LongTensor
attn_bias: Tensor
in_degree: LongTensor
out_degree: LongTensor
spatial_pos: LongTensor
attn_edge_type: LongTensor
perturb = None
masked_tokens = None
return_dict: typing.Optional[bool] = None
**unused
)
GraphormerForGraphClassification
class transformers.GraphormerForGraphClassification
<
source
>
(
config: GraphormerConfig
)
This model can be used for graph-level classification or regression tasks.
It can be trained on:
regression (by setting config.num_classes to 1), with one float-type label per graph;
one-task classification (by setting config.num_classes to the number of classes), with one integer label per graph;
binary multi-task classification (by setting config.num_classes to the number of labels), with a list of integer labels for each graph.
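For illustration, a minimal instantiation sketch (the two-class, single-task setup is hypothetical; real training additionally requires preprocessed graph inputs and the Graphormer data collator):
from transformers import GraphormerConfig, GraphormerForGraphClassification
# hypothetical two-class, single-task graph classification setup
config = GraphormerConfig(num_classes=2)
model = GraphormerForGraphClassification(config)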
forward
<
source
>
(
input_nodes: LongTensor
input_edges: LongTensor
attn_bias: Tensor
in_degree: LongTensor
out_degree: LongTensor
spatial_pos: LongTensor
attn_edge_type: LongTensor
labels: typing.Optional[torch.LongTensor] = None
return_dict: typing.Optional[bool] = None
**unused
)
ViLT
The quickest way to get started with ViLT is by checking the example notebooks
(which showcase both inference and fine-tuning on custom data).
ViLT is a model that takes both pixel_values and input_ids as input. One can use ViltProcessor to prepare data for the model.
This processor wraps an image processor (for the image modality) and a tokenizer (for the language modality) into one.
ViLT is trained with images of various sizes: the authors resize the shorter edge of input images to 384 and limit the longer edge to
under 640 while preserving the aspect ratio. To make batching of images possible, the authors use a pixel_mask that indicates
which pixel values are real and which are padding. ViltProcessor automatically creates this for you.
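As a rough sketch of this preparation step (the caption text is made up; the checkpoint is the one used in the examples further below):
from transformers import ViltProcessor
from PIL import Image
import requests

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(image, "two cats lying on a couch", return_tensors="pt")
# the processor returns text tensors (input_ids, token_type_ids, attention_mask)
# as well as pixel_values and the automatically created pixel_mask
print(sorted(inputs.keys()))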
The design of ViLT is very similar to that of a standard Vision Transformer (ViT). The only difference is that the model includes
additional embedding layers for the language modality.
ViLT architecture. Taken from the original paper.
This model was contributed by nielsr. The original code can be found here.
Tips:
The PyTorch version of this model is only available in torch 1.10 and higher.
ViltConfig
class transformers.ViltConfig
<
source
>
(
vocab_size = 30522
type_vocab_size = 2
modality_type_vocab_size = 2
max_position_embeddings = 40
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
initializer_range = 0.02
layer_norm_eps = 1e-12
image_size = 384
patch_size = 32
num_channels = 3
qkv_bias = True
max_image_length = -1
tie_word_embeddings = False
num_images = -1
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the text part of the model. Defines the number of different tokens that can be
represented by the inputs_ids passed when calling ViltModel.
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling ViltModel. This is used when encoding
text.
modality_type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the modalities passed when calling ViltModel. This is used after concatenating the
embeddings of the text and image modalities.
max_position_embeddings (int, optional, defaults to 40) —
The maximum sequence length that this model might ever be used with.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
image_size (int, optional, defaults to 384) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 32) —
The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) —
The number of input channels.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries, keys and values.
max_image_length (int, optional, defaults to -1) —
The maximum number of patches to take as input for the Transformer encoder. If set to a positive integer,
the encoder will sample max_image_length patches at maximum. If set to -1, will not be taken into
account.
num_images (int, optional, defaults to -1) —
The number of images to use for natural language visual reasoning. If set to a positive integer, will be
used by ViltForImagesAndTextClassification for defining the classifier head.
This is the configuration class to store the configuration of a ViltModel. It is used to instantiate a ViLT
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the ViLT
dandelin/vilt-b32-mlm architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import ViltConfig, ViltModel
# Initializing a ViLT dandelin/vilt-b32-mlm style configuration
configuration = ViltConfig()
# Initializing a model from the dandelin/vilt-b32-mlm style configuration
model = ViltModel(configuration)
# Accessing the model configuration
configuration = model.config
ViltFeatureExtractor
class transformers.ViltFeatureExtractor
<
source
>
(
*args
**kwargs
)
__call__
<
source
>
(
images
**kwargs
)
Preprocess an image or a batch of images.
ViltImageProcessor
class transformers.ViltImageProcessor
<
source
>
(
do_resize: bool = True
size: typing.Dict[str, int] = None
size_divisor: int = 32
resample: Resampling = <Resampling.BICUBIC: 3>
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_pad: bool = True
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the
do_resize parameter in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 384}) —
Resize the shorter side of the input to size["shortest_edge"]. The longer side will be limited to under
int((1333 / 800) * size["shortest_edge"]) while preserving the aspect ratio. Only has an effect if
do_resize is set to True. Can be overridden by the size parameter in the preprocess method.
size_divisor (int, optional, defaults to 32) —
The size by which to make sure both the height and width can be divided. Only has an effect if do_resize
is set to True. Can be overridden by the size_divisor parameter in the preprocess method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) —
Resampling filter to use if resizing the image. Only has an effect if do_resize is set to True. Can be
overridden by the resample parameter in the preprocess method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the
do_rescale parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Only has an effect if do_rescale is set to True. Can be
overridden by the rescale_factor parameter in the preprocess method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess
method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
do_pad (bool, optional, defaults to True) —
Whether to pad the image to the (max_height, max_width) of the images in the batch. Can be overridden by
the do_pad parameter in the preprocess method.
Constructs a ViLT image processor.
preprocess
<
source
>
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: typing.Optional[bool] = None
size: typing.Union[typing.Dict[str, int], NoneType] = None
size_divisor: typing.Optional[int] = None
resample: Resampling = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Optional[float] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_pad: typing.Optional[bool] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Controls the size of the image after resize. The shortest edge of the image is resized to
size["shortest_edge"] whilst preserving the aspect ratio. If the longest edge of this resized image
is > int(size["shortest_edge"] * (1333 / 800)), then the image is resized again to make the longest
edge equal to int(size["shortest_edge"] * (1333 / 800)).
size_divisor (int, optional, defaults to self.size_divisor) —
The image is resized to a size that is a multiple of this value.
resample (PILImageResampling, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. Only has an effect if do_resize is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values to the [0, 1] range.
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean to normalize the image by if do_normalize is set to True.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation to normalize the image by if do_normalize is set to True.
do_pad (bool, optional, defaults to self.do_pad) —
Whether to pad the image to the (max_height, max_width) in the batch. If True, a pixel mask is also
created and returned.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Preprocess an image or batch of images.
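A minimal usage sketch (loading the preprocessing configuration from the checkpoint used elsewhere in these docs; the sample image is the same one used in the model examples below):
from transformers import ViltImageProcessor
from PIL import Image
import requests

image_processor = ViltImageProcessor.from_pretrained("dandelin/vilt-b32-mlm")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
encoding = image_processor(image, return_tensors="pt")
# with do_pad=True (the default) the output contains both pixel_values and pixel_mask
print(encoding.pixel_values.shape)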
ViltProcessor
class transformers.ViltProcessor
<
source
>
(
image_processor = None
tokenizer = None
**kwargs
)
Parameters
image_processor (ViltImageProcessor) —
An instance of ViltImageProcessor. The image processor is a required input.
tokenizer (BertTokenizerFast) —
An instance of BertTokenizerFast. The tokenizer is a required input.
Constructs a ViLT processor which wraps a BERT tokenizer and ViLT image processor into a single processor.
ViltProcessor offers all the functionalities of ViltImageProcessor and BertTokenizerFast. See the
docstring of __call__() and decode() for more information.
__call__
<
source
>
(
images
text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
stride: int = 0
pad_to_multiple_of: typing.Optional[int] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
**kwargs
)
This method uses the ViltImageProcessor.__call__() method to prepare image(s) for the model, and
BertTokenizerFast.__call__() to prepare text for the model.
Please refer to the docstring of the above two methods for more information.
ViltModel
class transformers.ViltModel
<
source
>
(
config
add_pooling_layer = True
)
Parameters
config (ViltConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ViLT Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
pixel_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
image_embeds: typing.Optional[torch.FloatTensor] = None
image_token_type_idx: typing.Optional[int] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input
IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ViltImageProcessor.call() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
image_embeds (torch.FloatTensor of shape (batch_size, num_patches, hidden_size), optional) —
Optionally, instead of passing pixel_values, you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert pixel_values into patch embeddings.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ViltConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ViltModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import ViltProcessor, ViltModel
from PIL import Image
import requests
# prepare image and text
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "hello world"
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm")
model = ViltModel.from_pretrained("dandelin/vilt-b32-mlm")
inputs = processor(image, text, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
ViltForMaskedLM
class transformers.ViltForMaskedLM
<
source
>
(
config
)
Parameters
config (ViltConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ViLT Model with a language modeling head on top as done during pretraining.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
pixel_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
image_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input
IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ViltImageProcessor.call() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
image_embeds (torch.FloatTensor of shape (batch_size, num_patches, hidden_size), optional) —
Optionally, instead of passing pixel_values, you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert pixel_values into patch embeddings.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, …,
config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, …, config.vocab_size]
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ViltConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ViltForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import ViltProcessor, ViltForMaskedLM
import requests
from PIL import Image
import re
import torch
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "a bunch of [MASK] laying on a [MASK]."
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm")
model = ViltForMaskedLM.from_pretrained("dandelin/vilt-b32-mlm")
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
# forward pass
outputs = model(**encoding)
tl = len(re.findall(r"\[MASK\]", text))
inferred_token = [text]
# gradually fill in the MASK tokens, one by one
with torch.no_grad():
... for i in range(tl):
... encoded = processor.tokenizer(inferred_token)
... input_ids = torch.tensor(encoded.input_ids)
... encoded = encoded["input_ids"][0][1:-1]
... outputs = model(input_ids=input_ids, pixel_values=encoding.pixel_values)
... mlm_logits = outputs.logits[0] # shape (seq_len, vocab_size)
... # only take into account text features (minus CLS and SEP token)
... mlm_logits = mlm_logits[1 : input_ids.shape[1] - 1, :]
... mlm_values, mlm_ids = mlm_logits.softmax(dim=-1).max(dim=-1)
... # only take into account text
... mlm_values[torch.tensor(encoded) != 103] = 0
... select = mlm_values.argmax().item()
... encoded[select] = mlm_ids[select].item()
... inferred_token = [processor.decode(encoded)]
selected_token = ""
encoded = processor.tokenizer(inferred_token)
output = processor.decode(encoded.input_ids[0], skip_special_tokens=True)
print(output)
a bunch of cats laying on a couch.
ViltForQuestionAnswering
class transformers.ViltForQuestionAnswering
<
source
>
(
config
)
Parameters
config (ViltConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Vilt Model transformer with a classifier head on top (a linear layer on top of the final hidden state of the [CLS]
token) for visual question answering, e.g. for VQAv2.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
pixel_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
image_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input
IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ViltImageProcessor.call() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
image_embeds (torch.FloatTensor of shape (batch_size, num_patches, hidden_size), optional) —
Optionally, instead of passing pixel_values, you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert pixel_values into patch embeddings.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.FloatTensor of shape (batch_size, num_labels), optional) —
Labels for computing the visual question answering loss. This tensor must be either a one-hot encoding of
all answers that are applicable for a given example in the batch, or a soft encoding indicating which
answers are applicable, where 1.0 is the highest score.
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ViltConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ViltForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import ViltProcessor, ViltForQuestionAnswering
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "How many cats are there?"
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
# forward pass
outputs = model(**encoding)
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
Predicted answer: 2
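Building on the example above, a rough sketch of computing the training loss by passing soft target scores as labels (the target construction here is hypothetical and simply reuses the predicted index as the single applicable answer):
import torch

targets = torch.zeros(1, model.config.num_labels)
targets[0, idx] = 1.0  # hypothetical: mark one applicable answer with the highest score
outputs = model(**encoding, labels=targets)
loss = outputs.loss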
ViltForImagesAndTextClassification
class transformers.ViltForImagesAndTextClassification
<
source
>
(
config
)
Parameters
config (ViltConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Vilt Model transformer with a classifier head on top for natural language visual reasoning, e.g. NLVR2.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
pixel_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
image_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.vilt.modeling_vilt.ViltForImagesAndTextClassificationOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input
IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ViltImageProcessor.__call__() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
image_embeds (torch.FloatTensor of shape (batch_size, num_patches, hidden_size), optional) —
Optionally, instead of passing pixel_values, you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert pixel_values into patch embeddings.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Binary classification labels.
Returns
transformers.models.vilt.modeling_vilt.ViltForImagesAndTextClassificationOutput or tuple(torch.FloatTensor)
A transformers.models.vilt.modeling_vilt.ViltForImagesAndTextClassificationOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ViltConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (List[tuple(torch.FloatTensor)], optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — List of tuples of torch.FloatTensor (one for each image-text pair, each tuple containing the output of
the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (List[tuple(torch.FloatTensor)], optional, returned when output_attentions=True is passed or when config.output_attentions=True) — List of tuples of torch.FloatTensor (one for each image-text pair, each tuple containing the attention
weights of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the
attention softmax, used to compute the weighted average in the self-attention heads.
The ViltForImagesAndTextClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import ViltProcessor, ViltForImagesAndTextClassification
import requests
from PIL import Image
image1 = Image.open(requests.get("https://lil.nlp.cornell.edu/nlvr/exs/ex0_0.jpg", stream=True).raw)
image2 = Image.open(requests.get("https://lil.nlp.cornell.edu/nlvr/exs/ex0_1.jpg", stream=True).raw)
text = "The left image contains twice the number of dogs as the right image."
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-nlvr2")
model = ViltForImagesAndTextClassification.from_pretrained("dandelin/vilt-b32-finetuned-nlvr2")
# prepare inputs
encoding = processor([image1, image2], text, return_tensors="pt")
# forward pass
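# pixel_values from the processor stack the two images as (num_images, num_channels, height, width),
# so unsqueeze(0) adds the leading batch dimension this model expects: (batch_size, num_images, num_channels, height, width)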
outputs = model(input_ids=encoding.input_ids, pixel_values=encoding.pixel_values.unsqueeze(0))
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
Predicted answer: True
ViltForImageAndTextRetrieval
class transformers.ViltForImageAndTextRetrieval
<
source
>
(
config
)
Parameters
config (ViltConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Vilt Model transformer with a classifier head on top (a linear layer on top of the final hidden state of the [CLS]
token) for image-to-text or text-to-image retrieval, e.g. MSCOCO and F30K.
This model is a PyTorch torch.nn.Module (https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
pixel_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
image_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input
IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ViltImageProcessor.__call__() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
image_embeds (torch.FloatTensor of shape (batch_size, num_patches, hidden_size), optional) —
Optionally, instead of passing pixel_values, you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert pixel_values into patch embeddings.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels are currently not supported.
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ViltConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ViltForImageAndTextRetrieval forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import ViltProcessor, ViltForImageAndTextRetrieval
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-coco")
model = ViltForImageAndTextRetrieval.from_pretrained("dandelin/vilt-b32-finetuned-coco")
# forward pass
scores = dict()
for text in texts:
... # prepare inputs
... encoding = processor(image, text, return_tensors="pt")
... outputs = model(**encoding)
... scores[text] = outputs.logits[0, :].item()
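As a small follow-up (not part of the original snippet), the best-matching description can then be read off the scores dictionary:
best_text = max(scores, key=scores.get)
print("Best match:", best_text)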
ViltForTokenClassification
class transformers.ViltForTokenClassification
<
source
>
(
config
)
Parameters
config (ViltConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ViLT Model with a token classification head on top (a linear layer on top of the final hidden-states of the text
tokens) e.g. for Named-Entity-Recognition (NER) tasks.
This model is a PyTorch torch.nn.Module (https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
pixel_values: typing.Optional[torch.FloatTensor] = None
pixel_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
image_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See
PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input
IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ViltImageProcessor.__call__() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixels that are real (i.e. not masked),
0 for pixels that are padding (i.e. masked).
What are attention masks?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
image_embeds (torch.FloatTensor of shape (batch_size, num_patches, hidden_size), optional) —
Optionally, instead of passing pixel_values, you can choose to directly pass an embedded representation.
This is useful if you want more control over how to convert pixel_values into patch embeddings.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, text_sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ViltConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ViltForTokenClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
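Example (an illustrative sketch rather than the library's reference example: there is no officially fine-tuned ViLT token-classification checkpoint, so the base dandelin/vilt-b32-mlm weights are loaded, and the classification head and num_labels below are assumptions that would need fine-tuning):
from transformers import ViltProcessor, ViltForTokenClassification
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "Two cats are lying on a couch"
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm")
# the token classification head is randomly initialized on top of the base checkpoint
model = ViltForTokenClassification.from_pretrained("dandelin/vilt-b32-mlm", num_labels=2)
encoding = processor(image, text, return_tensors="pt")
outputs = model(**encoding)
# logits cover only the text tokens: (batch_size, text_sequence_length, num_labels)
print(outputs.logits.shape)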
ProphetNet
DISCLAIMER: If you see something strange, file a GitHub issue and assign
@patrickvonplaten
Overview
The ProphetNet model was proposed in ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training, by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei
Zhang, Ming Zhou on 13 Jan, 2020.
ProphetNet is an encoder-decoder model that can predict the next n tokens (“n-gram” language modeling) instead of just
the next token.
The abstract from the paper is the following:
In this paper, we present a new sequence-to-sequence pretraining model called ProphetNet, which introduces a novel
self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of
the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by
n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time
step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent
overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale
dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for
abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new
state-of-the-art results on all these datasets compared to the models using the same scale pretraining corpus.
Tips:
ProphetNet is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than
the left.
The model architecture is based on the original Transformer, but replaces the “standard” self-attention mechanism in the decoder with a main self-attention mechanism and a self- and n-stream (predict) self-attention mechanism.
The authors’ code can be found here.
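To make the tips concrete, here is a minimal generation sketch; the summarization checkpoint name and the generation settings are illustrative assumptions, not part of the original page:
from transformers import AutoTokenizer, ProphetNetForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("microsoft/prophetnet-large-uncased-cnndm")
model = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased-cnndm")
article = "ProphetNet is optimized with future n-gram prediction, learning to predict the next n tokens at each step instead of only the next one."
inputs = tokenizer(article, return_tensors="pt")
# generate() only uses the main (next-token) stream; the extra predict streams are a pre-training objective
summary_ids = model.generate(**inputs, num_beams=4, max_length=60)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])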
Documentation resources
Causal language modeling task guide
Translation task guide
Summarization task guide
ProphetNetConfig
class transformers.ProphetNetConfig
<
source
>
(
activation_dropout: typing.Optional[float] = 0.1
activation_function: typing.Union[str, typing.Callable, NoneType] = 'gelu'
vocab_size: typing.Optional[int] = 30522
hidden_size: typing.Optional[int] = 1024
encoder_ffn_dim: typing.Optional[int] = 4096
num_encoder_layers: typing.Optional[int] = 12
num_encoder_attention_heads: typing.Optional[int] = 16
decoder_ffn_dim: typing.Optional[int] = 4096
num_decoder_layers: typing.Optional[int] = 12
num_decoder_attention_heads: typing.Optional[int] = 16
attention_dropout: typing.Optional[float] = 0.1
dropout: typing.Optional[float] = 0.1
max_position_embeddings: typing.Optional[int] = 512
init_std: typing.Optional[float] = 0.02
is_encoder_decoder: typing.Optional[bool] = True
add_cross_attention: typing.Optional[bool] = True
decoder_start_token_id: typing.Optional[int] = 0
ngram: typing.Optional[int] = 2
num_buckets: typing.Optional[int] = 32
relative_max_distance: typing.Optional[int] = 128
disable_ngram_loss: typing.Optional[bool] = False
eps: typing.Optional[float] = 0.0
use_cache: typing.Optional[bool] = True
pad_token_id: typing.Optional[int] = 0
bos_token_id: typing.Optional[int] = 1
eos_token_id: typing.Optional[int] = 2
**kwargs
)
Parameters
activation_dropout (float, optional, defaults to 0.1) —
The dropout ratio for activations inside the fully connected layer.
activation_function (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the ProphetNet model. Defines the number of different tokens that can be represented by
the input_ids passed when calling ProphetNetModel.
hidden_size (int, optional, defaults to 1024) —
Dimensionality of the layers and the pooler layer.
encoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the encoder.
num_encoder_layers (int, optional, defaults to 12) —
Number of encoder layers.
num_encoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_ffn_dim (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the decoder.
num_decoder_layers (int, optional, defaults to 12) —
Number of decoder layers.
num_decoder_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer decoder.
attention_dropout (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
add_cross_attention (bool, optional, defaults to True) —
Whether cross-attention layers should be added to the model.
is_encoder_decoder (bool, optional, defaults to True) —
Whether this is an encoder/decoder model.
pad_token_id (int, optional, defaults to 0) —
Padding token id.
bos_token_id (int, optional, defaults to 1) —
Beginning of stream token id.
eos_token_id (int, optional, defaults to 2) —
End of stream token id.
ngram (int, optional, defaults to 2) —
Number of future tokens to predict. Set to 1 to behave like a traditional language model and predict only the
next token.
num_buckets (int, optional, defaults to 32) —
The number of buckets to use for each attention layer. This is for relative position calculation. See the
T5 paper (https://arxiv.org/abs/1910.10683) for more details.
relative_max_distance (int, optional, defaults to 128) —
Relative distances greater than this number will be put into the last same bucket. This is for relative
position calculation. See the T5 paper (https://arxiv.org/abs/1910.10683) for more details.
disable_ngram_loss (bool, optional, defaults to False) —
Whether to disable the n-gram loss and train the model to predict only the next token.
eps (float, optional, defaults to 0.0) —
Controls the epsilon parameter value for label smoothing in the loss calculation. If set to 0, no label
smoothing is performed.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a ProphetNetModel. It is used to instantiate a
ProphetNet model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the ProphetNet
microsoft/prophetnet-large-uncased architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
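For reference, the usual configuration workflow looks like this (a minimal sketch):
from transformers import ProphetNetConfig, ProphetNetModel
# Initializing a ProphetNet microsoft/prophetnet-large-uncased style configuration
configuration = ProphetNetConfig()
# Initializing a model (with random weights) from the configuration
model = ProphetNetModel(configuration)
# Accessing the model configuration
configuration = model.config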
ProphetNetTokenizer
class transformers.ProphetNetTokenizer
<
source
>
(
vocab_file: str
do_lower_case: typing.Optional[bool] = True
do_basic_tokenize: typing.Optional[bool] = True
never_split: typing.Optional[typing.Iterable] = None
unk_token: typing.Optional[str] = '[UNK]'
sep_token: typing.Optional[str] = '[SEP]'
x_sep_token: typing.Optional[str] = '[X_SEP]'
pad_token: typing.Optional[str] = '[PAD]'
mask_token: typing.Optional[str] = '[MASK]'
tokenize_chinese_chars: typing.Optional[bool] = True
strip_accents: typing.Optional[bool] = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (bool, optional, defaults to True) —
Whether or not to do basic tokenization before WordPiece.
never_split (Iterable, optional) —
Collection of tokens which will never be split during tokenization. Only has an effect when
do_basic_tokenize=True
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
x_sep_token (str, optional, defaults to "[X_SEP]") —
Special second separator token, which can be generated by ProphetNetForConditionalGeneration. It is
used to separate bullet-point-like sentences in summarization.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original BERT).
Construct a ProphetNetTokenizer. Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A BERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
convert_tokens_to_string
<
source
>
(
tokens: str
)
Converts a sequence of tokens (string) into a single string.
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A ProphetNet
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
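For illustration, a minimal sketch (the checkpoint name is an assumption; any ProphetNet vocabulary would do):
from transformers import ProphetNetTokenizer
tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
ids_b = tokenizer.encode("How are you?", add_special_tokens=False)
# single sequence: all token type ids are 0
print(tokenizer.create_token_type_ids_from_sequences(ids_a))
# sequence pair: 0s for the first sequence (and its separator), 1s for the second
print(tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b))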
get_special_tokens_mask
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: typing.Optional[bool] = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
ProphetNet specific outputs
class transformers.models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqLMOutput
<
source
>
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
logits_ngram: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
decoder_ngram_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
decoder_ngram_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, decoder_sequence_length, config.vocab_size)) —
Prediction scores of the main stream language modeling head (scores for each vocabulary token before
SoftMax).
logits_ngram (torch.FloatTensor of shape (batch_size, ngram * decoder_sequence_length, config.vocab_size)) —
Prediction scores of the predict stream language modeling head (scores for each vocabulary token before
SoftMax).
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) —
List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_attn_heads, decoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, decoder_sequence_length, hidden_size).
Hidden-states of main stream of the decoder at the output of each layer plus the initial embedding outputs.
decoder_ngram_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, ngram * decoder_sequence_length, hidden_size).
Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding
outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
decoder_ngram_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the predict stream of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, encoder_sequence_length, decoder_sequence_length).
Attentions weights of the cross-attention layer of the decoder, after the attention softmax, used to
compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, encoder_sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, encoder_sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, encoder_sequence_length, encoder_sequence_length). Attentions weights of the encoder, after the attention
softmax, used to compute the weighted average in the self-attention heads.
Base class for sequence-to-sequence language models outputs.
class transformers.models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqModelOutput
<
source
>
(
last_hidden_state: FloatTensor
last_hidden_state_ngram: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
decoder_ngram_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
decoder_ngram_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
last_hidden_state (torch.FloatTensor of shape (batch_size, decoder_sequence_length, hidden_size)) —
Sequence of main stream hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
last_hidden_state_ngram (torch.FloatTensor of shape (batch_size,ngram * decoder_sequence_length, config.vocab_size), optional) —
Sequence of predict stream hidden-states at the output of the last layer of the decoder of the model.
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) —
List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_attn_heads, decoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, decoder_sequence_length, hidden_size).
Hidden-states of main stream of the decoder at the output of each layer plus the initial embedding outputs.
decoder_ngram_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, ngram * decoder_sequence_length, hidden_size).
Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding
outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
decoder_ngram_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the predict stream of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, encoder_sequence_length, decoder_sequence_length).
Attentions weights of the cross-attention layer of the decoder, after the attention softmax, used to
compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, encoder_sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, encoder_sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, encoder_sequence_length, encoder_sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
Base class for model encoder’s outputs that also contains pre-computed hidden states that can speed up sequential
decoding.
class transformers.models.prophetnet.modeling_prophetnet.ProphetNetDecoderModelOutput
<
source
>
(
last_hidden_state: FloatTensor
last_hidden_state_ngram: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
hidden_states_ngram: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
ngram_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
last_hidden_state (torch.FloatTensor of shape (batch_size, decoder_sequence_length, hidden_size)) —
Sequence of main stream hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
last_hidden_state_ngram (torch.FloatTensor of shape (batch_size, ngram * decoder_sequence_length, config.vocab_size)) —
Sequence of predict stream hidden-states at the output of the last layer of the decoder of the model.
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) —
List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_attn_heads, decoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, decoder_sequence_length, hidden_size).
Hidden-states of main stream of the decoder at the output of each layer plus the initial embedding outputs.
hidden_states_ngram (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, ngram * decoder_sequence_length, hidden_size).
Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding
outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
ngram_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the predict stream of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, encoder_sequence_length, decoder_sequence_length).
Attentions weights of the cross-attention layer of the decoder, after the attention softmax, used to
compute the weighted average in the cross-attention heads.
Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding).
class transformers.models.prophetnet.modeling_prophetnet.ProphetNetDecoderLMOutput
<
source
>
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
logits_ngram: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
hidden_states_ngram: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
ngram_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, decoder_sequence_length, config.vocab_size)) —
Prediction scores of the main stream language modeling head (scores for each vocabulary token before
SoftMax).
logits_ngram (torch.FloatTensor of shape (batch_size, ngram * decoder_sequence_length, config.vocab_size)) —
Prediction scores of the predict stream language modeling head (scores for each vocabulary token before
SoftMax).
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) —
List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_attn_heads, decoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, decoder_sequence_length, hidden_size).
Hidden-states of main stream of the decoder at the output of each layer plus the initial embedding outputs.
hidden_states_ngram (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, ngram * decoder_sequence_length, hidden_size).
Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding
outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
ngram_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the predict stream of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, encoder_sequence_length, decoder_sequence_length).
Attentions weights of the cross-attention layer of the decoder, after the attention softmax, used to
compute the weighted average in the cross-attention heads.
Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding).
ProphetNetModel
class transformers.ProphetNetModel
<
source
>
(
config: ProphetNetConfig
)
Parameters
config (ProphetNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ProphetNet Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
Original ProphetNet code can be found here. Checkpoints were converted
from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the
file convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.Tensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
decoder_inputs_embeds: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
ProphetNet uses the eos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ProphetNetConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, decoder_sequence_length, hidden_size)) — Sequence of main stream hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
last_hidden_state_ngram (torch.FloatTensor of shape (batch_size,ngram * decoder_sequence_length, config.vocab_size), optional) — Sequence of predict stream hidden-states at the output of the last layer of the decoder of the model.
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_attn_heads, decoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, decoder_sequence_length, hidden_size).
Hidden-states of main stream of the decoder at the output of each layer plus the initial embedding outputs.
decoder_ngram_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, ngram * decoder_sequence_length, hidden_size).
Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding
outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
decoder_ngram_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the predict stream of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, encoder_sequence_length, decoder_sequence_length).
Attentions weights of the cross-attention layer of the decoder, after the attention softmax, used to
compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, encoder_sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, encoder_sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, encoder_sequence_length, encoder_sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The ProphetNetModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ProphetNetModel
tokenizer = AutoTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
model = ProphetNetModel.from_pretrained("microsoft/prophetnet-large-uncased")
input_ids = tokenizer(
... "Studies have been shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state # main stream hidden states
last_hidden_states_ngram = outputs.last_hidden_state_ngram # predict hidden states
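A brief, hedged sketch (not part of the original documentation) that builds on the output fields described above by requesting the optional tensors and inspecting the main stream versus the predict (ngram) stream:
outputs = model(
    input_ids=input_ids,
    decoder_input_ids=decoder_input_ids,
    output_hidden_states=True,
    output_attentions=True,
)
print(outputs.last_hidden_state.shape)          # main stream: (batch_size, decoder_sequence_length, hidden_size)
print(outputs.last_hidden_state_ngram.shape)    # predict stream hidden-states for the ngram tokens
print(len(outputs.decoder_hidden_states))       # embedding output plus one entry per decoder layer
print(outputs.encoder_last_hidden_state.shape)  # (batch_size, encoder_sequence_length, hidden_size)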
ProphetNetEncoder
class transformers.ProphetNetEncoder
(
config: ProphetNetConfig
word_embeddings: Embedding = None
)
Parameters
config (ProphetNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The standalone encoder part of the ProphetNetModel.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
Original ProphetNet code can be found here. Checkpoints were converted
from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the
file convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
word_embeddings (torch.nn.Embedding of shape (config.vocab_size, config.hidden_size), optional):
The word embedding parameters. This can be used to initialize ProphetNetEncoder with pre-defined word
embeddings instead of randomly initialized word embeddings.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ProphetNetConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The ProphetNetEncoder forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ProphetNetEncoder
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
model = ProphetNetEncoder.from_pretrained("patrickvonplaten/prophetnet-large-uncased-standalone")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
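As a follow-up to the head_mask and output_* arguments documented above, here is a hedged sketch (illustrative only; the config attribute names num_encoder_layers and num_encoder_attention_heads are assumptions) that masks one attention head in the first layer and requests per-layer outputs:
head_mask = torch.ones(model.config.num_encoder_layers, model.config.num_encoder_attention_heads)
head_mask[0, 0] = 0  # 0 masks this head, 1 leaves it active
outputs = model(**inputs, head_mask=head_mask, output_hidden_states=True, output_attentions=True)
print(len(outputs.hidden_states))   # embedding output plus one entry per encoder layer
print(outputs.attentions[0].shape)  # (batch_size, num_heads, sequence_length, sequence_length)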
ProphetNetDecoder
class transformers.ProphetNetDecoder
(
config: ProphetNetConfig
word_embeddings: typing.Optional[torch.nn.modules.sparse.Embedding] = None
)
Parameters
config (ProphetNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The standalone decoder part of the ProphetNetModel.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
Original ProphetNet code can be found here. Checkpoints were converted
from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the
file convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
word_embeddings (torch.nn.Embedding of shape (config.vocab_size, config.hidden_size), optional):
The word embedding parameters. This can be used to initialize ProphetNetDecoder with pre-defined word
embeddings instead of randomly initialized word embeddings.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.prophetnet.modeling_prophetnet.ProphetNetDecoderModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.models.prophetnet.modeling_prophetnet.ProphetNetDecoderModelOutput or tuple(torch.FloatTensor)
A transformers.models.prophetnet.modeling_prophetnet.ProphetNetDecoderModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ProphetNetConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, decoder_sequence_length, hidden_size)) — Sequence of main stream hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
last_hidden_state_ngram (torch.FloatTensor of shape (batch_size, ngram * decoder_sequence_length, hidden_size)) — Sequence of predict stream hidden-states at the output of the last layer of the decoder of the model.
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_attn_heads, decoder_sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, decoder_sequence_length, hidden_size).
Hidden-states of main stream of the decoder at the output of each layer plus the initial embedding outputs.
ngram_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, ngram * decoder_sequence_length, hidden_size).
Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding
outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
ngram_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the predict stream of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, encoder_sequence_length, decoder_sequence_length).
Attentions weights of the cross-attention layer of the decoder, after the attention softmax, used to
compute the weighted average in the cross-attention heads.
The ProphetNetDecoder forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ProphetNetDecoder
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
model = ProphetNetDecoder.from_pretrained("microsoft/prophetnet-large-uncased", add_cross_attention=False)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
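The use_cache and past_key_values arguments listed above can be exercised on the standalone decoder as well. This is a hedged sketch (not from the original docs; the appended token id is arbitrary and purely illustrative):
first = model(**inputs, use_cache=True)
past = first.past_key_values  # cached key/value states, one entry per layer
new_token = torch.tensor([[tokenizer.sep_token_id]])  # arbitrary next token, for illustration only
step = model(input_ids=new_token, past_key_values=past, use_cache=True)
print(step.last_hidden_state.shape)  # only the new position: (batch_size, 1, hidden_size)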
ProphetNetForConditionalGeneration
class transformers.ProphetNetForConditionalGeneration
(
config: ProphetNetConfig
)
Parameters
config (ProphetNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The ProphetNet Model with a language modeling head. Can be used for sequence generation tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
Original ProphetNet code can be found here. Checkpoints were converted
from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the
file convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.Tensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
decoder_inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
ProphetNet uses the eos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the sequence-to-sequence language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked); the loss is only computed for
labels in [0, ..., config.vocab_size - 1].
Returns
transformers.models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ProphetNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, decoder_sequence_length, config.vocab_size)) — Prediction scores of the main stream language modeling head (scores for each vocabulary token before
SoftMax).
logits_ngram (torch.FloatTensor of shape (batch_size, ngram * decoder_sequence_length, config.vocab_size)) — Prediction scores of the predict stream language modeling head (scores for each vocabulary token before
SoftMax).
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_attn_heads, decoder_sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, decoder_sequence_length, hidden_size).
Hidden-states of main stream of the decoder at the output of each layer plus the initial embedding outputs.
decoder_ngram_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, ngram * decoder_sequence_length, hidden_size).
Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding
outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
decoder_ngram_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the predict stream of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, encoder_sequence_length, decoder_sequence_length).
Attentions weights of the cross-attention layer of the decoder, after the attention softmax, used to
compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, encoder_sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, encoder_sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, encoder_sequence_length, encoder_sequence_length). Attentions weights of the encoder, after the attention
softmax, used to compute the weighted average in the self-attention heads.
The ProphetNetForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ProphetNetForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
model = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased")
input_ids = tokenizer(
... "Studies have been shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
logits_next_token = outputs.logits # logits to predict next token as usual
logits_ngram_next_tokens = outputs.logits_ngram # logits to predict 2nd, 3rd, ... next tokens
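Since this class carries the language modeling head, it also supports the usual generation and training entry points. The following is a hedged sketch (not part of the original example; beam size, lengths, and the target sentence are illustrative):
# Generation via the inherited generate() method
generated = model.generate(input_ids, num_beams=4, max_length=20, early_stopping=True)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))

# Training-style call: passing labels returns the language modeling loss documented above
labels = tokenizer("Studies show that dog owners are healthier", return_tensors="pt").input_ids
loss = model(input_ids=input_ids, labels=labels).loss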
ProphetNetForCausalLM
class transformers.ProphetNetForCausalLM
(
config: ProphetNetConfig
)
Parameters
config (ProphetNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The standalone decoder part of the ProphetNetModel with a language modeling head on top. The model can be used for causal language modeling.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
Original ProphetNet code can be found here. Checkpoints were converted
from original Fairseq checkpoints. For more information on the checkpoint conversion, please take a look at the
file convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.prophetnet.modeling_prophetnet.ProphetNetDecoderLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are
ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.models.prophetnet.modeling_prophetnet.ProphetNetDecoderLMOutput or tuple(torch.FloatTensor)
A transformers.models.prophetnet.modeling_prophetnet.ProphetNetDecoderLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ProphetNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, decoder_sequence_length, config.vocab_size)) — Prediction scores of the main stream language modeling head (scores for each vocabulary token before
SoftMax).
logits_ngram (torch.FloatTensor of shape (batch_size, ngram * decoder_sequence_length, config.vocab_size)) — Prediction scores of the predict stream language modeling head (scores for each vocabulary token before
SoftMax).
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_attn_heads, decoder_sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, decoder_sequence_length, hidden_size).
Hidden-states of main stream of the decoder at the output of each layer plus the initial embedding outputs.
ngram_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, ngram * decoder_sequence_length, hidden_size).
Hidden-states of the predict stream of the decoder at the output of each layer plus the initial embedding
outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
ngram_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, decoder_sequence_length, decoder_sequence_length).
Attentions weights of the predict stream of the decoder, after the attention softmax, used to compute the
weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_attn_heads, encoder_sequence_length, decoder_sequence_length).
Attentions weights of the cross-attention layer of the decoder, after the attention softmax, used to
compute the weighted average in the cross-attention heads.
The ProphetNetForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, ProphetNetForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
model = ProphetNetForCausalLM.from_pretrained("microsoft/prophetnet-large-uncased")
assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# Model can also be used with EncoderDecoder framework
from transformers import BertTokenizer, EncoderDecoderModel, AutoTokenizer
import torch
tokenizer_enc = BertTokenizer.from_pretrained("bert-large-uncased")
tokenizer_dec = AutoTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
... "bert-large-uncased", "microsoft/prophetnet-large-uncased"
... )
ARTICLE = (
... "the us state department said wednesday it had received no "
... "formal word from bolivia that it was expelling the us ambassador there "
... "but said the charges made against him are `` baseless ."
... )
input_ids = tokenizer_enc(ARTICLE, return_tensors="pt").input_ids
labels = tokenizer_dec(
... "us rejects charges against its ambassador in bolivia", return_tensors="pt"
... ).input_ids
outputs = model(input_ids=input_ids, decoder_input_ids=labels[:, :-1], labels=labels[:, 1:])
loss = outputs.loss
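As a small, hedged addition (not in the original example), the returned cross-entropy loss can be turned into a rough perplexity estimate; note this is exact only when the loss is a plain mean token cross-entropy:
perplexity = torch.exp(loss)  # rough perplexity estimate from the returned loss
print(perplexity.item())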
Wav2Vec2-Conformer
Overview
The Wav2Vec2-Conformer was added to an updated version of fairseq S2T: Fast Speech-to-Text Modeling with fairseq by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
The official results of the model can be found in Table 3 and Table 4 of the paper.
The Wav2Vec2-Conformer weights were released by the Meta AI team within the Fairseq library.
Tips:
Wav2Vec2-Conformer follows the same architecture as Wav2Vec2, but replaces the Attention-block with a Conformer-block
as introduced in Conformer: Convolution-augmented Transformer for Speech Recognition.
For the same number of layers, Wav2Vec2-Conformer requires more parameters than Wav2Vec2, but also yields
an improved word error rate.
Wav2Vec2-Conformer uses the same tokenizer and feature extractor as Wav2Vec2.
Wav2Vec2-Conformer can use either no relative position embeddings, Transformer-XL-like position embeddings, or
rotary position embeddings by setting the correct config.position_embeddings_type.
This model was contributed by patrickvonplaten.
The original code can be found here.
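Because, as noted above, Wav2Vec2-Conformer reuses the Wav2Vec2 tokenizer and feature extractor, a minimal loading sketch might look as follows (hedged and illustrative: it assumes the facebook/wav2vec2-conformer-rel-pos-large checkpoint referenced further down this page and feeds a random waveform in place of real audio):
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ConformerModel

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large")
model = Wav2Vec2ConformerModel.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large")

waveform = torch.randn(16000).numpy()  # one second of fake 16 kHz audio, for illustration only
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, num_frames, hidden_size)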
Documentation resources
Audio classification task guide
Automatic speech recognition task guide
Wav2Vec2ConformerConfig
class transformers.Wav2Vec2ConformerConfig
(
vocab_size = None
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout = 0.1
activation_dropout = 0.1
attention_dropout = 0.1
feat_proj_dropout = 0.0
feat_quantizer_dropout = 0.0
final_dropout = 0.1
layerdrop = 0.1
initializer_range = 0.02
layer_norm_eps = 1e-05
feat_extract_norm = 'group'
feat_extract_activation = 'gelu'
conv_dim = (512, 512, 512, 512, 512, 512, 512)
conv_stride = (5, 2, 2, 2, 2, 2, 2)
conv_kernel = (10, 3, 3, 3, 3, 2, 2)
conv_bias = False
num_conv_pos_embeddings = 128
num_conv_pos_embedding_groups = 16
apply_spec_augment = True
mask_time_prob = 0.05
mask_time_length = 10
mask_time_min_masks = 2
mask_feature_prob = 0.0
mask_feature_length = 10
mask_feature_min_masks = 0
num_codevectors_per_group = 320
num_codevector_groups = 2
contrastive_logits_temperature = 0.1
num_negatives = 100
codevector_dim = 256
proj_codevector_dim = 256
diversity_loss_weight = 0.1
ctc_loss_reduction = 'sum'
ctc_zero_infinity = False
use_weighted_layer_sum = False
classifier_proj_size = 256
tdnn_dim = (512, 512, 512, 512, 1500)
tdnn_kernel = (5, 3, 3, 1, 1)
tdnn_dilation = (1, 2, 3, 1, 1)
xvector_output_dim = 512
pad_token_id = 0
bos_token_id = 1
eos_token_id = 2
add_adapter = False
adapter_kernel_size = 3
adapter_stride = 2
num_adapter_layers = 3
output_hidden_size = None
position_embeddings_type = 'relative'
rotary_embedding_base = 10000
max_source_positions = 5000
conv_depthwise_kernel_size = 31
conformer_conv_dropout = 0.1
**kwargs
)
Parameters
vocab_size (int, optional) —
Vocabulary size of the Wav2Vec2Conformer model. Defines the number of different tokens that can be
represented by the input_ids passed when calling the forward method of Wav2Vec2ConformerModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
final_dropout (float, optional, defaults to 0.1) —
The dropout probability for the final projection layer of Wav2Vec2ConformerForCTC.
layerdrop (float, optional, defaults to 0.1) —
The LayerDrop probability. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more
details.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-05) —
The epsilon used by the layer normalization layers.
feat_extract_norm (str, optional, defaults to "group") —
The norm to be applied to 1D convolutional layers in feature encoder. One of "group" for group
normalization of only the first 1D convolutional layer or "layer" for layer normalization of all 1D
convolutional layers.
feat_proj_dropout (float, optional, defaults to 0.0) —
The dropout probability for output of the feature encoder.
feat_extract_activation (str, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the 1D convolutional layers of the feature
extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
feat_quantizer_dropout (float, optional, defaults to 0.0) —
The dropout probability for quantized feature encoder states.
conv_dim (Tuple[int] or List[int], optional, defaults to (512, 512, 512, 512, 512, 512, 512)) —
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
feature encoder. The length of conv_dim defines the number of 1D convolutional layers.
conv_stride (Tuple[int] or List[int], optional, defaults to (5, 2, 2, 2, 2, 2, 2)) —
A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length
of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.
conv_kernel (Tuple[int] or List[int], optional, defaults to (10, 3, 3, 3, 3, 2, 2)) —
A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
length of conv_kernel defines the number of convolutional layers and has to match the length of
conv_dim.
conv_bias (bool, optional, defaults to False) —
Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (int, optional, defaults to 128) —
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
embeddings layer.
num_conv_pos_embedding_groups (int, optional, defaults to 16) —
Number of groups of 1D convolutional positional embeddings layer.
apply_spec_augment (bool, optional, defaults to True) —
Whether to apply SpecAugment data augmentation to the outputs of the feature encoder. For reference see
SpecAugment: A Simple Data Augmentation Method for Automatic Speech
Recognition.
mask_time_prob (float, optional, defaults to 0.05) —
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
procedure generates mask_time_prob*len(time_axis)/mask_time_length independent masks over the axis. If
reasoning from the probability of each feature vector to be chosen as the start of the vector span to be
masked, mask_time_prob should be prob_vector_start*mask_time_length. Note that overlap may decrease the
actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_time_length (int, optional, defaults to 10) —
Length of vector span along the time axis.
mask_time_min_masks (int, optional, defaults to 2) —
The minimum number of masks of length mask_time_length generated along the time axis, each time step,
irrespectively of mask_time_prob. Only relevant if
mask_time_prob*len(time_axis)/mask_time_length < mask_time_min_masks.
mask_feature_prob (float, optional, defaults to 0.0) —
Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
masking procedure generates mask_feature_prob*len(feature_axis)/mask_feature_length independent masks over
the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector
span to be masked, mask_feature_prob should be prob_vector_start*mask_feature_length. Note that overlap
may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_feature_length (int, optional, defaults to 10) —
Length of vector span along the feature axis.
mask_feature_min_masks (int, optional, defaults to 0) —
The minimum number of masks of length mask_feature_length generated along the feature axis, each time
step, irrespectively of mask_feature_prob. Only relevant if
mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks.
num_codevectors_per_group (int, optional, defaults to 320) —
Number of entries in each quantization codebook (group).
num_codevector_groups (int, optional, defaults to 2) —
Number of codevector groups for product codevector quantization.
contrastive_logits_temperature (float, optional, defaults to 0.1) —
The temperature kappa in the contrastive loss.
feat_quantizer_dropout (float, optional, defaults to 0.0) —
The dropout probability for the output of the feature encoder that’s used by the quantizer.
num_negatives (int, optional, defaults to 100) —
Number of negative samples for the contrastive loss.
codevector_dim (int, optional, defaults to 256) —
Dimensionality of the quantized feature vectors.
proj_codevector_dim (int, optional, defaults to 256) —
Dimensionality of the final projection of both the quantized and the transformer features.
diversity_loss_weight (float, optional, defaults to 0.1) —
The weight of the codebook diversity loss component.
ctc_loss_reduction (str, optional, defaults to "sum") —
Specifies the reduction to apply to the output of torch.nn.CTCLoss. Only relevant when training an
instance of Wav2Vec2ConformerForCTC.
ctc_zero_infinity (bool, optional, defaults to False) —
Whether to zero infinite losses and the associated gradients of torch.nn.CTCLoss. Infinite losses mainly
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of Wav2Vec2ConformerForCTC.
use_weighted_layer_sum (bool, optional, defaults to False) —
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
instance of Wav2Vec2ConformerForSequenceClassification.
classifier_proj_size (int, optional, defaults to 256) —
Dimensionality of the projection before token mean-pooling for classification.
tdnn_dim (Tuple[int] or List[int], optional, defaults to (512, 512, 512, 512, 1500)) —
A tuple of integers defining the number of output channels of each 1D convolutional layer in the TDNN
module of the XVector model. The length of tdnn_dim defines the number of TDNN layers.
tdnn_kernel (Tuple[int] or List[int], optional, defaults to (5, 3, 3, 1, 1)) —
A tuple of integers defining the kernel size of each 1D convolutional layer in the TDNN module of the
XVector model. The length of tdnn_kernel has to match the length of tdnn_dim.
tdnn_dilation (Tuple[int] or List[int], optional, defaults to (1, 2, 3, 1, 1)) —
A tuple of integers defining the dilation factor of each 1D convolutional layer in TDNN module of the
XVector model. The length of tdnn_dilation has to match the length of tdnn_dim.
xvector_output_dim (int, optional, defaults to 512) —
Dimensionality of the XVector embedding vectors.
add_adapter (bool, optional, defaults to False) —
Whether a convolutional network should be stacked on top of the Wav2Vec2Conformer Encoder. Can be very
useful for warm-starting Wav2Vec2Conformer for SpeechEncoderDecoder models.
adapter_kernel_size (int, optional, defaults to 3) —
Kernel size of the convolutional layers in the adapter network. Only relevant if add_adapter is True.
adapter_stride (int, optional, defaults to 2) —
Stride of the convolutional layers in the adapter network. Only relevant if add_adapter is True.
num_adapter_layers (int, optional, defaults to 3) —
Number of convolutional layers that should be used in the adapter network. Only relevant if add_adapter is True.
output_hidden_size (int, optional) —
Dimensionality of the encoder output layer. If not defined, this defaults to hidden_size. Only relevant
if add_adapter is True.
position_embeddings_type (str, optional, defaults to "relative") —
Can be set to "relative" or "rotary" for relative or rotary position embeddings respectively. If left to
None, no relative position embedding is applied.
rotary_embedding_base (int, optional, defaults to 10000) —
If "rotary" position embeddings are used, defines the size of the embedding base.
max_source_positions (int, optional, defaults to 5000) —
if "relative" position embeddings are used, defines the maximum source input positions.
conv_depthwise_kernel_size (int, defaults to 31) —
Kernel size of the depthwise 1D convolutional layer in Conformer blocks.
conformer_conv_dropout (float, defaults to 0.1) —
The dropout probability for all convolutional layers in Conformer blocks.
This is the configuration class to store the configuration of a Wav2Vec2ConformerModel. It is used to
instantiate a Wav2Vec2Conformer model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the Wav2Vec2Conformer
facebook/wav2vec2-conformer-rel-pos-large
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import Wav2Vec2ConformerConfig, Wav2Vec2ConformerModel
# Initializing a Wav2Vec2Conformer facebook/wav2vec2-conformer-rel-pos-large style configuration
configuration = Wav2Vec2ConformerConfig()
# Initializing a model (with random weights) from the facebook/wav2vec2-conformer-rel-pos-large style configuration
model = Wav2Vec2ConformerModel(configuration)
# Accessing the model configuration
configuration = model.config
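Building on the arguments documented above, a hedged variation of the same example (the chosen values are illustrative, not recommendations) shows how individual settings such as the position embedding type or the optional adapter can be overridden:
# Overriding selected configuration arguments from the list above
custom_config = Wav2Vec2ConformerConfig(
    position_embeddings_type="rotary",  # "relative", "rotary", or None
    add_adapter=True,                   # stack the convolutional adapter on top of the encoder
    num_adapter_layers=3,
    mask_time_prob=0.1,                 # stronger SpecAugment-style time masking
)
model = Wav2Vec2ConformerModel(custom_config)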
Wav2Vec2Conformer specific outputs
class transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput
(
loss: typing.Optional[torch.FloatTensor] = None
projected_states: FloatTensor = None
projected_quantized_states: FloatTensor = None
codevector_perplexity: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
contrastive_loss: typing.Optional[torch.FloatTensor] = None
diversity_loss: typing.Optional[torch.FloatTensor] = None
)
Parameters
loss (optional, returned when sample_negative_indices are passed, torch.FloatTensor of shape (1,)) —
Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official
paper.
projected_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) —
Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked
projected quantized states.
projected_quantized_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) —
Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive
target vectors for contrastive loss.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
contrastive_loss (optional, returned when sample_negative_indices are passed, torch.FloatTensor of shape (1,)) —
The contrastive loss (L_m) as stated in the official paper.
diversity_loss (optional, returned when sample_negative_indices are passed, torch.FloatTensor of shape (1,)) —
The diversity loss (L_d) as stated in the official paper.
Output type of Wav2Vec2ConformerForPreTraining, with potential hidden states and attentions.
Wav2Vec2ConformerModel
class transformers.Wav2Vec2ConformerModel
(
config: Wav2Vec2ConformerConfig
)
Parameters
config (Wav2Vec2ConformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Wav2Vec2Conformer Model transformer outputting raw hidden-states without any specific head on top.
Wav2Vec2Conformer was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch nn.Module subclass. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
mask_time_indices: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Wav2Vec2BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
wav2vec2-conformer-rel-pos-large,
attention_mask should not be passed to avoid degraded performance when doing batched inference. For
such models input_values should simply be padded with 0 and passed without attention_mask. Be aware
that these models also yield slightly different results depending on whether input_values is padded or
not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Wav2Vec2BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Wav2Vec2BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Wav2Vec2ConformerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
extract_features (torch.FloatTensor of shape (batch_size, sequence_length, conv_dim[-1])) — Sequence of extracted feature vectors of the last convolutional layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Wav2Vec2ConformerModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, Wav2Vec2ConformerModel
import torch
from datasets import load_dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
model = Wav2Vec2ConformerModel.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 292, 1024]
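As noted in the attention_mask description above, checkpoints whose feature extractor has config.return_attention_mask == False should receive zero-padded input_values and no attention_mask when doing batched inference. A minimal sketch, assuming (as stated above) that facebook/wav2vec2-conformer-rel-pos-large is such a checkpoint:
from transformers import AutoFeatureExtractor, Wav2Vec2ConformerModel
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large")
model = Wav2Vec2ConformerModel.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large")
# the shorter utterance is zero-padded to the length of the longer one; no attention_mask is passed
inputs = feature_extractor(
... [d["array"] for d in dataset[:2]["audio"]], sampling_rate=sampling_rate, return_tensors="pt", padding=True
... )
with torch.no_grad():
... outputs = model(inputs.input_values)
last_hidden_states = outputs.last_hidden_state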
Wav2Vec2ConformerForCTC
class transformers.Wav2Vec2ConformerForCTC
(
config
target_lang = None
)
Parameters
config (Wav2Vec2ConformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Wav2Vec2Conformer Model with a language modeling head on top for Connectionist Temporal Classification (CTC).
Wav2Vec2Conformer was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch nn.Module subclass. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
wav2vec2-conformer-rel-pos-large,
attention_mask should not be passed to avoid degraded performance when doing batched inference. For
such models input_values should simply be padded with 0 and passed without attention_mask. Be aware
that these models also yield slightly different results depending on whether input_values is padded or
not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, target_length), optional) —
Labels for connectionist temporal classification. Note that target_length has to be smaller or equal to
the sequence length of the output logits. Indices are selected in [-100, 0, ..., config.vocab_size - 1].
All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Wav2Vec2ConformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Wav2Vec2ConformerForCTC forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, Wav2Vec2ConformerForCTC
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe speech
transcription = processor.batch_decode(predicted_ids)
transcription[0]
'MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'
inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids
# compute loss
loss = model(**inputs).loss
round(loss.item(), 2)
64.21
Wav2Vec2ConformerForSequenceClassification
class transformers.Wav2Vec2ConformerForSequenceClassification
(
config
)
Parameters
config (Wav2Vec2ConformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Wav2Vec2Conformer Model with a sequence classification head on top (a linear layer over the pooled output) for
tasks like SUPERB Keyword Spotting.
Wav2Vec2Conformer was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch nn.Module subclass. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
wav2vec2-conformer-rel-pos-large,
attention_mask should not be passed to avoid degraded performance when doing batched inference. For
such models input_values should simply be padded with 0 and passed without attention_mask. Be aware
that these models also yield slightly different results depending on whether input_values is padded or
not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Wav2Vec2ConformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Wav2Vec2ConformerForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, Wav2Vec2ConformerForSequenceClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
model = Wav2Vec2ConformerForSequenceClassification.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.argmax(logits, dim=-1).item()
predicted_label = model.config.id2label[predicted_class_ids]
# compute loss - target_label is e.g. "down"
target_label = model.config.id2label[0]
inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
loss = model(**inputs).loss
Wav2Vec2ConformerForAudioFrameClassification
class transformers.Wav2Vec2ConformerForAudioFrameClassification
(
config
)
Parameters
config (Wav2Vec2ConformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Wav2Vec2Conformer Model with a frame classification head on top for tasks like Speaker Diarization.
Wav2Vec2Conformer was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch nn.Module subclass. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
wav2vec2-conformer-rel-pos-large,
attention_mask should not be passed to avoid degraded performance when doing batched inference. For
such models input_values should simply be padded with 0 and passed without attention_mask. Be aware
that these models also yield slightly different results depending on whether input_values is padded or
not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Wav2Vec2ConformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Wav2Vec2ConformerForAudioFrameClassification forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, Wav2Vec2ConformerForAudioFrameClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
model = Wav2Vec2ConformerForAudioFrameClassification.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt", sampling_rate=sampling_rate)
with torch.no_grad():
... logits = model(**inputs).logits
probabilities = torch.sigmoid(logits[0])
# labels is a one-hot array of shape (num_frames, num_speakers)
labels = (probabilities > 0.5).long()
Wav2Vec2ConformerForXVector
class transformers.Wav2Vec2ConformerForXVector
(
config
)
Parameters
config (Wav2Vec2ConformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Wav2Vec2Conformer Model with an XVector feature extraction head on top for tasks like Speaker Verification.
Wav2Vec2Conformer was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch nn.Module subclass. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.XVectorOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
wav2vec2-conformer-rel-pos-large,
attention_mask should not be passed to avoid degraded performance when doing batched inference. For
such models input_values should simply be padded with 0 and passed without attention_mask. Be aware
that these models also yield slightly different results depending on whether input_values is padded or
not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.XVectorOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.XVectorOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Wav2Vec2ConformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, config.xvector_output_dim)) — Classification hidden states before AMSoftmax.
embeddings (torch.FloatTensor of shape (batch_size, config.xvector_output_dim)) — Utterance embeddings used for vector similarity-based retrieval.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Wav2Vec2ConformerForXVector forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, Wav2Vec2ConformerForXVector
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
model = Wav2Vec2ConformerForXVector.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
# audio file is decoded on the fly
inputs = feature_extractor(
... [d["array"] for d in dataset[:2]["audio"]], sampling_rate=sampling_rate, return_tensors="pt", padding=True
... )
with torch.no_grad():
... embeddings = model(**inputs).embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()
# the resulting embeddings can be used for cosine similarity-based retrieval
cosine_sim = torch.nn.CosineSimilarity(dim=-1)
similarity = cosine_sim(embeddings[0], embeddings[1])
threshold = 0.7 # the optimal threshold is dataset-dependent
if similarity < threshold:
... print("Speakers are not the same!")
Wav2Vec2ConformerForPreTraining
class transformers.Wav2Vec2ConformerForPreTraining
(
config: Wav2Vec2ConformerConfig
)
Parameters
config (Wav2Vec2ConformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Wav2Vec2Conformer Model with a quantizer and VQ head on top.
Wav2Vec2Conformer was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael
Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch nn.Module subclass. Use it as a
regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
mask_time_indices: typing.Optional[torch.BoolTensor] = None
sampled_negative_indices: typing.Optional[torch.BoolTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
wav2vec2-conformer-rel-pos-large,
attention_mask should not be passed to avoid degraded performance when doing batched inference. For
such models input_values should simply be padded with 0 and passed without attention_mask. Be aware
that these models also yield slightly different results depending on whether input_values is padded or
not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
mask_time_indices (torch.BoolTensor of shape (batch_size, sequence_length), optional) —
Indices to mask extracted features for contrastive loss. When in training mode, the model learns to predict
masked extracted features in config.proj_codevector_dim space.
sampled_negative_indices (torch.BoolTensor of shape (batch_size, sequence_length, num_negatives), optional) —
Indices indicating which quantized target vectors are used as negative sampled vectors in contrastive loss.
Required input for pre-training.
Returns
transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Wav2Vec2ConformerConfig) and inputs.
loss (optional, returned when sample_negative_indices are passed, torch.FloatTensor of shape (1,)) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official
paper.
projected_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) — Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked
projected quantized states.
projected_quantized_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) — Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive
target vectors for contrastive loss.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
contrastive_loss (optional, returned when sample_negative_indices are passed, torch.FloatTensor of shape (1,)) — The contrastive loss (L_m) as stated in the official paper.
diversity_loss (optional, returned when sample_negative_indices are passed, torch.FloatTensor of shape (1,)) — The diversity loss (L_d) as stated in the official paper.
The Wav2Vec2ConformerForPreTraining forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ConformerForPreTraining
from transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer import (
... _compute_mask_indices,
... _sample_negative_indices,
... )
from datasets import load_dataset
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large")
model = Wav2Vec2ConformerForPreTraining.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large")
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1
# compute masked indices
batch_size, raw_sequence_length = input_values.shape
sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length).item()
mask_time_indices = _compute_mask_indices(
... shape=(batch_size, sequence_length), mask_prob=0.2, mask_length=2
... )
sampled_negative_indices = _sample_negative_indices(
... features_shape=(batch_size, sequence_length),
... num_negatives=model.config.num_negatives,
... mask_time_indices=mask_time_indices,
... )
mask_time_indices = torch.tensor(data=mask_time_indices, device=input_values.device, dtype=torch.long)
sampled_negative_indices = torch.tensor(
... data=sampled_negative_indices, device=input_values.device, dtype=torch.long
... )
with torch.no_grad():
... outputs = model(input_values, mask_time_indices=mask_time_indices)
# compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states)
cosine_sim = torch.cosine_similarity(outputs.projected_states, outputs.projected_quantized_states, dim=-1)
# show that cosine similarity is much higher than random
cosine_sim[mask_time_indices.to(torch.bool)].mean() > 0.5
tensor(True)
# for contrastive loss training model should be put into train mode
model = model.train()
loss = model(
... input_values, mask_time_indices=mask_time_indices, sampled_negative_indices=sampled_negative_indices
... ).loss
DeBERTa
Overview
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. It is based on Google’s
BERT model released in 2018 and Facebook’s RoBERTa model released in 2019.
It builds on RoBERTa with disentangled attention and enhanced mask decoder training with half of the data used in
RoBERTa.
The abstract from the paper is the following:
Recent progress in pre-trained neural language models has significantly improved the performance of many natural
language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with
disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the
disentangled attention mechanism, where each word is represented using two vectors that encode its content and
position, respectively, and the attention weights among words are computed using disentangled matrices on their
contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to
predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency
of model pretraining and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of
the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9%
(90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and
pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa.
This model was contributed by DeBERTa. This model’s TF 2.0 implementation was
contributed by kamalkraj. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DeBERTa. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Text Classification
A blog post on how to Accelerate Large Model Training using DeepSpeed with DeBERTa.
A blog post on Supercharged Customer Service with Machine Learning with DeBERTa.
DebertaForSequenceClassification is supported by this example script and notebook.
TFDebertaForSequenceClassification is supported by this example script and notebook.
Text classification task guide
Token Classification
DebertaForTokenClassification is supported by this example script and notebook.
TFDebertaForTokenClassification is supported by this example script and notebook.
Token classification chapter of the 🤗 Hugging Face Course.
Byte-Pair Encoding tokenization chapter of the 🤗 Hugging Face Course.
Token classification task guide
Fill-Mask
DebertaForMaskedLM is supported by this example script and notebook.
TFDebertaForMaskedLM is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked language modeling task guide
Question Answering
DebertaForQuestionAnswering is supported by this example script and notebook.
TFDebertaForQuestionAnswering is supported by this example script and notebook.
Question answering chapter of the 🤗 Hugging Face Course.
Question answering task guide
DebertaConfig
class transformers.DebertaConfig
(
vocab_size = 50265
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 0
initializer_range = 0.02
layer_norm_eps = 1e-07
relative_attention = False
max_relative_positions = -1
pad_token_id = 0
position_biased_input = True
pos_att_type = None
pooler_dropout = 0
pooler_hidden_act = 'gelu'
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50265) —
Vocabulary size of the DeBERTa model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling DebertaModel or TFDebertaModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu", "tanh", "gelu_fast", "mish", "linear", "sigmoid" and "gelu_new"
are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 0) —
The vocabulary size of the token_type_ids passed when calling DebertaModel or TFDebertaModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-7) —
The epsilon used by the layer normalization layers.
relative_attention (bool, optional, defaults to False) —
Whether to use relative position encoding.
max_relative_positions (int, optional, defaults to -1) —
The range of relative positions [-max_position_embeddings, max_position_embeddings]. Use the same value
as max_position_embeddings.
pad_token_id (int, optional, defaults to 0) —
The value used to pad input_ids.
position_biased_input (bool, optional, defaults to True) —
Whether to add absolute position embeddings to the content embeddings.
pos_att_type (List[str], optional) —
The type of relative position attention. It can be a combination of ["p2c", "c2p"], e.g. ["p2c"] or
["p2c", "c2p"].
This is the configuration class to store the configuration of a DebertaModel or a TFDebertaModel. It is
used to instantiate a DeBERTa model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the DeBERTa
microsoft/deberta-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import DebertaConfig, DebertaModel
# Initializing a DeBERTa microsoft/deberta-base style configuration
configuration = DebertaConfig()
# Initializing a model (with random weights) from the microsoft/deberta-base style configuration
model = DebertaModel(configuration)
# Accessing the model configuration
configuration = model.config
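The disentangled-attention options described above (relative_attention, max_relative_positions, pos_att_type, position_biased_input) can be set explicitly when building a configuration. A minimal sketch with illustrative values:
from transformers import DebertaConfig, DebertaModel
# Enable relative (disentangled) position attention with both position-to-content and content-to-position terms
configuration = DebertaConfig(
... relative_attention=True,
... max_relative_positions=-1,  # -1 falls back to max_position_embeddings
... pos_att_type=["p2c", "c2p"],
... position_biased_input=False,
... )
model = DebertaModel(configuration)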
DebertaTokenizer
class transformers.DebertaTokenizer
(
vocab_file
merges_file
errors = 'replace'
bos_token = '[CLS]'
eos_token = '[SEP]'
sep_token = '[SEP]'
cls_token = '[CLS]'
unk_token = '[UNK]'
pad_token = '[PAD]'
mask_token = '[MASK]'
add_prefix_space = False
add_bos_token = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "[CLS]") —
The beginning of sequence token.
eos_token (str, optional, defaults to "[SEP]") —
The end of sequence token.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows treating the leading word just like any
other word. (The DeBERTa tokenizer detects the beginning of words by the preceding space.)
add_bos_token (bool, optional, defaults to False) —
Whether or not to add an initial beginning-of-sequence token (bos_token) to the input. This allows treating the
leading word just like any other word.
Construct a DeBERTa tokenizer. Based on byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without a leading space) or not:
from transformers import DebertaTokenizer
tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base")
tokenizer("Hello world")["input_ids"]
[1, 31414, 232, 2]
tokenizer(" Hello world")["input_ids"]
[1, 20920, 232, 2]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one).
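For instance, a minimal sketch of the add_prefix_space workaround mentioned above (keeping in mind the possible performance impact):
from transformers import DebertaTokenizer
tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base", add_prefix_space=True)
# "Hello" is now encoded as if it were preceded by a space, like any other word
tokenizer("Hello world")["input_ids"]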
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A DeBERTa sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
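A minimal sketch of calling this method directly on token IDs that were encoded without special tokens:
from transformers import DebertaTokenizer
tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base")
ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
ids_b = tokenizer.encode("How are you?", add_special_tokens=False)
# single sequence: [CLS] X [SEP]
tokenizer.build_inputs_with_special_tokens(ids_a)
# pair of sequences: [CLS] A [SEP] B [SEP]
tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)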
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model or encode_plus methods.
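A minimal sketch, assuming the input IDs were produced with special tokens already added:
from transformers import DebertaTokenizer
tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base")
ids = tokenizer.encode("Hello world")  # already includes [CLS] and [SEP]
# returns 1 for special tokens and 0 for regular sequence tokens
tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)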
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A DeBERTa
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
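A minimal sketch producing the token type IDs for a sequence pair (the 0/1 pattern shown above):
from transformers import DebertaTokenizer
tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base")
ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
ids_b = tokenizer.encode("How are you?", add_special_tokens=False)
# 0s cover [CLS] A [SEP], 1s cover B [SEP]
tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)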
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
DebertaTokenizerFast
class transformers.DebertaTokenizerFast
(
vocab_file = None
merges_file = None
tokenizer_file = None
errors = 'replace'
bos_token = '[CLS]'
eos_token = '[SEP]'
sep_token = '[SEP]'
cls_token = '[CLS]'
unk_token = '[UNK]'
pad_token = '[PAD]'
mask_token = '[MASK]'
add_prefix_space = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
tokenizer_file (str, optional) —
The path to a tokenizer file to use instead of the vocab file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "[CLS]") —
The beginning of sequence token.
eos_token (str, optional, defaults to "[SEP]") —
The end of sequence token.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows treating the leading word just like any
other word. (The DeBERTa tokenizer detects the beginning of words by the preceding space.)
Construct a “fast” DeBERTa tokenizer (backed by HuggingFace’s tokenizers library). Based on byte-level
Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without a leading space) or not:
from transformers import DebertaTokenizerFast
tokenizer = DebertaTokenizerFast.from_pretrained("microsoft/deberta-base")
tokenizer("Hello world")["input_ids"]
[1, 31414, 232, 2]
tokenizer(" Hello world")["input_ids"]
[1, 20920, 232, 2]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer, but since
the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True.
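A minimal sketch of passing pre-split words to the fast tokenizer, following the is_split_into_words note above:
from transformers import DebertaTokenizerFast
# must be instantiated with add_prefix_space=True for pre-tokenized (word-split) inputs
tokenizer = DebertaTokenizerFast.from_pretrained("microsoft/deberta-base", add_prefix_space=True)
encoding = tokenizer(["Hello", "world"], is_split_into_words=True)
encoding["input_ids"]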
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A DeBERTa sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A DeBERTa
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
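A minimal sketch along the same lines (again assuming microsoft/deberta-base), showing the mask for a pair and for a single sequence:
from transformers import DebertaTokenizerFast

tokenizer = DebertaTokenizerFast.from_pretrained("microsoft/deberta-base")
ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
ids_b = tokenizer.encode("How are you?", add_special_tokens=False)

# Pair of sequences: 0s cover [CLS] A [SEP], 1s cover B [SEP], matching the diagram above.
pair_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
# Single sequence: only the 0s portion is returned.
single_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a)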
DebertaModel
class transformers.DebertaModel
(
config
)
Parameters
config (DebertaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare DeBERTa Model transformer outputting raw hidden-states without any specific head on top.
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. It is built on top of BERT/RoBERTa with two
improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms
BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch
documentation for all matters related to general usage and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DebertaConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DebertaModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, DebertaModel
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = DebertaModel.from_pretrained("microsoft/deberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
DebertaPreTrainedModel
class transformers.DebertaPreTrainedModel
(
config: PretrainedConfig
*inputs
**kwargs
)
An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
models.
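As a quick sketch, every task-specific DeBERTa class on this page derives from this base class, so they all share the same loading and saving interface (the local directory name below is just an example):
from transformers import DebertaModel, DebertaPreTrainedModel

model = DebertaModel.from_pretrained("microsoft/deberta-base")
# All DeBERTa model classes inherit from DebertaPreTrainedModel.
assert isinstance(model, DebertaPreTrainedModel)
# The shared interface also provides save_pretrained() for writing the weights and config locally.
model.save_pretrained("./deberta-base-local")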
DebertaForMaskedLM
class transformers.DebertaForMaskedLM
(
config
)
Parameters
config (DebertaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeBERTa Model with a language modeling head on top.
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. It is built on top of BERT/RoBERTa with two
improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms
BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch
documentation for all matters related to general usage and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DebertaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DebertaForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, DebertaForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("lsanochkin/deberta-large-feedback")
model = DebertaForMaskedLM.from_pretrained("lsanochkin/deberta-large-feedback")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
tokenizer.decode(predicted_token_id)
' Paris'
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
round(outputs.loss.item(), 2)
0.54
DebertaForSequenceClassification
class transformers.DebertaForSequenceClassification
(
config
)
Parameters
config (DebertaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. It is built on top of BERT/RoBERTa with two
improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms
BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch
documentation for all matters related to general usage and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DebertaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DebertaForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, DebertaForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = DebertaForSequenceClassification.from_pretrained("microsoft/deberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = DebertaForSequenceClassification.from_pretrained("microsoft/deberta-base", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, DebertaForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = DebertaForSequenceClassification.from_pretrained("microsoft/deberta-base", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = DebertaForSequenceClassification.from_pretrained(
... "microsoft/deberta-base", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
DebertaForTokenClassification
class transformers.DebertaForTokenClassification
(
config
)
Parameters
config (DebertaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. It is built on top of BERT/RoBERTa with two
improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms
BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch
documentation for all matters related to general usage and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DebertaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DebertaForTokenClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, DebertaForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = DebertaForTokenClassification.from_pretrained("microsoft/deberta-base")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
DebertaForQuestionAnswering
class transformers.DebertaForQuestionAnswering
(
config
)
Parameters
config (DebertaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layers on top of the hidden-states output to compute span start logits and span end logits).
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. It is built on top of BERT/RoBERTa with two
improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms
BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch
documentation for all matters related to general usage and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DebertaConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DebertaForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, DebertaForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("Palak/microsoft_deberta-large_squad")
model = DebertaForQuestionAnswering.from_pretrained("Palak/microsoft_deberta-large_squad")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
' a nice puppet'
# target is "nice puppet"
target_start_index = torch.tensor([12])
target_end_index = torch.tensor([14])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
round(loss.item(), 2)
0.14
TFDebertaModel
class transformers.TFDebertaModel
(
*args
**kwargs
)
Parameters
config (DebertaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare DeBERTa Model transformer outputting raw hidden-states without any specific head on top.
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. It is built on top of BERT/RoBERTa with two
improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms
BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0
documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
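As an illustrative sketch of the three possibilities above (using the kamalkraj/deberta-base checkpoint from the example further down this page):
from transformers import AutoTokenizer, TFDebertaModel

tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-base")
model = TFDebertaModel.from_pretrained("kamalkraj/deberta-base")
encoded = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1. a single tensor with input_ids only
out1 = model(encoded["input_ids"])
# 2. a list of tensors, in the order given in the docstring
out2 = model([encoded["input_ids"], encoded["attention_mask"]])
# 3. a dictionary mapping input names to tensors
out3 = model({"input_ids": encoded["input_ids"], "attention_mask": encoded["attention_mask"]})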
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray]; each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DebertaConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDebertaModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFDebertaModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-base")
model = TFDebertaModel.from_pretrained("kamalkraj/deberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
TFDebertaPreTrainedModel
class transformers.TFDebertaPreTrainedModel
(
*args
**kwargs
)
An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
models.
call
(
inputs
training = None
mask = None
)
Calls the model on new inputs and returns the outputs as tensors.
In this case call() just reapplies
all ops in the graph to the new inputs
(e.g. build a new computational graph from the provided inputs).
Note: This method should not be called directly. It is only meant to be
overridden when subclassing tf.keras.Model.
To call a model on an input, always use the __call__() method,
i.e. model(inputs), which relies on the underlying call() method.
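A tiny sketch of the recommended invocation (the checkpoint name is reused from the examples on this page and the input is a dummy batch, just for illustration):
import tensorflow as tf
from transformers import TFDebertaModel

model = TFDebertaModel.from_pretrained("kamalkraj/deberta-base")
dummy_input_ids = tf.constant([[1, 2, 3, 4]])

# Preferred: go through __call__ (i.e. model(...)), which runs the Keras pre/post-processing steps.
outputs = model(dummy_input_ids)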
TFDebertaForMaskedLM
class transformers.TFDebertaForMaskedLM
(
*args
**kwargs
)
Parameters
config (DebertaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeBERTa Model with a language modeling head on top.
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. It is built on top of BERT/RoBERTa with two
improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms
BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0
documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray]; each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DebertaConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDebertaForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFDebertaForMaskedLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-base")
model = TFDebertaForMaskedLM.from_pretrained("kamalkraj/deberta-base")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf")
logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
# mask labels of non-[MASK] tokens
labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
TFDebertaForSequenceClassification
class transformers.TFDebertaForSequenceClassification
(
*args
**kwargs
)
Parameters
config (DebertaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. It is built on top of BERT/RoBERTa with two
improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms
BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0
documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray]; each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DebertaConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDebertaForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFDebertaForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-base")
model = TFDebertaForSequenceClassification.from_pretrained("kamalkraj/deberta-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFDebertaForSequenceClassification.from_pretrained("kamalkraj/deberta-base", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
TFDebertaForTokenClassification
class transformers.TFDebertaForTokenClassification
(
*args
**kwargs
)
Parameters
config (DebertaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray]; each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DebertaConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDebertaForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFDebertaForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-base")
model = TFDebertaForTokenClassification.from_pretrained("kamalkraj/deberta-base")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
TFDebertaForQuestionAnswering
class transformers.TFDebertaForQuestionAnswering
( *args, **kwargs )
Parameters
config (DebertaConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled
Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It's built
on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two
improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data.
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
start_positions: np.ndarray | tf.Tensor | None = None
end_positions: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray]; each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DebertaConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDebertaForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFDebertaForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-base")
model = TFDebertaForQuestionAnswering.from_pretrained("kamalkraj/deberta-base")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = tf.constant([14])
target_end_index = tf.constant([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
Transformer XL
Overview
The Transformer-XL model was proposed in Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context by Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan
Salakhutdinov. It's a causal (uni-directional) transformer with relative (sinusoidal) positional embeddings which can
reuse previously computed hidden states to attend to a longer context (memory). This model also uses adaptive softmax
inputs and outputs (tied).
The abstract from the paper is the following:
Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the
setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency
beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a
novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the
context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450%
longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+
times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of
bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn
Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably
coherent, novel text articles with thousands of tokens.
Tips:
Transformer-XL uses relative sinusoidal positional embeddings. Padding can be done on the left or on the right. The
original implementation trains on SQuAD with padding on the left, therefore the padding defaults are set to left.
Transformer-XL is one of the few models that has no sequence length limit.
Same as a regular GPT model, but introduces a recurrence mechanism for two consecutive segments (similar to a regular RNN with two consecutive inputs). In this context, a segment is a number of consecutive tokens (for instance 512) that may span across multiple documents, and segments are fed in order to the model.
Basically, the hidden states of the previous segment are concatenated to the current input to compute the attention scores. This allows the model to pay attention to information that was in the previous segment as well as the current one. By stacking multiple attention layers, the receptive field can be increased to multiple previous segments (see the sketch after these tips).
This changes the positional embeddings to relative positional embeddings (as the regular positional embeddings would give the same results for the current input and the current hidden state at a given position) and requires some adjustments in the way attention scores are computed.
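To make the recurrence mechanism above concrete, here is a minimal sketch (not part of the original reference) that encodes two consecutive segments with TransfoXLModel and reuses the returned mems as the memory for the second segment:
from transformers import AutoTokenizer, TransfoXLModel
tokenizer = AutoTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLModel.from_pretrained("transfo-xl-wt103")
# Two consecutive segments of the same document (toy-sized for readability)
segment_1 = tokenizer("The first segment of a long document", return_tensors="pt")
segment_2 = tokenizer("and the text that continues it afterwards", return_tensors="pt")
# First segment: no memory is available yet
outputs_1 = model(input_ids=segment_1["input_ids"])
# Second segment: reuse the cached hidden states (one tensor per layer) so the
# attention can look back beyond the current segment boundary
outputs_2 = model(input_ids=segment_2["input_ids"], mems=outputs_1.mems)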
This model was contributed by thomwolf. The original code can be found here.
Transformer-XL does not work with torch.nn.DataParallel due to a bug in PyTorch; see issue #36035
Documentation resources
Text classification task guide
Causal language modeling task guide
TransfoXLConfig
class transformers.TransfoXLConfig
(
vocab_size = 267735
cutoffs = [20000, 40000, 200000]
d_model = 1024
d_embed = 1024
n_head = 16
d_head = 64
d_inner = 4096
div_val = 4
pre_lnorm = False
n_layer = 18
mem_len = 1600
clamp_len = 1000
same_length = True
proj_share_all_but_first = True
attn_type = 0
sample_softmax = -1
adaptive = True
dropout = 0.1
dropatt = 0.0
untie_r = True
init = 'normal'
init_range = 0.01
proj_init_std = 0.01
init_std = 0.02
layer_norm_epsilon = 1e-05
eos_token_id = 0
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 267735) —
Vocabulary size of the Transformer-XL model. Defines the number of different tokens that can be represented by the
input_ids passed when calling TransfoXLModel or TFTransfoXLModel.
cutoffs (List[int], optional, defaults to [20000, 40000, 200000]) —
Cutoffs for the adaptive softmax.
d_model (int, optional, defaults to 1024) —
Dimensionality of the model’s hidden states.
d_embed (int, optional, defaults to 1024) —
Dimensionality of the embeddings
n_head (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
d_head (int, optional, defaults to 64) —
Dimensionality of the model’s heads.
d_inner (int, optional, defaults to 4096) —
Inner dimension of the feed-forward (FF) layers.
div_val (int, optional, defaults to 4) —
Divisor value for the adaptive input and softmax.
pre_lnorm (boolean, optional, defaults to False) —
Whether or not to apply LayerNorm to the input instead of the output in the blocks.
n_layer (int, optional, defaults to 18) —
Number of hidden layers in the Transformer encoder.
mem_len (int, optional, defaults to 1600) —
Length of the retained previous hidden states (the memory).
clamp_len (int, optional, defaults to 1000) —
Use the same pos embeddings after clamp_len.
same_length (boolean, optional, defaults to True) —
Whether or not to use the same attention length for all tokens.
proj_share_all_but_first (boolean, optional, defaults to True) —
Whether to share all but the first projection layers (True to share, False not to share).
attn_type (int, optional, defaults to 0) —
Attention type. 0 for Transformer-XL, 1 for Shaw et al, 2 for Vaswani et al, 3 for Al Rfou et al.
sample_softmax (int, optional, defaults to -1) —
Number of samples in the sampled softmax.
adaptive (boolean, optional, defaults to True) —
Whether or not to use adaptive softmax.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
dropatt (float, optional, defaults to 0) —
The dropout ratio for the attention probabilities.
untie_r (boolean, optional, defaults to True) —
Whether or not to untie relative position biases.
init (str, optional, defaults to "normal") —
Parameter initializer to use.
init_range (float, optional, defaults to 0.01) —
Parameters initialized by U(-init_range, init_range).
proj_init_std (float, optional, defaults to 0.01) —
Parameters initialized by N(0, proj_init_std).
init_std (float, optional, defaults to 0.02) —
Parameters initialized by N(0, init_std)
layer_norm_epsilon (float, optional, defaults to 1e-5) —
The epsilon to use in the layer normalization layers
This is the configuration class to store the configuration of a TransfoXLModel or a TFTransfoXLModel. It is
used to instantiate a Transformer-XL model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the TransfoXL
transfo-xl-wt103 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import TransfoXLConfig, TransfoXLModel
# Initializing a Transformer XL configuration
configuration = TransfoXLConfig()
# Initializing a model (with random weights) from the configuration
model = TransfoXLModel(configuration)
# Accessing the model configuration
configuration = model.config
TransfoXLTokenizer
class transformers.TransfoXLTokenizer
(
special = None
min_freq = 0
max_size = None
lower_case = False
delimiter = None
vocab_file = None
pretrained_vocab_file: str = None
never_split = None
unk_token = '<unk>'
eos_token = '<eos>'
additional_special_tokens = ['<formula>']
language = 'en'
**kwargs
)
Parameters
special (List[str], optional) —
A list of special tokens (to be treated by the original implementation of this tokenizer).
min_freq (int, optional, defaults to 0) —
The minimum number of times a token has to be present in order to be kept in the vocabulary (otherwise it
will be mapped to unk_token).
max_size (int, optional) —
The maximum size of the vocabulary. If left unset, it will default to the size of the vocabulary found
after excluding the tokens according to the min_freq rule.
lower_case (bool, optional, defaults to False) —
Whether or not to lowercase the input when tokenizing.
delimiter (str, optional) —
The delimiter used between tokens.
vocab_file (str, optional) —
File containing the vocabulary (from the original implementation).
pretrained_vocab_file (str, optional) —
File containing the vocabulary as saved with the save_pretrained() method.
never_split (List[str], optional) —
List of tokens that should never be split. If no list is specified, will simply use the existing special
tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
eos_token (str, optional, defaults to "<eos>") —
The end of sequence token.
additional_special_tokens (List[str], optional, defaults to ["<formula>"]) —
A list of additional special tokens (for the HuggingFace functionality).
language (str, optional, defaults to "en") —
The language of this tokenizer (used for Moses preprocessing).
Construct a Transformer-XL tokenizer adapted from Vocab class in the original
code. The Transformer-XL tokenizer is a word-level tokenizer (no
sub-word tokenization).
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
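As a quick orientation, the following short sketch (not taken from the original reference) shows the word-level behaviour of this tokenizer, using the same transfo-xl-wt103 checkpoint as the examples below:
from transformers import TransfoXLTokenizer
tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
# Word-level tokenization: tokens are whole words, never sub-word pieces
tokens = tokenizer.tokenize("Hello , my dog is cute")
input_ids = tokenizer.convert_tokens_to_ids(tokens)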
save_vocabulary
( save_directory: str, filename_prefix: typing.Optional[str] = None )
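The entry above only lists the signature. As a hedged usage sketch (an assumption about typical usage rather than text from the original reference), the vocabulary is normally written to disk indirectly through save_pretrained(), which calls save_vocabulary() internally:
from transformers import TransfoXLTokenizer
tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
# Writes the vocabulary (and tokenizer configuration) into the target directory
tokenizer.save_pretrained("./transfo-xl-tokenizer")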
TransfoXL specific outputs
class transformers.models.transfo_xl.modeling_transfo_xl.TransfoXLModelOutput
(
last_hidden_state: FloatTensor
mems: typing.List[torch.FloatTensor] = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the model.
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see mems
input) to speed up sequential decoding. The token ids which have their past given to this model should not
be passed as input ids as they have already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding).
class transformers.models.transfo_xl.modeling_transfo_xl.TransfoXLLMHeadModelOutput
(
losses: typing.Optional[torch.FloatTensor] = None
prediction_scores: FloatTensor = None
mems: typing.List[torch.FloatTensor] = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
loss: typing.Optional[torch.FloatTensor] = None
)
Parameters
losses (torch.FloatTensor of shape (batch_size, sequence_length-1), optional, returned when labels is provided) —
Language modeling losses (not reduced).
prediction_scores (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token after SoftMax).
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see mems
input) to speed up sequential decoding. The token ids which have their past given to this model should not
be passed as input ids as they have already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
loss (torch.FloatTensor of shape (), optional, returned when labels is provided) —
Reduced language modeling loss.
Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding).
class transformers.models.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLModelOutput
(
last_hidden_state: tf.Tensor = None
mems: List[tf.Tensor] = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
)
Parameters
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) —
Sequence of hidden-states at the output of the last layer of the model.
mems (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see mems
input) to speed up sequential decoding. The token ids which have their past given to this model should not
be passed as input ids as they have already been computed.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding).
class transformers.models.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLLMHeadModelOutput
(
prediction_scores: tf.Tensor = None
mems: List[tf.Tensor] = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
)
Parameters
losses (tf.Tensor of shape (batch_size, sequence_length-1), optional, returned when labels is provided) —
Language modeling losses (not reduced).
prediction_scores (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) —
Prediction scores of the language modeling head (scores for each vocabulary token after SoftMax).
mems (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see mems
input) to speed up sequential decoding. The token ids which have their past given to this model should not
be passed as input ids as they have already been computed.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding).
TransfoXLModel
class transformers.TransfoXLModel
( config )
Parameters
config (TransfoXLConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Transformer-XL Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
mems: typing.Optional[typing.List[torch.FloatTensor]] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.transfo_xl.modeling_transfo_xl.TransfoXLModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see
mems output below). Can be used to speed up sequential decoding. The token ids which have their mems
given to this model should not be passed as input_ids as they have already been computed.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.transfo_xl.modeling_transfo_xl.TransfoXLModelOutput or tuple(torch.FloatTensor)
A transformers.models.transfo_xl.modeling_transfo_xl.TransfoXLModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (TransfoXLConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
mems (List[torch.FloatTensor] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see mems
input) to speed up sequential decoding. The token ids which have their past given to this model should not
be passed as input ids as they have already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TransfoXLModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TransfoXLModel
import torch
tokenizer = AutoTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLModel.from_pretrained("transfo-xl-wt103")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
TransfoXLLMHeadModel
class transformers.TransfoXLLMHeadModel
( config )
Parameters
config (TransfoXLConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The Transformer-XL Model with a language modeling head on top (adaptive softmax with weights tied to the adaptive
input embeddings)
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
mems: typing.Optional[typing.List[torch.FloatTensor]] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.transfo_xl.modeling_transfo_xl.TransfoXLLMHeadModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see
mems output below). Can be used to speed up sequential decoding. The token ids which have their mems
given to this model should not be passed as input_ids as they have already been computed.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100
are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
Returns
transformers.models.transfo_xl.modeling_transfo_xl.TransfoXLLMHeadModelOutput or tuple(torch.FloatTensor)
A transformers.models.transfo_xl.modeling_transfo_xl.TransfoXLLMHeadModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (TransfoXLConfig) and inputs.
losses (torch.FloatTensor of shape (batch_size, sequence_length-1), optional, returned when labels is provided) — Language modeling losses (not reduced).
prediction_scores (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token after SoftMax).
mems (List[torch.FloatTensor] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see mems
input) to speed up sequential decoding. The token ids which have their past given to this model should not
be passed as input ids as they have already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
loss (torch.FloatTensor of shape (), optional, returned when labels is provided) — Reduced language modeling loss.
The TransfoXLLMHeadModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, TransfoXLLMHeadModel
tokenizer = AutoTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
logits = outputs.logits
TransfoXLForSequenceClassification
class transformers.TransfoXLForSequenceClassification
( config )
Parameters
config (TransfoXLConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The Transformer-XL Model transformer with a sequence classification head on top (linear layer).
TransfoXLForSequenceClassification uses the last token in order to do the classification, as other causal
models (e.g. GPT-1) do.
Since it does classification on the last token, it requires to know the position of the last token. If a
pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If
no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last value in
each row of the batch).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
mems: typing.Optional[typing.List[torch.FloatTensor]] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.transfo_xl.modeling_transfo_xl.TransfoXLSequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
mems (List[torch.FloatTensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see
mems output below). Can be used to speed up sequential decoding. The token ids which have their mems
given to this model should not be passed as input_ids as they have already been computed.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.models.transfo_xl.modeling_transfo_xl.TransfoXLSequenceClassifierOutputWithPast or tuple(torch.FloatTensor)
A transformers.models.transfo_xl.modeling_transfo_xl.TransfoXLSequenceClassifierOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (TransfoXLConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
mems (List[torch.FloatTensor] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see mems
input) to speed up sequential decoding. The token ids which have their past given to this model should not
be passed as input ids as they have already been computed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TransfoXLForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, TransfoXLForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLForSequenceClassification.from_pretrained("transfo-xl-wt103")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TransfoXLForSequenceClassification.from_pretrained("transfo-xl-wt103", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, TransfoXLForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLForSequenceClassification.from_pretrained("transfo-xl-wt103", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TransfoXLForSequenceClassification.from_pretrained(
... "transfo-xl-wt103", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
TFTransfoXLModel
class transformers.TFTransfoXLModel
( *args, **kwargs )
Parameters
config (TransfoXLConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Transformer-XL Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
mems: List[tf.Tensor] | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.models.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLModelOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
mems (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see
mems output below). Can be used to speed up sequential decoding. The token ids which have their mems
given to this model should not be passed as input_ids as they have already been computed.
head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.models.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLModelOutput or tuple(tf.Tensor)
A transformers.models.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (TransfoXLConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
mems (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see mems
input) to speed up sequential decoding. The token ids which have their past given to this model should not
be passed as input ids as they have already been computed.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFTransfoXLModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFTransfoXLModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("transfo-xl-wt103")
model = TFTransfoXLModel.from_pretrained("transfo-xl-wt103")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
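Building on the example above, a minimal sketch of carrying the cached mems across two consecutive segments (the second call reuses outputs.mems rather than re-passing the earlier token ids; the segment boundaries here are purely illustrative):
first = tokenizer("Hello, my dog is cute.", return_tensors="tf")
second = tokenizer("It likes to play fetch.", return_tensors="tf")
outputs = model(first)  # no memory is available for the first segment
outputs = model(second["input_ids"], mems=outputs.mems)  # reuse the cached hidden states
last_hidden_states = outputs.last_hidden_state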
TFTransfoXLLMHeadModel
class transformers.TFTransfoXLLMHeadModel
(
*args
**kwargs
)
Parameters
config (TransfoXLConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The Transformer-XL Model with a language modeling head on top (adaptive softmax with weights tied to the adaptive
input embeddings)
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
mems: List[tf.Tensor] | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: bool = False
)
→
transformers.models.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLLMHeadModelOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
mems (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see
mems output below). Can be used to speed up sequential decoding. The token ids which have their mems
given to this model should not be passed as input_ids as they have already been computed.
head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.models.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLLMHeadModelOutput or tuple(tf.Tensor)
A transformers.models.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLLMHeadModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (TransfoXLConfig) and inputs.
losses (tf.Tensor of shape (batch_size, sequence_length-1), optional, returned when labels is provided) — Language modeling losses (not reduced).
prediction_scores (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token after SoftMax).
mems (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see mems
input) to speed up sequential decoding. The token ids which have their past given to this model should not
be passed as input ids as they have already been computed.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFTransfoXLLMHeadModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFTransfoXLLMHeadModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("transfo-xl-wt103")
model = TFTransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
logits = outputs.logits
TFTransfoXLForSequenceClassification
class transformers.TFTransfoXLForSequenceClassification
(
*args
**kwargs
)
Parameters
config (TransfoXLConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The Transformer-XL Model transformer with a sequence classification head on top (linear layer).
TFTransfoXLForSequenceClassification uses the last token in order to do the classification, as other causal
models (e.g. GPT-1, GPT-2) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If
no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when inputs_embeds are passed instead of input_ids, it does the same (takes the last value in
each row of the batch).
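As a rough illustration of this last-token selection, here is a hand-rolled sketch (not the library's internal code) that assumes right-padding and a hypothetical pad_token_id of 0:
import tensorflow as tf
input_ids = tf.constant([[5, 8, 3, 0, 0],  # 0 plays the role of pad_token_id here
                         [7, 2, 9, 4, 1]])
pad_token_id = 0
non_pad = tf.cast(tf.not_equal(input_ids, pad_token_id), tf.int32)
last_token_index = tf.reduce_sum(non_pad, axis=-1) - 1  # [2, 4]: the positions used for classification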
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing, you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
mems: List[tf.Tensor] | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.models.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLSequenceClassifierOutputWithPast or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
mems (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see
mems output below). Can be used to speed up sequential decoding. The token ids which have their mems
given to this model should not be passed as input_ids as they have already been computed.
head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.models.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLSequenceClassifierOutputWithPast or tuple(tf.Tensor)
A transformers.models.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLSequenceClassifierOutputWithPast or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (TransfoXLConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
mems (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see mems
input) to speed up sequential decoding. The token ids which have their past given to this model should not
be passed as input ids as they have already been computed.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFTransfoXLForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFTransfoXLForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("transfo-xl-wt103")
model = TFTransfoXLForSequenceClassification.from_pretrained("transfo-xl-wt103")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFTransfoXLForSequenceClassification.from_pretrained("transfo-xl-wt103", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
Internal Layers
class transformers.AdaptiveEmbedding
(
n_token
d_embed
d_proj
cutoffs
div_val = 1
sample_softmax = False
)
class transformers.TFAdaptiveEmbedding
(
*args
**kwargs
)
Time Series Transformer
This is a recently introduced model, so the API hasn’t been tested extensively. There may be some bugs or slight
breaking changes in the future. If you see something strange, file a GitHub issue.
Overview
The Time Series Transformer model is a vanilla encoder-decoder Transformer for time series forecasting.
Tips:
Similar to other models in the library, TimeSeriesTransformerModel is the raw Transformer without any head on top, and TimeSeriesTransformerForPrediction
adds a distribution head on top of the former, which can be used for time-series forecasting. Note that this is a so-called probabilistic forecasting model, not a
point forecasting model. This means that the model learns a distribution, from which one can sample. The model doesn’t directly output values.
TimeSeriesTransformerForPrediction consists of 2 blocks: an encoder, which takes a context_length of time series values as input (called past_values),
and a decoder, which predicts a prediction_length of time series values into the future (called future_values). During training, one needs to provide
pairs of (past_values and future_values) to the model.
In addition to the raw (past_values and future_values), one typically provides additional features to the model. These can be the following:
past_time_features: temporal features which the model will add to past_values. These serve as “positional encodings” for the Transformer encoder.
Examples are “day of the month”, “month of the year”, etc. as scalar values (and then stacked together as a vector).
e.g. if a given time-series value was obtained on the 11th of August, then one could have [11, 8] as time feature vector (11 being “day of the month”, 8 being “month of the year”).
future_time_features: temporal features which the model will add to future_values. These serve as “positional encodings” for the Transformer decoder.
Examples are “day of the month”, “month of the year”, etc. as scalar values (and then stacked together as a vector).
e.g. if a given time-series value was obtained on the 11th of August, then one could have [11, 8] as time feature vector (11 being “day of the month”, 8 being “month of the year”).
static_categorical_features: categorical features which are static over time (i.e., have the same value for all past_values and future_values).
An example here is the store ID or region ID that identifies a given time-series.
Note that these features need to be known for ALL data points (also those in the future).
static_real_features: real-valued features which are static over time (i.e., have the same value for all past_values and future_values).
An example here is the image representation of the product for which you have the time-series values (like the ResNet embedding of a “shoe” picture,
if your time-series is about the sales of shoes).
Note that these features need to be known for ALL data points (also those in the future).
The model is trained using “teacher-forcing”, similar to how a Transformer is trained for machine translation. This means that, during training, one shifts the
future_values one position to the right as input to the decoder, prepended by the last value of past_values. At each time step, the model needs to predict the
next target. So the set-up of training is similar to a GPT model for language, except that there’s no notion of decoder_start_token_id (we just use the last value
of the context as initial input for the decoder).
At inference time, we give the final value of the past_values as input to the decoder. Next, we can sample from the model to make a prediction at the next time step,
which is then fed to the decoder in order to make the next prediction (also called autoregressive generation).
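A hedged sketch of such autoregressive sampling with TimeSeriesTransformerForPrediction.generate(), reusing the prepared test batch from the TimeSeriesTransformerModel example further below (the exact batch contents and filenames are illustrative):
from huggingface_hub import hf_hub_download
import torch
from transformers import TimeSeriesTransformerForPrediction
file = hf_hub_download(
    repo_id="hf-internal-testing/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset"
)
batch = torch.load(file)
model = TimeSeriesTransformerForPrediction.from_pretrained("huggingface/time-series-transformer-tourism-monthly")
# autoregressively draw num_parallel_samples trajectories over the prediction window
outputs = model.generate(
    past_values=batch["past_values"],
    past_time_features=batch["past_time_features"],
    past_observed_mask=batch["past_observed_mask"],
    static_categorical_features=batch["static_categorical_features"],
    static_real_features=batch["static_real_features"],
    future_time_features=batch["future_time_features"],
)
# outputs.sequences has shape (batch_size, num_samples, prediction_length);
# averaging over the sample dimension gives a point forecast
mean_prediction = outputs.sequences.mean(dim=1)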
This model was contributed by kashif.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Check out the Time Series Transformer blog post on the Hugging Face blog: Probabilistic Time Series Forecasting with 🤗 Transformers
TimeSeriesTransformerConfig
class transformers.TimeSeriesTransformerConfig
(
prediction_length: typing.Optional[int] = None
context_length: typing.Optional[int] = None
distribution_output: str = 'student_t'
loss: str = 'nll'
input_size: int = 1
lags_sequence: typing.List[int] = [1, 2, 3, 4, 5, 6, 7]
scaling: typing.Union[str, bool, NoneType] = 'mean'
num_dynamic_real_features: int = 0
num_static_categorical_features: int = 0
num_static_real_features: int = 0
num_time_features: int = 0
cardinality: typing.Optional[typing.List[int]] = None
embedding_dimension: typing.Optional[typing.List[int]] = None
encoder_ffn_dim: int = 32
decoder_ffn_dim: int = 32
encoder_attention_heads: int = 2
decoder_attention_heads: int = 2
encoder_layers: int = 2
decoder_layers: int = 2
is_encoder_decoder: bool = True
activation_function: str = 'gelu'
d_model: int = 64
dropout: float = 0.1
encoder_layerdrop: float = 0.1
decoder_layerdrop: float = 0.1
attention_dropout: float = 0.1
activation_dropout: float = 0.1
num_parallel_samples: int = 100
init_std: float = 0.02
use_cache = True
**kwargs
)
Parameters
prediction_length (int) —
The prediction length for the decoder. In other words, the prediction horizon of the model. This value is
typically dictated by the dataset, and we recommend setting it appropriately.
context_length (int, optional, defaults to prediction_length) —
The context length for the encoder. If None, the context length will be the same as the
prediction_length.
distribution_output (string, optional, defaults to "student_t") —
The distribution emission head for the model. Could be either “student_t”, “normal” or “negative_binomial”.
loss (string, optional, defaults to "nll") —
The loss function for the model corresponding to the distribution_output head. For parametric
distributions it is the negative log likelihood (nll), which is currently the only supported one.
input_size (int, optional, defaults to 1) —
The size of the target variable which by default is 1 for univariate targets. Would be > 1 in case of
multivariate targets.
scaling (string or bool, optional, defaults to "mean") —
Whether to scale the input targets via “mean” scaler, “std” scaler or no scaler if None. If True, the
scaler is set to “mean”.
lags_sequence (list[int], optional, defaults to [1, 2, 3, 4, 5, 6, 7]) —
The lags of the input time series, used as covariates, often dictated by the frequency of the data. Default is
[1, 2, 3, 4, 5, 6, 7], but we recommend changing it appropriately based on the dataset.
num_time_features (int, optional, defaults to 0) —
The number of time features in the input time series.
num_dynamic_real_features (int, optional, defaults to 0) —
The number of dynamic real valued features.
num_static_categorical_features (int, optional, defaults to 0) —
The number of static categorical features.
num_static_real_features (int, optional, defaults to 0) —
The number of static real valued features.
cardinality (list[int], optional) —
The cardinality (number of different values) for each of the static categorical features. Should be a list
of integers, having the same length as num_static_categorical_features. Cannot be None if
num_static_categorical_features is > 0.
embedding_dimension (list[int], optional) —
The dimension of the embedding for each of the static categorical features. Should be a list of integers,
having the same length as num_static_categorical_features. Cannot be None if
num_static_categorical_features is > 0.
d_model (int, optional, defaults to 64) —
Dimensionality of the transformer layers.
encoder_layers (int, optional, defaults to 2) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 2) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 2) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 2) —
Number of attention heads for each attention layer in the Transformer decoder.
encoder_ffn_dim (int, optional, defaults to 32) —
Dimension of the “intermediate” (often named feed-forward) layer in the encoder.
decoder_ffn_dim (int, optional, defaults to 32) —
Dimension of the “intermediate” (often named feed-forward) layer in the decoder.
activation_function (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and decoder. If string, "gelu" and
"relu" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the encoder and decoder.
encoder_layerdrop (float, optional, defaults to 0.1) —
The dropout probability for the attention and fully connected layers for each encoder layer.
decoder_layerdrop (float, optional, defaults to 0.1) —
The dropout probability for the attention and fully connected layers for each decoder layer.
attention_dropout (float, optional, defaults to 0.1) —
The dropout probability for the attention probabilities.
activation_dropout (float, optional, defaults to 0.1) —
The dropout probability used between the two layers of the feed-forward networks.
num_parallel_samples (int, optional, defaults to 100) —
The number of samples to generate in parallel for each time step of inference.
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated normal weight initialization distribution.
use_cache (bool, optional, defaults to True) —
Whether to use the past key/values attentions (if applicable to the model) to speed up decoding.
This is the configuration class to store the configuration of a TimeSeriesTransformerModel. It is used to
instantiate a Time Series Transformer model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the Time Series
Transformer
huggingface/time-series-transformer-tourism-monthly
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerModel
# Initializing a Time Series Transformer configuration with 12 time steps for prediction
configuration = TimeSeriesTransformerConfig(prediction_length=12)
# Randomly initializing a model (with random weights) from the configuration
model = TimeSeriesTransformerModel(configuration)
# Accessing the model configuration
configuration = model.config
TimeSeriesTransformerModel
class transformers.TimeSeriesTransformerModel
(
config: TimeSeriesTransformerConfig
)
Parameters
config (TimeSeriesTransformerConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare Time Series Transformer Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
past_values: Tensor
past_time_features: Tensor
past_observed_mask: Tensor
static_categorical_features: typing.Optional[torch.Tensor] = None
static_real_features: typing.Optional[torch.Tensor] = None
future_values: typing.Optional[torch.Tensor] = None
future_time_features: typing.Optional[torch.Tensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
use_cache: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqTSModelOutput or tuple(torch.FloatTensor)
Parameters
past_values (torch.FloatTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, input_size)) —
Past values of the time series, that serve as context in order to predict the future. The sequence size of
this tensor must be larger than the context_length of the model, since the model will use the larger size
to construct lag features, i.e. additional values from the past which are added in order to serve as “extra
context”.
The sequence_length here is equal to config.context_length + max(config.lags_sequence), which if no
lags_sequence is configured, is equal to config.context_length + 7 (as by default, the largest
look-back index in config.lags_sequence is 7). The property _past_length returns the actual length of
the past.
The past_values is what the Transformer encoder gets as input (with optional additional features, such as
static_categorical_features, static_real_features, past_time_features and lags).
Optionally, missing values need to be replaced with zeros and indicated via the past_observed_mask.
For multivariate time series, the input_size > 1 dimension is required and corresponds to the number of
variates in the time series per time step.
past_time_features (torch.FloatTensor of shape (batch_size, sequence_length, num_features)) —
Required time features, which the model internally will add to past_values. These could be things like
“month of year”, “day of the month”, etc. encoded as vectors (for instance as Fourier features). These
could also be so-called “age” features, which basically help the model know “at which point in life” a
time-series is. Age features have small values for distant past time steps and increase monotonically the
more we approach the current time step. Holiday features are also a good example of time features.
These features serve as the “positional encodings” of the inputs. So contrary to a model like BERT, where
the position encodings are learned from scratch internally as parameters of the model, the Time Series
Transformer requires to provide additional time features. The Time Series Transformer only learns
additional embeddings for static_categorical_features.
Additional dynamic real covariates can be concatenated to this tensor, with the caveat that these features
must be known at prediction time.
The num_features here is equal to config.num_time_features + config.num_dynamic_real_features.
past_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, input_size), optional) —
Boolean mask to indicate which past_values were observed and which were missing. Mask values selected in
[0, 1]:
1 for values that are observed,
0 for values that are missing (i.e. NaNs that were replaced by zeros).
static_categorical_features (torch.LongTensor of shape (batch_size, number of static categorical features), optional) —
Optional static categorical features for which the model will learn an embedding, which it will add to the
values of the time series.
Static categorical features are features which have the same value for all time steps (static over time).
A typical example of a static categorical feature is a time series ID.
static_real_features (torch.FloatTensor of shape (batch_size, number of static real features), optional) —
Optional static real features which the model will add to the values of the time series.
Static real features are features which have the same value for all time steps (static over time).
A typical example of a static real feature is promotion information.
future_values (torch.FloatTensor of shape (batch_size, prediction_length) or (batch_size, prediction_length, input_size), optional) —
Future values of the time series, that serve as labels for the model. The future_values is what the
Transformer needs during training to learn to output, given the past_values.
The sequence length here is equal to prediction_length.
See the demo notebook and code snippets for details.
Optionally, during training any missing values need to be replaced with zeros and indicated via the
future_observed_mask.
For multivariate time series, the input_size > 1 dimension is required and corresponds to the number of
variates in the time series per time step.
future_time_features (torch.FloatTensor of shape (batch_size, prediction_length, num_features)) —
Required time features for the prediction window, which the model internally will add to future_values.
These could be things like “month of year”, “day of the month”, etc. encoded as vectors (for instance as
Fourier features). These could also be so-called “age” features, which basically help the model know “at
which point in life” a time-series is. Age features have small values for distant past time steps and
increase monotonically the more we approach the current time step. Holiday features are also a good example
of time features.
These features serve as the “positional encodings” of the inputs. So contrary to a model like BERT, where
the position encodings are learned from scratch internally as parameters of the model, the Time Series
Transformer requires to provide additional time features. The Time Series Transformer only learns
additional embeddings for static_categorical_features.
Additional dynamic real covariates can be concatenated to this tensor, with the caveat that these features
must be known at prediction time.
The num_features here is equal to config.num_time_features + config.num_dynamic_real_features.
future_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, input_size), optional) —
Boolean mask to indicate which future_values were observed and which were missing. Mask values selected
in [0, 1]:
1 for values that are observed,
0 for values that are missing (i.e. NaNs that were replaced by zeros).
This mask is used to filter out missing values for the final loss calculation.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on certain token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Mask to avoid performing attention on certain token indices. By default, a causal mask will be used, to
make sure the model can only look at previous inputs in order to predict the future.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of last_hidden_state, hidden_states (optional) and attentions (optional)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) (optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqTSModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqTSModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (TimeSeriesTransformerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
loc (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Shift values of each time series’ context window which is used to give the model inputs of the same
magnitude and then used to shift back to the original magnitude.
scale (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Scaling values of each time series’ context window which is used to give the model inputs of the same
magnitude and then used to rescale back to the original magnitude.
static_features (torch.FloatTensor of shape (batch_size, feature size), optional) — Static features of each time series in a batch which are copied to the covariates at inference time.
The TimeSeriesTransformerModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from huggingface_hub import hf_hub_download
import torch
from transformers import TimeSeriesTransformerModel
file = hf_hub_download(
... repo_id="hf-internal-testing/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset"
... )
batch = torch.load(file)
model = TimeSeriesTransformerModel.from_pretrained("huggingface/time-series-transformer-tourism-monthly")
# during training, one provides both past and future values
# as well as possible additional features
outputs = model(
... past_values=batch["past_values"],
... past_time_features=batch["past_time_features"],
... past_observed_mask=batch["past_observed_mask"],
... static_categorical_features=batch["static_categorical_features"],
... static_real_features=batch["static_real_features"],
... future_values=batch["future_values"],
... future_time_features=batch["future_time_features"],
... )
last_hidden_state = outputs.last_hidden_state
TimeSeriesTransformerForPrediction
class transformers.TimeSeriesTransformerForPrediction
(
config: TimeSeriesTransformerConfig
)
Parameters
config (TimeSeriesTransformerConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The Time Series Transformer Model with a distribution head on top for time-series forecasting.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
past_values: Tensor
past_time_features: Tensor
past_observed_mask: Tensor
static_categorical_features: typing.Optional[torch.Tensor] = None
static_real_features: typing.Optional[torch.Tensor] = None
future_values: typing.Optional[torch.Tensor] = None
future_time_features: typing.Optional[torch.Tensor] = None
future_observed_mask: typing.Optional[torch.Tensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
output_hidden_states: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
use_cache: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqTSModelOutput or tuple(torch.FloatTensor)
Parameters
past_values (torch.FloatTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, input_size)) —
Past values of the time series, that serve as context in order to predict the future. The sequence size of
this tensor must be larger than the context_length of the model, since the model will use the larger size
to construct lag features, i.e. additional values from the past which are added in order to serve as “extra
context”.
The sequence_length here is equal to config.context_length + max(config.lags_sequence), which if no
lags_sequence is configured, is equal to config.context_length + 7 (as by default, the largest
look-back index in config.lags_sequence is 7). The property _past_length returns the actual length of
the past.
The past_values is what the Transformer encoder gets as input (with optional additional features, such as
static_categorical_features, static_real_features, past_time_features and lags).
Optionally, missing values need to be replaced with zeros and indicated via the past_observed_mask.
For multivariate time series, the input_size > 1 dimension is required and corresponds to the number of
variates in the time series per time step.
past_time_features (torch.FloatTensor of shape (batch_size, sequence_length, num_features)) —
Required time features, which the model internally will add to past_values. These could be things like
“month of year”, “day of the month”, etc. encoded as vectors (for instance as Fourier features). These
could also be so-called “age” features, which basically help the model know “at which point in life” a
time-series is. Age features have small values for distant past time steps and increase monotonically the
more we approach the current time step. Holiday features are also a good example of time features.
These features serve as the “positional encodings” of the inputs. So contrary to a model like BERT, where
the position encodings are learned from scratch internally as parameters of the model, the Time Series
Transformer requires to provide additional time features. The Time Series Transformer only learns
additional embeddings for static_categorical_features.
Additional dynamic real covariates can be concatenated to this tensor, with the caveat that these features
must be known at prediction time.
The num_features here is equal to config.num_time_features + config.num_dynamic_real_features.
past_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, input_size), optional) —
Boolean mask to indicate which past_values were observed and which were missing. Mask values selected in
[0, 1]:
1 for values that are observed,
0 for values that are missing (i.e. NaNs that were replaced by zeros).
static_categorical_features (torch.LongTensor of shape (batch_size, number of static categorical features), optional) —
Optional static categorical features for which the model will learn an embedding, which it will add to the
values of the time series.
Static categorical features are features which have the same value for all time steps (static over time).
A typical example of a static categorical feature is a time series ID.
static_real_features (torch.FloatTensor of shape (batch_size, number of static real features), optional) —
Optional static real features which the model will add to the values of the time series.
Static real features are features which have the same value for all time steps (static over time).
A typical example of a static real feature is promotion information.
future_values (torch.FloatTensor of shape (batch_size, prediction_length) or (batch_size, prediction_length, input_size), optional) —
Future values of the time series, that serve as labels for the model. The future_values is what the
Transformer needs during training to learn to output, given the past_values.
The sequence length here is equal to prediction_length.
See the demo notebook and code snippets for details.
Optionally, during training any missing values need to be replaced with zeros and indicated via the
future_observed_mask.
For multivariate time series, the input_size > 1 dimension is required and corresponds to the number of
variates in the time series per time step.
future_time_features (torch.FloatTensor of shape (batch_size, prediction_length, num_features)) —
Required time features for the prediction window, which the model internally will add to future_values.
These could be things like “month of year”, “day of the month”, etc. encoded as vectors (for instance as
Fourier features). These could also be so-called “age” features, which basically help the model know “at
which point in life” a time-series is. Age features have small values for distant past time steps and
increase monotonically the more we approach the current time step. Holiday features are also a good example
of time features.
These features serve as the “positional encodings” of the inputs. So contrary to a model like BERT, where
the position encodings are learned from scratch internally as parameters of the model, the Time Series
Transformer requires to provide additional time features. The Time Series Transformer only learns
additional embeddings for static_categorical_features.
Additional dynamic real covariates can be concatenated to this tensor, with the caveat that these features
must be known at prediction time.
The num_features here is equal to config.num_time_features + config.num_dynamic_real_features.
future_observed_mask (torch.BoolTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, input_size), optional) —
Boolean mask to indicate which future_values were observed and which were missing. Mask values selected
in [0, 1]:
1 for values that are observed,
0 for values that are missing (i.e. NaNs that were replaced by zeros).
This mask is used to filter out missing values for the final loss calculation.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on certain token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Mask to avoid performing attention on certain token indices. By default, a causal mask will be used, to
make sure the model can only look at previous inputs in order to predict the future.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of last_hidden_state, hidden_states (optional) and attentions (optional)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) (optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqTSModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqTSModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (TimeSeriesTransformerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
loc (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Shift values of each time series' context window, which are used to give the model inputs of the same
magnitude and then used to shift back to the original magnitude.
scale (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Scaling values of each time series' context window, which are used to give the model inputs of the same
magnitude and then used to rescale back to the original magnitude.
static_features (torch.FloatTensor of shape (batch_size, feature_size), optional) — Static features of each time series in a batch, which are copied to the covariates at inference time.
The TimeSeriesTransformerForPrediction forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from huggingface_hub import hf_hub_download
import torch
from transformers import TimeSeriesTransformerForPrediction
file = hf_hub_download(
... repo_id="hf-internal-testing/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset"
... )
batch = torch.load(file)
model = TimeSeriesTransformerForPrediction.from_pretrained(
... "huggingface/time-series-transformer-tourism-monthly"
... )
# during training, one provides both past and future values
# as well as possible additional features
outputs = model(
... past_values=batch["past_values"],
... past_time_features=batch["past_time_features"],
... past_observed_mask=batch["past_observed_mask"],
... static_categorical_features=batch["static_categorical_features"],
... static_real_features=batch["static_real_features"],
... future_values=batch["future_values"],
... future_time_features=batch["future_time_features"],
... )
loss = outputs.loss
loss.backward()
# during inference, one only provides past values
# as well as possible additional features
# the model autoregressively generates future values
outputs = model.generate(
... past_values=batch["past_values"],
... past_time_features=batch["past_time_features"],
... past_observed_mask=batch["past_observed_mask"],
... static_categorical_features=batch["static_categorical_features"],
... static_real_features=batch["static_real_features"],
... future_time_features=batch["future_time_features"],
... )
mean_prediction = outputs.sequences.mean(dim=1)
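Beyond the mean shown above, the sampled trajectories in outputs.sequences (usually laid out as (batch_size, number of parallel samples, prediction_length)) can be summarized into other statistics, for example empirical prediction intervals. A minimal follow-up sketch, with the 10%/90% quantile levels chosen purely for illustration:
# outputs.sequences stacks the sampled future trajectories along dim=1,
# so summary statistics are simply reductions over that dimension
samples = outputs.sequences
median_prediction = samples.median(dim=1).values
lower, upper = samples.quantile(torch.tensor([0.1, 0.9]), dim=1)  # empirical 80% prediction interval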
DeiT
This is a recently introduced model, so the API hasn't been tested extensively. There may be some bugs or slight
breaking changes to fix in the future. If you see something strange, file a Github Issue.
Overview
The DeiT model was proposed in Training data-efficient image transformers & distillation through attention by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre
Sablayrolles, Hervé Jégou. The Vision Transformer (ViT) introduced in Dosovitskiy et al., 2020 has shown that one can match or even outperform existing convolutional neural
networks using a Transformer encoder (BERT-like). However, the ViT models introduced in that paper required training on
expensive infrastructure for multiple weeks, using external data. DeiT (data-efficient image transformers) are more
efficiently trained transformers for image classification, requiring far less data and far less computing resources
compared to the original ViT models.
The abstract from the paper is the following:
Recently, neural networks purely based on attention were shown to address image understanding tasks such as image
classification. However, these visual transformers are pre-trained with hundreds of millions of images using an
expensive infrastructure, thereby limiting their adoption. In this work, we produce a competitive convolution-free
transformer by training on Imagenet only. We train them on a single computer in less than 3 days. Our reference vision
transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external
data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation
token ensuring that the student learns from the teacher through attention. We show the interest of this token-based
distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets
for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and
models.
Tips:
Compared to ViT, DeiT models use a so-called distillation token to effectively learn from a teacher (which, in the
DeiT paper, is a ResNet-like model). The distillation token is learned through backpropagation, by interacting with
the class ([CLS]) and patch tokens through the self-attention layers.
There are 2 ways to fine-tune distilled models, either (1) in a classic way, by only placing a prediction head on top
of the final hidden state of the class token and not using the distillation signal, or (2) by placing both a
prediction head on top of the class token and on top of the distillation token. In that case, the [CLS] prediction
head is trained using regular cross-entropy between the prediction of the head and the ground-truth label, while the
distillation prediction head is trained using hard distillation (cross-entropy between the prediction of the
distillation head and the label predicted by the teacher). At inference time, one takes the average prediction
between both heads as final prediction. (2) is also called “fine-tuning with distillation”, because one relies on a
teacher that has already been fine-tuned on the downstream dataset. In terms of models, (1) corresponds to
DeiTForImageClassification and (2) corresponds to
DeiTForImageClassificationWithTeacher (a minimal sketch of both set-ups follows these tips).
Note that the authors also did try soft distillation for (2) (in which case the distillation prediction head is
trained using KL divergence to match the softmax output of the teacher), but hard distillation gave the best results.
All released checkpoints were pre-trained and fine-tuned on ImageNet-1k only. No external data was used. This is in
contrast with the original ViT model, which used external data like the JFT-300M dataset/Imagenet-21k for
pre-training.
The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into
ViTModel or ViTForImageClassification. Techniques like data
augmentation, optimization, and regularization were used in order to simulate training on a much larger dataset
(while only using ImageNet-1k for pre-training). There are 4 variants available (in 3 different sizes):
facebook/deit-tiny-patch16-224, facebook/deit-small-patch16-224, facebook/deit-base-patch16-224 and
facebook/deit-base-patch16-384. Note that one should use DeiTImageProcessor in order to
prepare images for the model.
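To make the two fine-tuning set-ups described in the tips concrete, here is a minimal, hypothetical sketch. Option (1) simply fine-tunes DeiTForImageClassification on your own labels; option (2) has to be wired up by hand, since DeiTForImageClassificationWithTeacher supports inference only, by combining the regular cross-entropy on the [CLS] head with a hard-distillation cross-entropy on the distillation head. The number of labels, the choice of teacher, and the equal weighting of the two terms are illustrative assumptions, and the loss function assumes you already have logits from both heads plus the teacher's logits.
import torch.nn.functional as F
from transformers import DeiTForImageClassification

# option (1): classic fine-tuning, a prediction head on top of the final hidden state of the [CLS] token
model = DeiTForImageClassification.from_pretrained(
    "facebook/deit-base-distilled-patch16-224",
    num_labels=10,  # hypothetical number of target classes
    ignore_mismatched_sizes=True,
)

# option (2), sketched by hand: hard distillation = cross-entropy between the distillation head
# and the labels predicted by the teacher, combined with the usual cross-entropy on the [CLS] head
def hard_distillation_loss(cls_logits, distillation_logits, labels, teacher_logits):
    teacher_labels = teacher_logits.argmax(dim=-1)  # hard labels predicted by the teacher
    loss_cls = F.cross_entropy(cls_logits, labels)
    loss_distill = F.cross_entropy(distillation_logits, teacher_labels)
    return 0.5 * (loss_cls + loss_distill)  # equal weighting is an illustrative choice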
This model was contributed by nielsr. The TensorFlow version of this model was added by amyeroberts.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DeiT.
Image Classification
DeiTForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
Besides that:
DeiTForMaskedImageModeling is supported by this example script.
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
DeiTConfig
class transformers.DeiTConfig
(
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
initializer_range = 0.02
layer_norm_eps = 1e-12
image_size = 224
patch_size = 16
num_channels = 3
qkv_bias = True
encoder_stride = 16
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 16) —
The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) —
The number of input channels.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries, keys and values.
encoder_stride (int, optional, defaults to 16) —
Factor to increase the spatial resolution by in the decoder head for masked image modeling.
This is the configuration class to store the configuration of a DeiTModel. It is used to instantiate a DeiT
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the DeiT
facebook/deit-base-distilled-patch16-224
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import DeiTConfig, DeiTModel
# Initializing a DeiT deit-base-distilled-patch16-224 style configuration
configuration = DeiTConfig()
# Initializing a model (with random weights) from the deit-base-distilled-patch16-224 style configuration
model = DeiTModel(configuration)
# Accessing the model configuration
configuration = model.config
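The parameters documented above can also be overridden when constructing the configuration, for example to target a different input resolution; the values below are purely illustrative:
from transformers import DeiTConfig, DeiTModel

# illustrative: 384x384 inputs with the default 16x16 patches and 3 channels
custom_configuration = DeiTConfig(image_size=384, patch_size=16, num_channels=3)
custom_model = DeiTModel(custom_configuration)  # randomly initialized weights with this geometry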
DeiTFeatureExtractor
class transformers.DeiTFeatureExtractor
(
*args
**kwargs
)
__call__
(
images
**kwargs
)
Preprocess an image or a batch of images.
DeiTImageProcessor
class transformers.DeiTImageProcessor
(
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = 3
do_center_crop: bool = True
crop_size: typing.Dict[str, int] = None
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_rescale: bool = True
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by
do_resize in preprocess.
size (Dict[str, int], optional, defaults to {"height": 256, "width": 256}) —
Size of the image after resize. Can be overridden by size in preprocess.
resample (PILImageResampling filter, optional, defaults to PILImageResampling.BICUBIC) —
Resampling filter to use if resizing the image. Can be overridden by resample in preprocess.
do_center_crop (bool, optional, defaults to True) —
Whether to center crop the image. If the input size is smaller than crop_size along any edge, the image
is padded with 0’s and then center cropped. Can be overridden by do_center_crop in preprocess.
crop_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) —
Desired output size when applying center-cropping. Can be overridden by crop_size in preprocess.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale
parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the
preprocess method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess
method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or a list of floats, one per channel in the image.
Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or a list of floats, one per channel in
the image. Can be overridden by the image_std parameter in the preprocess method.
Constructs a DeiT image processor.
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: bool = None
size: typing.Dict[str, int] = None
resample = None
do_center_crop: bool = None
crop_size: typing.Dict[str, int] = None
do_rescale: bool = None
rescale_factor: float = None
do_normalize: bool = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resize.
resample (PILImageResampling, optional, defaults to self.resample) —
PILImageResampling filter to use if resizing the image. Only has an effect if do_resize is set to
True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the image after center crop. If one edge of the image is smaller than crop_size, it will be
padded with zeros and then cropped.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values to the range [0, 1].
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
None: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Preprocess an image or batch of images.
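Since the resizing, cropping, rescaling and normalization steps described above can all be overridden either in the constructor or per call, a short usage sketch may help; the image URL and the override values below are arbitrary choices:
from PIL import Image
import requests
from transformers import DeiTImageProcessor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = DeiTImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
# per-call overrides of the documented defaults (illustrative values)
inputs = image_processor(
    images=image,
    size={"height": 256, "width": 256},
    crop_size={"height": 224, "width": 224},
    return_tensors="pt",
)
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])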
DeiTModel
class transformers.DeiTModel
(
config: DeiTConfig
add_pooling_layer: bool = True
use_mask_token: bool = False
)
Parameters
config (DeiTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare DeiT Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
bool_masked_pos: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
DeiTImageProcessor.call() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches), optional) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DeiTConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DeiTModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, DeiTModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
model = DeiTModel.from_pretrained("facebook/deit-base-distilled-patch16-224")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 198, 768]
DeiTForMaskedImageModeling
class transformers.DeiTForMaskedImageModeling
(
config: DeiTConfig
)
Parameters
config (DeiTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeiT Model with a decoder on top for masked image modeling, as proposed in SimMIM.
Note that we provide a script to pre-train this model on custom data in our examples
directory.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
bool_masked_pos: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedImageModelingOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
DeiTImageProcessor.call() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches)) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.modeling_outputs.MaskedImageModelingOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedImageModelingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DeiTConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when bool_masked_pos is provided) — Reconstruction loss.
reconstruction (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Reconstructed / completed images.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states
(also called feature maps) of the model at the output of each stage.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The DeiTForMaskedImageModeling forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, DeiTForMaskedImageModeling
import torch
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
model = DeiTForMaskedImageModeling.from_pretrained("facebook/deit-base-distilled-patch16-224")
num_patches = (model.config.image_size // model.config.patch_size) ** 2
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
# create random boolean mask of shape (batch_size, num_patches)
bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss, reconstructed_pixel_values = outputs.loss, outputs.reconstruction
list(reconstructed_pixel_values.shape)
[1, 3, 224, 224]
DeiTForImageClassification
class transformers.DeiTForImageClassification
(
config: DeiTConfig
)
Parameters
config (DeiTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeiT Model transformer with an image classification head on top (a linear layer on top of the final hidden state of
the [CLS] token) e.g. for ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
DeiTImageProcessor.call() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DeiTConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states
(also called feature maps) of the model at the output of each stage.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DeiTForImageClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, DeiTForImageClassification
import torch
from PIL import Image
import requests
torch.manual_seed(3)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# note: the checkpoint on the hub was trained as a DeiTForImageClassificationWithTeacher,
# so the classification head of DeiTForImageClassification will be randomly initialized, hence the predictions will be random
image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
model = DeiTForImageClassification.from_pretrained("facebook/deit-base-distilled-patch16-224")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
Predicted class: magpie
DeiTForImageClassificationWithTeacher
class transformers.DeiTForImageClassificationWithTeacher
<
source
>
(
config: DeiTConfig
)
Parameters
config (DeiTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeiT Model transformer with image classification heads on top (a linear layer on top of the final hidden state of
the [CLS] token and a linear layer on top of the final hidden state of the distillation token) e.g. for ImageNet.
Warning: This model supports inference only. Fine-tuning with distillation (i.e., with a teacher) is not yet
supported.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.deit.modeling_deit.DeiTForImageClassificationWithTeacherOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
DeiTImageProcessor.call() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.deit.modeling_deit.DeiTForImageClassificationWithTeacherOutput or tuple(torch.FloatTensor)
A transformers.models.deit.modeling_deit.DeiTForImageClassificationWithTeacherOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DeiTConfig) and inputs.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Prediction scores as the average of the cls_logits and distillation logits.
cls_logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Prediction scores of the classification head (i.e. the linear layer on top of the final hidden state of the
class token).
distillation_logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Prediction scores of the distillation head (i.e. the linear layer on top of the final hidden state of the
distillation token).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The DeiTForImageClassificationWithTeacher forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, DeiTForImageClassificationWithTeacher
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
model = DeiTForImageClassificationWithTeacher.from_pretrained("facebook/deit-base-distilled-patch16-224")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tabby, tabby cat
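Since the output documented above also exposes the individual heads, you can check how much the classification head and the distillation head agree. Continuing from the example:
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits  # average of the two heads
cls_logits = outputs.cls_logits  # classification ([CLS]) head only
distillation_logits = outputs.distillation_logits  # distillation head only
print(model.config.id2label[int(cls_logits.argmax(-1))])
print(model.config.id2label[int(distillation_logits.argmax(-1))])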
TFDeiTModel
class transformers.TFDeiTModel
(
*args
**kwargs
)
Parameters
config (DeiTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare DeiT Model transformer outputting raw hidden-states without any specific head on top.
This model is a TensorFlow
tf.keras.layers.Layer. Use it as a regular
TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and behavior.
call
(
pixel_values: tf.Tensor | None = None
bool_masked_pos: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
DeiTImageProcessor.call() for details.
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DeiTConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input; you're often better off
averaging or pooling the sequence of hidden-states for the whole input sequence.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDeiTModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, TFDeiTModel
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
model = TFDeiTModel.from_pretrained("facebook/deit-base-distilled-patch16-224")
inputs = image_processor(image, return_tensors="tf")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 198, 768]
TFDeiTForMaskedImageModeling
class transformers.TFDeiTForMaskedImageModeling
(
*args
**kwargs
)
Parameters
config (DeiTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeiT Model with a decoder on top for masked image modeling, as proposed in SimMIM.
This model is a TensorFlow
tf.keras.layers.Layer. Use it as a regular
TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and behavior.
call
(
pixel_values: tf.Tensor | None = None
bool_masked_pos: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFMaskedImageModelingOutput or tuple(tf.Tensor)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
DeiTImageProcessor.call() for details.
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (tf.Tensor of type bool and shape (batch_size, num_patches)) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.modeling_tf_outputs.TFMaskedImageModelingOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedImageModelingOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DeiTConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when bool_masked_pos is provided) — Reconstruction loss.
reconstruction (tf.Tensor of shape (batch_size, num_channels, height, width)) — Reconstructed / completed images.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for
the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states (also called
feature maps) of the model at the output of each stage.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDeiTForMaskedImageModeling forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, TFDeiTForMaskedImageModeling
import tensorflow as tf
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
model = TFDeiTForMaskedImageModeling.from_pretrained("facebook/deit-base-distilled-patch16-224")
num_patches = (model.config.image_size // model.config.patch_size) ** 2
pixel_values = image_processor(images=image, return_tensors="tf").pixel_values
# create random boolean mask of shape (batch_size, num_patches)
bool_masked_pos = tf.cast(tf.random.uniform((1, num_patches), minval=0, maxval=2, dtype=tf.int32), tf.bool)
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss, reconstructed_pixel_values = outputs.loss, outputs.reconstruction
list(reconstructed_pixel_values.shape)
[1, 3, 224, 224]
TFDeiTForImageClassification
class transformers.TFDeiTForImageClassification
(
*args
**kwargs
)
Parameters
config (DeiTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeiT Model transformer with an image classification head on top (a linear layer on top of the final hidden state of
the [CLS] token) e.g. for ImageNet.
This model is a TensorFlow
tf.keras.layers.Layer. Use it as a regular
TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and behavior.
call
(
pixel_values: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
labels: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFImageClassifierOutput or tuple(tf.Tensor)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
DeiTImageProcessor.call() for details.
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFImageClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFImageClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DeiTConfig) and inputs.
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for
the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states (also called
feature maps) of the model at the output of each stage.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFDeiTForImageClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, TFDeiTForImageClassification
import tensorflow as tf
from PIL import Image
import requests
tf.keras.utils.set_random_seed(3)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# note: the checkpoint on the hub was trained as a TFDeiTForImageClassificationWithTeacher,
# so the classification head of TFDeiTForImageClassification will be randomly initialized, hence the predictions will be random
image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
model = TFDeiTForImageClassification.from_pretrained("facebook/deit-base-distilled-patch16-224")
inputs = image_processor(images=image, return_tensors="tf")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = tf.math.argmax(logits, axis=-1)[0]
print("Predicted class:", model.config.id2label[int(predicted_class_idx)])
Predicted class: little blue heron, Egretta caerulea
TFDeiTForImageClassificationWithTeacher
class transformers.TFDeiTForImageClassificationWithTeacher
(
*args
**kwargs
)
Parameters
config (DeiTConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
DeiT Model transformer with image classification heads on top (a linear layer on top of the final hidden state of
the [CLS] token and a linear layer on top of the final hidden state of the distillation token) e.g. for ImageNet.
Warning: This model supports inference only. Fine-tuning with distillation (i.e., with a teacher) is not yet
supported.
This model is a TensorFlow
tf.keras.layers.Layer. Use it as a regular
TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and behavior.
call
(
pixel_values: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.models.deit.modeling_tf_deit.TFDeiTForImageClassificationWithTeacherOutput or tuple(tf.Tensor)
Parameters
pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
DeiTImageProcessor.call() for details.
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.deit.modeling_tf_deit.TFDeiTForImageClassificationWithTeacherOutput or tuple(tf.Tensor)
A transformers.models.deit.modeling_tf_deit.TFDeiTForImageClassificationWithTeacherOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (DeiTConfig) and inputs.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Prediction scores as the average of the cls_logits and distillation logits.
cls_logits (tf.Tensor of shape (batch_size, config.num_labels)) — Prediction scores of the classification head (i.e. the linear layer on top of the final hidden state of the
class token).
distillation_logits (tf.Tensor of shape (batch_size, config.num_labels)) — Prediction scores of the distillation head (i.e. the linear layer on top of the final hidden state of the
distillation token).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus
the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The TFDeiTForImageClassificationWithTeacher forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, TFDeiTForImageClassificationWithTeacher
import tensorflow as tf
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
model = TFDeiTForImageClassificationWithTeacher.from_pretrained("facebook/deit-base-distilled-patch16-224")
inputs = image_processor(image, return_tensors="tf")
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = int(tf.math.argmax(logits, axis=-1))
print(model.config.id2label[predicted_label])
tabby, tabby cat
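As a complementary sketch reusing the objects from the example above, the two per-head scores documented in the output fields can be inspected directly; as stated above, the combined logits are their average.
outputs = model(**inputs)
cls_logits = outputs.cls_logits                    # head on the final [CLS] hidden state
distillation_logits = outputs.distillation_logits  # head on the final distillation-token hidden state
# Per the output documentation above, `logits` is the average of the two heads
averaged_logits = (cls_logits + distillation_logits) / 2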
ViTMAE
Overview
The ViTMAE model was proposed in Masked Autoencoders Are Scalable Vision Learners by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li,
Piotr Dollár, Ross Girshick. The paper shows that, by pre-training a Vision Transformer (ViT) to reconstruct pixel values for masked patches, one can get results after
fine-tuning that outperform supervised pre-training.
The abstract from the paper is the following:
This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the
input image and reconstruct the missing pixels. It is based on two core designs. First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates
only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask
tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task. Coupling these two designs
enables us to train large models efficiently and effectively: we accelerate training (by 3x or more) and improve accuracy. Our scalable approach allows for learning high-capacity
models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance in downstream
tasks outperforms supervised pre-training and shows promising scaling behavior.
Tips:
MAE (masked auto encoding) is a method for self-supervised pre-training of Vision Transformers (ViTs). The pre-training objective is relatively simple:
by masking a large portion (75%) of the image patches, the model must reconstruct raw pixel values. One can use ViTMAEForPreTraining for this purpose.
After pre-training, one “throws away” the decoder used to reconstruct pixels, and one uses the encoder for fine-tuning/linear probing. This means that after
fine-tuning, one can directly plug in the weights into a ViTForImageClassification.
One can use ViTImageProcessor to prepare images for the model. See the code examples for more info.
Note that the encoder of MAE is only used to encode the visual patches. The encoded patches are then concatenated with mask tokens, which the decoder (which also
consists of Transformer blocks) takes as input. Each mask token is a shared, learned vector that indicates the presence of a missing patch to be predicted. Fixed
sin/cos position embeddings are added both to the input of the encoder and the decoder.
For a visual understanding of how MAEs work you can check out this post.
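For instance, a minimal sketch of the weight hand-off mentioned in the tips above, assuming the facebook/vit-mae-base checkpoint (loading it this way typically warns that the decoder weights are unused and that the classification head is newly initialized):
from transformers import ViTForImageClassification
# Sketch only: reuse the MAE pre-trained encoder inside a classification model.
# The classification head is randomly initialized, so the model still needs to be
# fine-tuned on labeled data before it is useful.
model = ViTForImageClassification.from_pretrained(
    "facebook/vit-mae-base",
    num_labels=10,  # illustrative value; set this to the number of classes in your dataset
)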
MAE architecture. Taken from the original paper.
This model was contributed by nielsr. TensorFlow version of the model was contributed by sayakpaul and
ariG23498 (equal contribution). The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViTMAE.
ViTMAEForPreTraining is supported by this example script, allowing you to pre-train the model from scratch/further pre-train the model on custom data.
A notebook that illustrates how to visualize reconstructed pixel values with ViTMAEForPreTraining can be found here.
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ViTMAEConfig
class transformers.ViTMAEConfig
(
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
initializer_range = 0.02
layer_norm_eps = 1e-12
image_size = 224
patch_size = 16
num_channels = 3
qkv_bias = True
decoder_num_attention_heads = 16
decoder_hidden_size = 512
decoder_num_hidden_layers = 8
decoder_intermediate_size = 2048
mask_ratio = 0.75
norm_pix_loss = False
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 16) —
The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) —
The number of input channels.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries, keys and values.
decoder_num_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the decoder.
decoder_hidden_size (int, optional, defaults to 512) —
Dimensionality of the decoder.
decoder_num_hidden_layers (int, optional, defaults to 8) —
Number of hidden layers in the decoder.
decoder_intermediate_size (int, optional, defaults to 2048) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the decoder.
mask_ratio (float, optional, defaults to 0.75) —
The ratio of masked patch tokens to the total number of patch tokens in the input sequence.
norm_pix_loss (bool, optional, defaults to False) —
Whether or not to train with normalized pixels (see Table 3 in the paper). Using normalized pixels improved
representation quality in the experiments of the authors.
This is the configuration class to store the configuration of a ViTMAEModel. It is used to instantiate a ViT MAE
model according to the specified arguments, defining the model architecture. Instantiating a configuration with
the defaults will yield a similar configuration to that of the ViT
facebook/vit-mae-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import ViTMAEConfig, ViTMAEModel
# Initializing a ViT MAE vit-mae-base style configuration
configuration = ViTMAEConfig()
# Initializing a model (with random weights) from the vit-mae-base style configuration
model = ViTMAEModel(configuration)
# Accessing the model configuration
configuration = model.config
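Building on the example above, the MAE-specific options documented earlier (e.g. mask_ratio and norm_pix_loss) can be overridden when creating the configuration; the values below are purely illustrative:
# Sketch: a configuration with a lower masking ratio and normalized-pixel loss
custom_configuration = ViTMAEConfig(mask_ratio=0.6, norm_pix_loss=True)
custom_model = ViTMAEModel(custom_configuration)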
ViTMAEModel
class transformers.ViTMAEModel
(
config
)
Parameters
config (ViTMAEConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ViTMAE Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
noise: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.vit_mae.modeling_vit_mae.ViTMAEModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call()
for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.vit_mae.modeling_vit_mae.ViTMAEModelOutput or tuple(torch.FloatTensor)
A transformers.models.vit_mae.modeling_vit_mae.ViTMAEModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ViTMAEConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
mask (torch.FloatTensor of shape (batch_size, sequence_length)) — Tensor indicating which patches are masked (1) and which are not (0).
ids_restore (torch.LongTensor of shape (batch_size, sequence_length)) — Tensor containing the original index of the (shuffled) masked patches.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The ViTMAEModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, ViTMAEModel
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base")
model = ViTMAEModel.from_pretrained("facebook/vit-mae-base")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
ViTMAEForPreTraining
class transformers.ViTMAEForPreTraining
(
config
)
Parameters
config (ViTMAEConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The ViTMAE Model transformer with the decoder on top for self-supervised pre-training.
Note that we provide a script to pre-train this model on custom data in our examples
directory.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
noise: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.vit_mae.modeling_vit_mae.ViTMAEForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call()
for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.vit_mae.modeling_vit_mae.ViTMAEForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.vit_mae.modeling_vit_mae.ViTMAEForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ViTMAEConfig) and inputs.
loss (torch.FloatTensor of shape (1,)) — Pixel reconstruction loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, patch_size ** 2 * num_channels)) — Pixel reconstruction logits.
mask (torch.FloatTensor of shape (batch_size, sequence_length)) — Tensor indicating which patches are masked (1) and which are not (0).
ids_restore (torch.LongTensor of shape (batch_size, sequence_length)) — Tensor containing the original index of the (shuffled) masked patches.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The ViTMAEForPreTraining forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, ViTMAEForPreTraining
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base")
model = ViTMAEForPreTraining.from_pretrained("facebook/vit-mae-base")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
loss = outputs.loss
mask = outputs.mask
ids_restore = outputs.ids_restore
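As a small follow-up sketch based on the documented semantics of mask (1 marks a masked patch), one can verify that roughly mask_ratio of the patches were masked:
# `mask` has shape (batch_size, sequence_length); 1 = masked patch, 0 = visible patch
num_patches = mask.shape[-1]
num_masked = int(mask.sum().item())
print(f"masked {num_masked} of {num_patches} patches (~{num_masked / num_patches:.0%})")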
TFViTMAEModel
class transformers.TFViTMAEModel
(
*args
**kwargs
)
Parameters
config (ViTMAEConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ViTMAE Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with pixel_values only and nothing else: model(pixel_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([pixel_values, attention_mask]) or model([pixel_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
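For illustration, a minimal sketch of the first and third call styles described above (the random pixel_values tensor is only a stand-in for what an image processor would normally produce):
import tensorflow as tf
from transformers import TFViTMAEModel

model = TFViTMAEModel.from_pretrained("facebook/vit-mae-base")
# Dummy batch of shape (batch_size, num_channels, height, width), purely for illustration
pixel_values = tf.random.uniform((1, 3, 224, 224))
outputs = model(pixel_values)                    # single tensor as the first positional argument
outputs = model({"pixel_values": pixel_values})  # dict keyed by the documented input name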
call
(
pixel_values: TFModelInputType | None = None
noise: tf.Tensor = None
head_mask: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.models.vit_mae.modeling_tf_vit_mae.TFViTMAEModelOutput or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], each example of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call()
for details.
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used
in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.models.vit_mae.modeling_tf_vit_mae.TFViTMAEModelOutput or tuple(tf.Tensor)
A transformers.models.vit_mae.modeling_tf_vit_mae.TFViTMAEModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ViTMAEConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
mask (tf.Tensor of shape (batch_size, sequence_length)) — Tensor indicating which patches are masked (1) and which are not (0).
ids_restore (tf.Tensor of shape (batch_size, sequence_length)) — Tensor containing the original index of the (shuffled) masked patches.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus
the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The TFViTMAEModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, TFViTMAEModel
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base")
model = TFViTMAEModel.from_pretrained("facebook/vit-mae-base")
inputs = image_processor(images=image, return_tensors="tf")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
TFViTMAEForPreTraining
class transformers.TFViTMAEForPreTraining
(
*args
**kwargs
)
Parameters
config (ViTMAEConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The ViTMAE Model transformer with the decoder on top for self-supervised pre-training.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with pixel_values only and nothing else: model(pixel_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([pixel_values, attention_mask]) or model([pixel_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
pixel_values: TFModelInputType | None = None
noise: tf.Tensor = None
head_mask: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.models.vit_mae.modeling_tf_vit_mae.TFViTMAEForPreTrainingOutput or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], each example of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call()
for details.
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used
in eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.models.vit_mae.modeling_tf_vit_mae.TFViTMAEForPreTrainingOutput or tuple(tf.Tensor)
A transformers.models.vit_mae.modeling_tf_vit_mae.TFViTMAEForPreTrainingOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ViTMAEConfig) and inputs.
loss (tf.Tensor of shape (1,)) — Pixel reconstruction loss.
logits (tf.Tensor of shape (batch_size, sequence_length, patch_size ** 2 * num_channels)) — Pixel reconstruction logits.
mask (tf.Tensor of shape (batch_size, sequence_length)) — Tensor indicating which patches are masked (1) and which are not (0).
ids_restore (tf.Tensor of shape (batch_size, sequence_length)) — Tensor containing the original index of the (shuffled) masked patches.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus
the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The TFViTMAEForPreTraining forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, TFViTMAEForPreTraining
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base")
model = TFViTMAEForPreTraining.from_pretrained("facebook/vit-mae-base")
inputs = image_processor(images=image, return_tensors="tf")
outputs = model(**inputs)
loss = outputs.loss
mask = outputs.mask
ids_restore = outputs.ids_restore
mLUKE
Overview
The mLUKE model was proposed in mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. It’s a multilingual extension
of the LUKE model trained on the basis of XLM-RoBERTa.
It is based on XLM-RoBERTa and adds entity embeddings, which help improve performance on various downstream tasks
involving reasoning about entities, such as named entity recognition, extractive question answering, relation
classification, and cloze-style knowledge completion.
The abstract from the paper is the following:
Recent studies have shown that multilingual pretrained language models can be effectively improved with cross-lingual
alignment information from Wikipedia entities. However, existing methods only exploit entity information in pretraining
and do not explicitly use entities in downstream tasks. In this study, we explore the effectiveness of leveraging
entity representations for downstream cross-lingual tasks. We train a multilingual language model with 24 languages
with entity representations and show the model consistently outperforms word-based pretrained models in various
cross-lingual transfer tasks. We also analyze the model and the key insight is that incorporating entity
representations into the input allows us to extract more language-agnostic features. We also evaluate the model with a
multilingual cloze prompt task with the mLAMA dataset. We show that entity-based prompt elicits correct factual
knowledge more likely than using only word representations.
One can directly plug in the weights of mLUKE into a LUKE model, like so:
from transformers import LukeModel
model = LukeModel.from_pretrained("studio-ousia/mluke-base")
Note that mLUKE has its own tokenizer, MLukeTokenizer. You can initialize it as follows:
from transformers import MLukeTokenizer
tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base")
As mLUKE’s architecture is equivalent to that of LUKE, one can refer to LUKE’s documentation page for all
tips, code examples and notebooks.
This model was contributed by ryo0634. The original code can be found here.
MLukeTokenizer
class transformers.MLukeTokenizer
(
vocab_file
entity_vocab_file
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
task = None
max_entity_length = 32
max_mention_length = 30
entity_token_1 = '<ent>'
entity_token_2 = '<ent2>'
entity_unk_token = '[UNK]'
entity_pad_token = '[PAD]'
entity_mask_token = '[MASK]'
entity_mask2_token = '[MASK2]'
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
entity_vocab_file (str) —
Path to the entity vocabulary file.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
task (str, optional) —
Task for which you want to prepare sequences. One of "entity_classification",
"entity_pair_classification", or "entity_span_classification". If you specify this argument, the entity
sequence is automatically created based on the given entity span(s).
max_entity_length (int, optional, defaults to 32) —
The maximum length of entity_ids.
max_mention_length (int, optional, defaults to 30) —
The maximum number of tokens inside an entity span.
entity_token_1 (str, optional, defaults to <ent>) —
The special token used to represent an entity span in a word token sequence. This token is only used when
task is set to "entity_classification" or "entity_pair_classification".
entity_token_2 (str, optional, defaults to <ent2>) —
The special token used to represent an entity span in a word token sequence. This token is only used when
task is set to "entity_pair_classification".
additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) —
Additional special tokens used by the tokenizer.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from the all hypothesis (lattice)
using forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
sp_model (SentencePieceProcessor) —
The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Adapted from XLMRobertaTokenizer and LukeTokenizer. Based on
SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
__call__
(
text: typing.Union[str, typing.List[str]]
text_pair: typing.Union[str, typing.List[str], NoneType] = None
entity_spans: typing.Union[typing.List[typing.Tuple[int, int]], typing.List[typing.List[typing.Tuple[int, int]]], NoneType] = None
entity_spans_pair: typing.Union[typing.List[typing.Tuple[int, int]], typing.List[typing.List[typing.Tuple[int, int]]], NoneType] = None
entities: typing.Union[typing.List[str], typing.List[typing.List[str]], NoneType] = None
entities_pair: typing.Union[typing.List[str], typing.List[typing.List[str]], NoneType] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
max_entity_length: typing.Optional[int] = None
stride: int = 0
is_split_into_words: typing.Optional[bool] = False
pad_to_multiple_of: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
**kwargs
)
→
BatchEncoding
Parameters
text (str, List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence must be a string. Note that this
tokenizer does not support tokenization based on pretokenized strings.
text_pair (str, List[str], List[List[str]]) —
The sequence or batch of sequences to be encoded. Each sequence must be a string. Note that this
tokenizer does not support tokenization based on pretokenized strings.
entity_spans (List[Tuple[int, int]], List[List[Tuple[int, int]]], optional) —
The sequence or batch of sequences of entity spans to be encoded. Each sequence consists of tuples each
with two integers denoting character-based start and end positions of entities. If you specify
"entity_classification" or "entity_pair_classification" as the task argument in the constructor,
the length of each sequence must be 1 or 2, respectively. If you specify entities, the length of each
sequence must be equal to the length of each sequence of entities.
entity_spans_pair (List[Tuple[int, int]], List[List[Tuple[int, int]]], optional) —
The sequence or batch of sequences of entity spans to be encoded. Each sequence consists of tuples each
with two integers denoting character-based start and end positions of entities. If you specify the
task argument in the constructor, this argument is ignored. If you specify entities_pair, the
length of each sequence must be equal to the length of each sequence of entities_pair.
entities (List[str], List[List[str]], optional) —
The sequence or batch of sequences of entities to be encoded. Each sequence consists of strings
representing entities, i.e., special entities (e.g., [MASK]) or entity titles of Wikipedia (e.g., Los
Angeles). This argument is ignored if you specify the task argument in the constructor. The length of
each sequence must be equal to the length of each sequence of entity_spans. If you specify
entity_spans without specifying this argument, the entity sequence or the batch of entity sequences
is automatically constructed by filling it with the [MASK] entity.
entities_pair (List[str], List[List[str]], optional) —
The sequence or batch of sequences of entities to be encoded. Each sequence consists of strings
representing entities, i.e., special entities (e.g., [MASK]) or entity titles of Wikipedia (e.g., Los
Angeles). This argument is ignored if you specify the task argument in the constructor. The length of
each sequence must be equal to the length of each sequence of entity_spans_pair. If you specify
entity_spans_pair without specifying this argument, the entity sequence or the batch of entity
sequences is automatically constructed by filling it with the [MASK] entity.
max_entity_length (int, optional) —
The maximum length of entity_ids.
add_special_tokens (bool, optional, defaults to True) —
Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) —
Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or
to the maximum acceptable input length for the model if that argument is not provided. This will
truncate token by token, removing a token from the longest sequence in the pair if a pair of
sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional) —
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length
is required by one of the truncation/padding parameters. If the model has no specific maximum input
length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) —
If set to a number along with max_length, the overflowing tokens returned when
return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence
returned to provide some overlap between truncated and overflowing sequences. The value of this
argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) —
Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the
tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace)
which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value. Requires padding to be activated.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
>= 7.5 (Volta).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) —
Whether to return token type IDs. If left to the default, will return the token type IDs according to
the specific tokenizer’s default, defined by the return_outputs attribute.
What are token type IDs?
return_attention_mask (bool, optional) —
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
return_overflowing_tokens (bool, optional, defaults to False) —
Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch
of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead
of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) —
Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) —
Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast, if using
Python’s tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) —
Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) —
Whether or not to print more information and warnings.
**kwargs — passed to the self.tokenize() method
Returns
BatchEncoding
A BatchEncoding with the following fields:
input_ids — List of token ids to be fed to a model.
What are input IDs?
token_type_ids — List of token type ids to be fed to a model (when return_token_type_ids=True or
if “token_type_ids” is in self.model_input_names).
What are token type IDs?
attention_mask — List of indices specifying which tokens should be attended to by the model (when
return_attention_mask=True or if “attention_mask” is in self.model_input_names).
What are attention masks?
entity_ids — List of entity ids to be fed to a model.
What are input IDs?
entity_position_ids — List of entity positions in the input sequence to be fed to a model.
entity_token_type_ids — List of entity token type ids to be fed to a model (when
return_token_type_ids=True or if “entity_token_type_ids” is in self.model_input_names).
What are token type IDs?
entity_attention_mask — List of indices specifying which entities should be attended to by the model
(when return_attention_mask=True or if “entity_attention_mask” is in self.model_input_names).
What are attention masks?
entity_start_positions — List of the start positions of entities in the word token sequence (when
task="entity_span_classification").
entity_end_positions — List of the end positions of entities in the word token sequence (when
task="entity_span_classification").
overflowing_tokens — List of overflowing tokens sequences (when a max_length is specified and
return_overflowing_tokens=True).
num_truncated_tokens — Number of tokens truncated (when a max_length is specified and
return_overflowing_tokens=True).
special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying
regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length — The length of the inputs (when return_length=True)
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
sequences, depending on the task you want to prepare them for.
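For example, a hedged sketch of calling the tokenizer with character-level entity spans (the (start, end) indices below are assumed to point at "Iran" and "Afghanistan" in the example sentence):
from transformers import MLukeTokenizer

tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base")
text = "ISO 639-3 uses the code fas for the dialects spoken across Iran and Afghanistan."
entity_spans = [(59, 63), (68, 79)]  # character-based spans of "Iran" and "Afghanistan"
encoding = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
# entity_ids, entity_position_ids and entity_attention_mask are returned next to the usual input_ids
print(sorted(encoding.keys()))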
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
RemBERT
Overview
The RemBERT model was proposed in Rethinking Embedding Coupling in Pre-trained Language Models by Hyung Won Chung, Thibault Févry, Henry Tsai, Melvin Johnson, Sebastian Ruder.
The abstract from the paper is the following:
We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art
pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to
significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By
reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on
standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that
allocating additional capacity to the output embedding provides benefits to the model that persist through the
fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger
output embeddings prevent the model’s last layers from overspecializing to the pre-training task and encourage
Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these
findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the
number of parameters at the fine-tuning stage.
Tips:
For fine-tuning, RemBERT can be thought of as a bigger version of mBERT with an ALBERT-like factorization of the
embedding layer. The embeddings are not tied in pre-training, in contrast with BERT, which enables smaller input
embeddings (preserved during fine-tuning) and bigger output embeddings (discarded at fine-tuning). The tokenizer is
also similar to the Albert one rather than the BERT one.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
RemBertConfig
class transformers.RemBertConfig
(
vocab_size = 250300
hidden_size = 1152
num_hidden_layers = 32
num_attention_heads = 18
input_embedding_size = 256
output_embedding_size = 1664
intermediate_size = 4608
hidden_act = 'gelu'
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
classifier_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
use_cache = True
pad_token_id = 0
bos_token_id = 312
eos_token_id = 313
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 250300) —
Vocabulary size of the RemBERT model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling RemBertModel or TFRemBertModel.
hidden_size (int, optional, defaults to 1152) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 32) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 18) —
Number of attention heads for each attention layer in the Transformer encoder.
input_embedding_size (int, optional, defaults to 256) —
Dimensionality of the input embeddings.
output_embedding_size (int, optional, defaults to 1664) —
Dimensionality of the output embeddings.
intermediate_size (int, optional, defaults to 4608) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
classifier_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the classifier layer when fine-tuning.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling RemBertModel or TFRemBertModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
This is the configuration class to store the configuration of a RemBertModel. It is used to instantiate a
RemBERT model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the RemBERT
google/rembert architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import RemBertModel, RemBertConfig
# Initializing a RemBERT rembert style configuration
configuration = RemBertConfig()
# Initializing a model from the rembert style configuration
model = RemBertModel(configuration)
# Accessing the model configuration
configuration = model.config
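Along the same lines, a sketch of a smaller custom configuration that keeps RemBERT's decoupled (small) input and (large) output embedding sizes; the values are illustrative only:
# Sketch: decoupled input/output embedding sizes with a reduced encoder
small_configuration = RemBertConfig(
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    input_embedding_size=128,
    output_embedding_size=1024,
)
small_model = RemBertModel(small_configuration)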
RemBertTokenizer
class transformers.RemBertTokenizer
(
vocab_file
do_lower_case = False
remove_space = True
keep_accents = True
bos_token = '[CLS]'
eos_token = '[SEP]'
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
bos_token (str, optional, defaults to "[CLS]") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "[SEP]") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
sp_model (SentencePieceProcessor) —
The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Construct a RemBERT tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A RemBERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
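For illustration, a minimal sketch of both formats, assuming the public google/rembert checkpoint (and the sentencepiece dependency) is available:
from transformers import RemBertTokenizer
tokenizer = RemBertTokenizer.from_pretrained("google/rembert")
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("How are you?"))
single = tokenizer.build_inputs_with_special_tokens(ids_a)  # [CLS] X [SEP]
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)  # [CLS] A [SEP] B [SEP]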
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
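Continuing the sketch from build_inputs_with_special_tokens above, the returned mask marks the positions that would be filled by special tokens:
mask = tokenizer.get_special_tokens_mask(ids_a, ids_b)
# 1 at the [CLS]/[SEP] positions, 0 over the ordinary sequence tokens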
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A RemBERT
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
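Continuing the same sketch, the returned list lines up with the diagram above:
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
# 0s over "[CLS] A [SEP]", 1s over "B [SEP]"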
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
RemBertTokenizerFast
class transformers.RemBertTokenizerFast
(
vocab_file = None
tokenizer_file = None
do_lower_case = True
remove_space = True
keep_accents = False
bos_token = '[CLS]'
eos_token = '[SEP]'
unk_token = '<unk>'
sep_token = '[SEP]'
pad_token = '<pad>'
cls_token = '[CLS]'
mask_token = '[MASK]'
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
remove_space (bool, optional, defaults to True) —
Whether or not to strip the text when tokenizing (removing excess spaces before and after the string).
keep_accents (bool, optional, defaults to False) —
Whether or not to keep accents when tokenizing.
bos_token (str, optional, defaults to "[CLS]") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "[SEP]") —
The end of sequence token. When building a sequence using special tokens, this is not the token
that is used for the end of sequence. The token used is the sep_token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
Construct a “fast” RemBERT tokenizer (backed by HuggingFace’s tokenizers library). Based on
Unigram. This
tokenizer inherits from PreTrainedTokenizerFast, which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional, defaults to None) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A RemBERT sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional, defaults to None) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Set to True if the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional, defaults to None) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Creates a mask from the two sequences passed to be used in a sequence-pair classification task. A RemBERT
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
RemBertModel
class transformers.RemBertModel
(
config
add_pooling_layer = True
)
Parameters
config (RemBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare RemBERT Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention is
all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder the model needs to be initialized with the is_decoder argument of the configuration set
to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and
add_cross_attention set to True; an encoder_hidden_states is then expected as an input to the forward pass.
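A minimal sketch of that decoder setup (the checkpoint name is only an example; a freshly constructed RemBertConfig works the same way):
from transformers import RemBertConfig, RemBertModel
config = RemBertConfig.from_pretrained("google/rembert")
config.is_decoder = True
config.add_cross_attention = True  # required for the cross-attention / Seq2Seq case
decoder = RemBertModel.from_pretrained("google/rembert", config=config)
# decoder(...) now accepts encoder_hidden_states (and encoder_attention_mask) in its forward pass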
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RemBertConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The RemBertModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RemBertModel
import torch
tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = RemBertModel.from_pretrained("google/rembert")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
RemBertForCausalLM
class transformers.RemBertForCausalLM
(
config
)
Parameters
config (RemBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RemBERT Model with a language modeling head on top for CLM fine-tuning.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are
ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RemBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The RemBertForCausalLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RemBertForCausalLM, RemBertConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("google/rembert")
config = RemBertConfig.from_pretrained("google/rembert")
config.is_decoder = True
model = RemBertForCausalLM.from_pretrained("google/rembert", config=config)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
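Because use_cache defaults to True, the decoder-configured model above can also be driven through generate(), which reuses past_key_values between steps. A sketch with the objects from the example above; since the checkpoint was pretrained with masked language modeling, the generated text only illustrates the API, not model quality:
generated_ids = model.generate(inputs.input_ids, max_new_tokens=10)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))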
RemBertForMaskedLM
class transformers.RemBertForMaskedLM
(
config
)
Parameters
config (RemBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RemBERT Model with a language modeling head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.LongTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RemBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RemBertForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RemBertForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = RemBertForMaskedLM.from_pretrained("google/rembert")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
RemBertForSequenceClassification
class transformers.RemBertForSequenceClassification
(
config
)
Parameters
config (RemBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RemBERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: FloatTensor = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RemBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RemBertForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, RemBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = RemBertForSequenceClassification.from_pretrained("google/rembert")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = RemBertForSequenceClassification.from_pretrained("google/rembert", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, RemBertForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = RemBertForSequenceClassification.from_pretrained("google/rembert", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = RemBertForSequenceClassification.from_pretrained(
... "google/rembert", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
RemBertForMultipleChoice
class transformers.RemBertForMultipleChoice
(
config
)
Parameters
config (RemBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RemBERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: FloatTensor = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RemBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RemBertForMultipleChoice forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RemBertForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = RemBertForMultipleChoice.from_pretrained("google/rembert")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
RemBertForTokenClassification
class transformers.RemBertForTokenClassification
(
config
)
Parameters
config (RemBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RemBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
input_ids: FloatTensor = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RemBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RemBertForTokenClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RemBertForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = RemBertForTokenClassification.from_pretrained("google/rembert")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
RemBertForQuestionAnswering
class transformers.RemBertForQuestionAnswering
(
config
)
Parameters
config (RemBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RemBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
input_ids: FloatTensor = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RemBertConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RemBertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RemBertForQuestionAnswering
import torch

tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = RemBertForQuestionAnswering.from_pretrained("google/rembert")

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]

# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
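To turn the predicted span back into text, the token IDs can be decoded with the tokenizer. This is a small illustrative addition, not part of the original example; for a checkpoint fine-tuned on extractive QA the decoded span should read "nice puppet":

predicted_answer = tokenizer.decode(predict_answer_tokens)
print(predicted_answer)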
TFRemBertModel
class transformers.TFRemBertModel(*args, **kwargs)
Parameters
config (RemBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare RemBERT Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don't need to worry
about any of this, as you can just pass inputs like you would to any other Python function! A minimal sketch of the three input formats is shown below.
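As a small illustration of the three input formats described above (a sketch assuming eager execution and the google/rembert checkpoint used in the examples below), the following calls are equivalent:

from transformers import AutoTokenizer, TFRemBertModel

tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = TFRemBertModel.from_pretrained("google/rembert")
enc = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1. all inputs as keyword arguments (like PyTorch models)
out_kwargs = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])

# 2. a list with the tensors in the order given in the docstring
out_list = model([enc["input_ids"], enc["attention_mask"]])

# 3. a dictionary mapping input names to tensors
out_dict = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})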
call(
    input_ids: TFModelInputType | None = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    position_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    encoder_hidden_states: np.ndarray | tf.Tensor | None = None,
    encoder_attention_mask: np.ndarray | tf.Tensor | None = None,
    past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    training: Optional[bool] = False,
) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], with each example having the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training, True during generation.
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RemBertConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input; you are often better off
averaging or pooling the sequence of hidden-states over the whole input sequence.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The TFRemBertModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRemBertModel
import tensorflow as tf

tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = TFRemBertModel.from_pretrained("google/rembert")

inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)

last_hidden_states = outputs.last_hidden_state
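The optional outputs described above can also be requested per call (in eager mode). A short sketch continuing the example, not part of the original snippet:

outputs = model(inputs, output_hidden_states=True, output_attentions=True)
print(len(outputs.hidden_states))   # embeddings output + one entry per layer
print(outputs.attentions[0].shape)  # (batch_size, num_heads, sequence_length, sequence_length)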
TFRemBertForMaskedLM
class transformers.TFRemBertForMaskedLM(*args, **kwargs)
Parameters
config (RemBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RemBERT Model with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don't need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
    input_ids: TFModelInputType | None = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    position_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    labels: np.ndarray | tf.Tensor | None = None,
    training: Optional[bool] = False,
) → transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], with each example having the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RemBertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRemBertForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRemBertForMaskedLM
import tensorflow as tf

tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = TFRemBertForMaskedLM.from_pretrained("google/rembert")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf")
logits = model(**inputs).logits

# retrieve index of [MASK]
mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
predicted_token_id = tf.math.argmax(selected_logits, axis=-1)

labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
# mask labels of non-[MASK] tokens
labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)

outputs = model(**inputs, labels=labels)
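If desired, the predicted token ID from the first snippet can be decoded back into text with the tokenizer. An illustrative addition, not part of the original example; a well-trained MLM checkpoint should predict something close to "Paris":

print(tokenizer.decode(predicted_token_id))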
TFRemBertForCausalLM
class transformers.TFRemBertForCausalLM(*args, **kwargs)
Parameters
config (RemBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RemBERT Model with a language modeling head on top for CLM fine-tuning.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don't need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
    input_ids: TFModelInputType | None = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    position_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    encoder_hidden_states: np.ndarray | tf.Tensor | None = None,
    encoder_attention_mask: np.ndarray | tf.Tensor | None = None,
    past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    labels: np.ndarray | tf.Tensor | None = None,
    training: Optional[bool] = False,
) → transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor)
Returns
transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RemBertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training, True during generation.
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.vocab_size - 1].
Example:
from transformers import AutoTokenizer, TFRemBertForCausalLM
import tensorflow as tf

tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = TFRemBertForCausalLM.from_pretrained("google/rembert")

inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)

logits = outputs.logits
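Because call accepts past_key_values and use_cache, cached key/value states can be reused when decoding step by step. A minimal sketch continuing the example above (assuming eager mode; not part of the original snippet):

outputs = model(inputs, use_cache=True)
past = outputs.past_key_values

# feed only the newly predicted token together with the cached states
next_token = tf.math.argmax(outputs.logits[:, -1, :], axis=-1)[:, None]
outputs = model(input_ids=next_token, past_key_values=past, use_cache=True)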
TFRemBertForSequenceClassification
class transformers.TFRemBertForSequenceClassification(*args, **kwargs)
Parameters
config (RemBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RemBERT Model transformer with a sequence classification/regression head on top e.g., for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don't need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
    input_ids: TFModelInputType | None = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    position_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    labels: np.ndarray | tf.Tensor | None = None,
    training: Optional[bool] = False,
) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], with each example having the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RemBertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRemBertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRemBertForSequenceClassification
import tensorflow as tf

tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = TFRemBertForSequenceClassification.from_pretrained("google/rembert")

inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])

# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFRemBertForSequenceClassification.from_pretrained("google/rembert", num_labels=num_labels)

labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
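The predicted class ID from the first snippet can be mapped to a human-readable label through the model config. An illustrative addition; for a generic checkpoint the labels are placeholders such as LABEL_0 and LABEL_1:

print(model.config.id2label[predicted_class_id])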
TFRemBertForMultipleChoice
class transformers.TFRemBertForMultipleChoice(*args, **kwargs)
Parameters
config (RemBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RemBERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don't need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
    input_ids: TFModelInputType | None = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    position_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    labels: np.ndarray | tf.Tensor | None = None,
    training: Optional[bool] = False,
) → transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], with each example having the shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices]
where num_choices is the size of the second dimension of the input tensors. (See input_ids above)
Returns
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RemBertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRemBertForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRemBertForMultipleChoice
import tensorflow as tf

tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = TFRemBertForMultipleChoice.from_pretrained("google/rembert")

prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."

encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
outputs = model(inputs)  # batch size is 1

# the linear classifier still needs to be trained
logits = outputs.logits
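The highest-scoring choice can then be read off the logits. An illustrative addition, where index 0 corresponds to choice0 and index 1 to choice1:

predicted_choice = int(tf.math.argmax(logits, axis=-1)[0])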
TFRemBertForTokenClassification
class transformers.TFRemBertForTokenClassification(*args, **kwargs)
Parameters
config (RemBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RemBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don't need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
    input_ids: TFModelInputType | None = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    position_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    labels: np.ndarray | tf.Tensor | None = None,
    training: Optional[bool] = False,
) → transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], with each example having the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RemBertConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRemBertForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRemBertForTokenClassification
import tensorflow as tf

tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = TFRemBertForTokenClassification.from_pretrained("google/rembert")

inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
)

logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)

# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]

labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
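To line up each predicted class with its token, the input IDs can be converted back to token strings. An illustrative addition, not part of the original example:

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].numpy().tolist())
print(list(zip(tokens, predicted_tokens_classes)))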
TFRemBertForQuestionAnswering
class transformers.TFRemBertForQuestionAnswering(*args, **kwargs)
Parameters
config (RemBertConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RemBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don't need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
    input_ids: TFModelInputType | None = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    position_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    start_positions: np.ndarray | tf.Tensor | None = None,
    end_positions: np.ndarray | tf.Tensor | None = None,
    training: Optional[bool] = False,
) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], with each example having the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RemBertConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRemBertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRemBertForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = TFRemBertForQuestionAnswering.from_pretrained("google/rembert")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = tf.constant([14])
target_end_index = tf.constant([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
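The predicted token span from the first snippet can be decoded back into text with the tokenizer (a minimal sketch; the exact string depends on the checkpoint's predictions):
tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)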
Swin Transformer V2
Overview
The Swin Transformer V2 model was proposed in Swin Transformer V2: Scaling Up Capacity and Resolution by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
The abstract from the paper is the following:
Large-scale NLP models have been shown to significantly improve the performance on language tasks with no signs of saturation. They also demonstrate amazing few-shot capabilities like that of human beings. This paper aims to explore large-scale models in computer vision. We tackle three major issues in training and application of large vision models, including training instability, resolution gaps between pre-training and fine-tuning, and hunger on labelled data. Three main techniques are proposed: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) A log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) A self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images. Through these techniques, this paper successfully trained a 3 billion-parameter Swin Transformer V2 model, which is the largest dense vision model to date, and makes it capable of training with images of up to 1,536×1,536 resolution. It set new performance records on 4 representative vision tasks, including ImageNet-V2 image classification, COCO object detection, ADE20K semantic segmentation, and Kinetics-400 video action classification. Also note our training is much more efficient than that in Google’s billion-level visual models, which consumes 40 times less labelled data and 40 times less training time.
Tips:
One can use the AutoImageProcessor API to prepare images for the model.
This model was contributed by nandwalritik.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Swin Transformer v2.
Image Classification
Swinv2ForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
Besides that:
Swinv2ForMaskedImageModeling is supported by this example script.
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Swinv2Config
class transformers.Swinv2Config
(
image_size = 224
patch_size = 4
num_channels = 3
embed_dim = 96
depths = [2, 2, 6, 2]
num_heads = [3, 6, 12, 24]
window_size = 7
mlp_ratio = 4.0
qkv_bias = True
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
drop_path_rate = 0.1
hidden_act = 'gelu'
use_absolute_embeddings = False
initializer_range = 0.02
layer_norm_eps = 1e-05
encoder_stride = 32
**kwargs
)
Parameters
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 4) —
The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) —
The number of input channels.
embed_dim (int, optional, defaults to 96) —
Dimensionality of patch embedding.
depths (list(int), optional, defaults to [2, 2, 6, 2]) —
Depth of each layer in the Transformer encoder.
num_heads (list(int), optional, defaults to [3, 6, 12, 24]) —
Number of attention heads in each layer of the Transformer encoder.
window_size (int, optional, defaults to 7) —
Size of windows.
mlp_ratio (float, optional, defaults to 4.0) —
Ratio of MLP hidden dimensionality to embedding dimensionality.
qkv_bias (bool, optional, defaults to True) —
Whether or not a learnable bias should be added to the queries, keys and values.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings and encoder.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
drop_path_rate (float, optional, defaults to 0.1) —
Stochastic depth rate.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder. If string, "gelu", "relu",
"selu" and "gelu_new" are supported.
use_absolute_embeddings (bool, optional, defaults to False) —
Whether or not to add absolute position embeddings to the patch embeddings.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-05) —
The epsilon used by the layer normalization layers.
encoder_stride (int, optional, defaults to 32) —
Factor to increase the spatial resolution by in the decoder head for masked image modeling.
This is the configuration class to store the configuration of a Swinv2Model. It is used to instantiate a Swin
Transformer v2 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Swin Transformer v2
microsoft/swinv2-tiny-patch4-window8-256
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import Swinv2Config, Swinv2Model
# Initializing a Swinv2 microsoft/swinv2-tiny-patch4-window8-256 style configuration
configuration = Swinv2Config()
# Initializing a model (with random weights) from the microsoft/swinv2-tiny-patch4-window8-256 style configuration
model = Swinv2Model(configuration)
# Accessing the model configuration
configuration = model.config
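Since every argument listed above is exposed on Swinv2Config, a configuration with, for example, a different input resolution or window size can be created by overriding the corresponding keyword arguments (a minimal sketch continuing the example above; the values are purely illustrative):
# Initializing a configuration with a larger input resolution and window size
custom_configuration = Swinv2Config(image_size=192, window_size=12)
custom_model = Swinv2Model(custom_configuration)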
Swinv2Model
class transformers.Swinv2Model
(
config
add_pooling_layer = True
use_mask_token = False
)
Parameters
config (Swinv2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Swinv2 Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch
documentation for all matters related to general usage and behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
bool_masked_pos: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.swinv2.modeling_swinv2.Swinv2ModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call()
for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches), optional) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.models.swinv2.modeling_swinv2.Swinv2ModelOutput or tuple(torch.FloatTensor)
A transformers.models.swinv2.modeling_swinv2.Swinv2ModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Swinv2Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size), optional, returned when add_pooling_layer=True is passed) — Average pooling of the last layer hidden-state.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each stage) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, hidden_size, height, width).
Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to
include the spatial dimensions.
The Swinv2Model forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, Swinv2Model
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")
model = Swinv2Model.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 64, 768]
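When add_pooling_layer=True (the default), the output also exposes a pooled representation obtained by average-pooling the last hidden state over the sequence dimension (a minimal check, continuing the example above):
list(outputs.pooler_output.shape)
[1, 768]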
Swinv2ForMaskedImageModeling
class transformers.Swinv2ForMaskedImageModeling
(
config
)
Parameters
config (Swinv2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Swinv2 Model with a decoder on top for masked image modeling, as proposed in
SimMIM.
Note that we provide a script to pre-train this model on custom data in our examples
directory.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch
documentation for all matters related to general usage and behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
bool_masked_pos: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.swinv2.modeling_swinv2.Swinv2MaskedImageModelingOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call()
for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches)) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.models.swinv2.modeling_swinv2.Swinv2MaskedImageModelingOutput or tuple(torch.FloatTensor)
A transformers.models.swinv2.modeling_swinv2.Swinv2MaskedImageModelingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Swinv2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when bool_masked_pos is provided) — Masked image modeling (MIM) loss.
reconstruction (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Reconstructed pixel values.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each stage) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, hidden_size, height, width).
Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to
include the spatial dimensions.
The Swinv2ForMaskedImageModeling forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, Swinv2ForMaskedImageModeling
import torch
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")
model = Swinv2ForMaskedImageModeling.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")
num_patches = (model.config.image_size // model.config.patch_size) ** 2
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
# create random boolean mask of shape (batch_size, num_patches)
bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss, reconstructed_pixel_values = outputs.loss, outputs.reconstruction
list(reconstructed_pixel_values.shape)
[1, 3, 256, 256]
Swinv2ForImageClassification
class transformers.Swinv2ForImageClassification
(
config
)
Parameters
config (Swinv2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Swinv2 Model transformer with an image classification head on top (a linear layer on top of the final hidden state
of the [CLS] token) e.g. for ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch
documentation for all matters related to general usage and behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.swinv2.modeling_swinv2.Swinv2ImageClassifierOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call()
for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.models.swinv2.modeling_swinv2.Swinv2ImageClassifierOutput or tuple(torch.FloatTensor)
A transformers.models.swinv2.modeling_swinv2.Swinv2ImageClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Swinv2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each stage) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, hidden_size, height, width).
Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to
include the spatial dimensions.
The Swinv2ForImageClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, Swinv2ForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")
model = Swinv2ForImageClassification.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
Egyptian cat
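To inspect the model's confidence rather than just the top class, the logits can be converted to probabilities and the highest-scoring labels listed (a minimal sketch, continuing the example above):
probabilities = torch.nn.functional.softmax(logits, dim=-1)
top5 = torch.topk(probabilities, k=5)
for score, idx in zip(top5.values[0], top5.indices[0]):
...     print(f"{model.config.id2label[idx.item()]}: {score.item():.3f}")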
TVLT
Overview
The TVLT model was proposed in TVLT: Textless Vision-Language Transformer
by Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal (the first three authors contributed equally). The Textless Vision-Language Transformer (TVLT) is a model that uses raw visual and audio inputs for vision-and-language representation learning, without using text-specific modules such as tokenization or automatic speech recognition (ASR). It can perform various audiovisual and vision-language tasks like retrieval, question answering, etc.
The abstract from the paper is the following:
In this work, we present the Textless Vision-Language Transformer (TVLT), where homogeneous transformer blocks take raw visual and audio inputs for vision-and-language representation learning with minimal modality-specific design, and do not use text-specific modules such as tokenization or automatic speech recognition (ASR). TVLT is trained by reconstructing masked patches of continuous video frames and audio spectrograms (masked autoencoding) and contrastive modeling to align video and audio. TVLT attains performance comparable to its text-based counterpart on various multimodal tasks, such as visual question answering, image retrieval, video retrieval, and multimodal sentiment analysis, with 28x faster inference speed and only 1/3 of the parameters. Our findings suggest the possibility of learning compact and efficient visual-linguistic representations from low-level visual and audio signals without assuming the prior existence of text.
Tips:
TVLT is a model that takes both pixel_values and audio_values as input. One can use TvltProcessor to prepare data for the model.
This processor wraps an image processor (for the image/video modality) and an audio feature extractor (for the audio modality) into one.
TVLT is trained with images/videos and audios of various sizes: the authors resize and crop the input images/videos to 224 and limit the length of audio spectrogram to 2048. To make batching of videos and audios possible, the authors use a pixel_mask that indicates which pixels are real/padding and audio_mask that indicates which audio values are real/padding.
The design of TVLT is very similar to that of a standard Vision Transformer (ViT) and masked autoencoder (MAE) as in ViTMAE. The difference is that the model includes embedding layers for the audio modality.
The PyTorch version of this model is only available in torch 1.10 and higher.
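As a minimal sketch of what the processor produces (random inputs, purely illustrative; the exact keys depend on the processor settings):
from transformers import TvltProcessor
import numpy as np
processor = TvltProcessor.from_pretrained("ZinengTang/tvlt-base")
video = list(np.random.randn(8, 3, 224, 224))
audio = list(np.random.randn(10000))
inputs = processor(video, audio, sampling_rate=44100, return_tensors="pt")
# expected keys include pixel_values, pixel_mask, audio_values and audio_mask
print({name: tensor.shape for name, tensor in inputs.items()})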
TVLT architecture. Taken from the original paper.
The original code can be found here. This model was contributed by Zineng Tang.
TvltConfig
class transformers.TvltConfig
(
image_size = 224
spectrogram_length = 2048
frequency_length = 128
image_patch_size = [16, 16]
audio_patch_size = [16, 16]
num_image_channels = 3
num_audio_channels = 1
num_frames = 8
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
initializer_range = 0.02
layer_norm_eps = 1e-06
qkv_bias = True
use_mean_pooling = False
decoder_num_attention_heads = 16
decoder_hidden_size = 512
decoder_num_hidden_layers = 8
decoder_intermediate_size = 2048
pixel_mask_ratio = 0.75
audio_mask_ratio = 0.15
audio_mask_type = 'frame-level'
task_matching = True
task_mae = True
loss_type = 'classification'
**kwargs
)
Parameters
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
spectrogram_length (int, optional, defaults to 2048) —
The time length of each audio spectrogram.
frequency_length (int, optional, defaults to 128) —
The frequency length of audio spectrogram.
image_patch_size (List[int], optional, defaults to [16, 16]) —
The size (resolution) of each image patch.
audio_patch_size (List[int], optional, defaults to [16, 16]) —
The size (resolution) of each audio patch.
num_image_channels (int, optional, defaults to 3) —
The number of input image channels.
num_audio_channels (int, optional, defaults to 1) —
The number of input audio channels.
num_frames (int, optional, defaults to 8) —
The maximum number of frames for an input video.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-6) —
The epsilon used by the layer normalization layers.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries, keys and values.
use_mean_pooling (bool, optional, defaults to False) —
Whether to mean pool the final hidden states instead of using the final hidden state of the [CLS] token.
decoder_num_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the decoder.
decoder_hidden_size (int, optional, defaults to 512) —
Dimensionality of the decoder.
decoder_num_hidden_layers (int, optional, defaults to 8) —
Number of hidden layers in the decoder.
decoder_intermediate_size (int, optional, defaults to 2048) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the decoder.
pixel_mask_ratio (float, optional, defaults to 0.75) —
Image patch masking ratio.
audio_mask_ratio (float, optional, defaults to 0.15) —
Audio patch masking ratio.
audio_mask_type (str, optional, defaults to "frame-level") —
Audio patch masking type, choose between “frame-level” and “patch-level”.
task_matching (bool, optional, defaults to True) —
Whether to use the vision-audio matching task in pretraining.
task_mae (bool, optional, defaults to True) —
Whether to use the masked autoencoder (MAE) task in pretraining.
loss_type (str, optional, defaults to "classification") —
Loss types including regression and classification.
This is the configuration class to store the configuration of a TvltModel. It is used to instantiate a TVLT
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the TVLT
ZinengTang/tvlt-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import TvltConfig, TvltModel
# Initializing a TVLT ZinengTang/tvlt-base style configuration
configuration = TvltConfig()
# Initializing a model (with random weights) from the ZinengTang/tvlt-base style configuration
model = TvltModel(configuration)
# Accessing the model configuration
configuration = model.config
TvltProcessor
class transformers.TvltProcessor
(
image_processor
feature_extractor
)
Parameters
image_processor (TvltImageProcessor) —
An instance of TvltImageProcessor. The image processor is a required input.
feature_extractor (TvltFeatureExtractor) —
An instance of TvltFeatureExtractor. The feature extractor is a required input.
Constructs a TVLT processor which wraps a TVLT image processor and TVLT feature extractor into a single processor.
TvltProcessor offers all the functionalities of TvltImageProcessor and TvltFeatureExtractor. See the
docstring of call() for more information.
__call__
(
images = None
audio = None
images_mixed = None
sampling_rate = None
mask_audio = False
mask_pixel = False
*args
**kwargs
)
Forwards the images argument to TvltImageProcessor’s preprocess() and the audio
argument to TvltFeatureExtractor’s call(). Please refer to the docstring of the
above two methods for more information.
TvltImageProcessor
class transformers.TvltImageProcessor
(
do_resize: bool = True
size: typing.Dict[str, int] = None
patch_size: typing.List[int] = [16, 16]
num_frames: int = 8
resample: Resampling = <Resampling.BILINEAR: 2>
do_center_crop: bool = True
crop_size: typing.Dict[str, int] = None
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = [0.5, 0.5, 0.5]
image_std: typing.Union[float, typing.List[float], NoneType] = [0.5, 0.5, 0.5]
init_mask_generator = False
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the
do_resize parameter in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 224}) —
Size of the output image after resizing. The shortest edge of the image will be resized to
size["shortest_edge"] while maintaining the aspect ratio of the original image. Can be overridden by
size in the preprocess method.
patch_size (List[int] optional, defaults to [16,16]) —
The patch size of image patch embedding.
num_frames (int optional, defaults to 8) —
The maximum number of video frames.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image. Can be overridden by the resample parameter in the
preprocess method.
do_center_crop (bool, optional, defaults to True) —
Whether to center crop the image to the specified crop_size. Can be overridden by the do_center_crop
parameter in the preprocess method.
crop_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) —
Size of the image after applying the center crop. Can be overridden by the crop_size parameter in the
preprocess method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale
parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Defines the scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter
in the preprocess method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess
method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
Constructs a TVLT image processor.
This processor can be used to prepare either videos or images for the model by converting images to 1-frame videos.
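A minimal sketch of preparing a short video clip with the default settings (the random arrays are purely illustrative):
from transformers import TvltImageProcessor
import numpy as np
image_processor = TvltImageProcessor()
video = list(np.random.randn(8, 3, 224, 224))  # 8 frames of shape (num_channels, height, width)
encoding = image_processor(video, return_tensors="pt")
print(encoding["pixel_values"].shape)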
preprocess
(
videos: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: bool = None
size: typing.Dict[str, int] = None
patch_size: typing.List[int] = None
num_frames: int = None
resample: Resampling = None
do_center_crop: bool = None
crop_size: typing.Dict[str, int] = None
do_rescale: bool = None
rescale_factor: float = None
do_normalize: bool = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
is_mixed: bool = False
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
→
BatchFeature
Parameters
videos (ImageInput) —
Images or videos to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after applying resize.
patch_size (List[int] optional, defaults to self.patch_size) —
The patch size of image patch embedding.
num_frames (int optional, defaults to self.num_frames) —
The maximum number of video frames.
resample (PILImageResampling, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only
has an effect if do_resize is set to True.
do_center_crop (bool, optional, defaults to self.do_center_crop) —
Whether to center crop the image.
crop_size (Dict[str, int], optional, defaults to self.crop_size) —
Size of the image after applying the center crop.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values to the range [0, 1].
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation.
is_mixed (bool, optional) —
Whether the input video contains negative samples.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Unset: Use the inferred channel dimension format of the input image.
Returns
BatchFeature
A BatchFeature with the following fields:
pixel_values — Pixel values to be fed to a model, of shape (batch_size, num_channels, height,
width).
pixel_mask — Pixel masks to be fed to a model, of shape (batch_size, num_pixel_patches).
pixel_values_mixed — Pixel values with both positive and negative samples to be fed to a model, of shape
(batch_size, num_channels, height, width).
pixel_mask_mixed — Pixel masks with both positive and negative samples to be fed to a model, of shape
(batch_size, num_pixel_patches).
Preprocess a video or an image, or a batch of videos or images.
TvltFeatureExtractor
class transformers.TvltFeatureExtractor
(
spectrogram_length = 2048
num_channels = 1
patch_size = [16, 16]
feature_size = 128
sampling_rate = 44100
hop_length_to_sampling_rate = 86
n_fft = 2048
padding_value = 0.0
**kwargs
)
Parameters
spectrogram_length (int, optional, defaults to 2048) —
The time length of each audio spectrogram.
num_channels (int optional, defaults to 1) —
Number of audio channels.
patch_size (List[int] optional, defaults to [16, 16]) —
The patch size of audio patch embedding.
feature_size (int, defaults to 128) —
The frequency length of audio spectrogram.
sampling_rate (int, defaults to 44100) —
The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
hop_length_to_sampling_rate (int, defaults to 86) —
Ratio of the sampling rate to the hop length, where the hop length is the length of the overlapping windows
for the STFT used to obtain the Mel-frequency coefficients. For example, with a sampling rate of 44100 and a
hop length of 512, the ratio is 44100 / 512 ≈ 86.
n_fft (int, defaults to 2048) —
Size of the Fourier transform.
padding_value (float, optional, defaults to 0.0) —
Padding value used to pad the audio. Should correspond to silences.
Constructs a TVLT audio feature extractor. This feature extractor can be used to prepare audios for the model.
This feature extractor inherits from FeatureExtractionMixin which contains most of the main methods. Users
should refer to this superclass for more information regarding those methods.
__call__
(
raw_speech: typing.Union[numpy.ndarray, typing.List[float], typing.List[numpy.ndarray], typing.List[typing.List[float]]]
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
return_attention_mask: typing.Optional[bool] = True
sampling_rate: typing.Optional[int] = None
resample: bool = False
mask_audio: bool = False
**kwargs
)
→
BatchFeature
Parameters
raw_speech (np.ndarray, List[float], List[np.ndarray], List[List[float]]) —
The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float
values, a list of numpy arrays or a list of list of float values. Must be mono channel audio, not
stereo, i.e. single float per timestep.
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_attention_mask (bool, optional, defaults to True) —
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific feature extractor's default. What are attention masks?
For TVLT models, attention_mask should always be passed for batched inference, to avoid
subtle bugs.
sampling_rate (int, optional) —
The sampling rate at which the raw_speech input was sampled. It is strongly recommended to pass
sampling_rate when calling the feature extractor to prevent silent errors. The current model supports
sampling rates of 16000 and 44100.
resample (bool, optional, defaults to False) —
If the sampling rate is not matched, resample the input audio to match.
mask_audio (bool, optional, defaults to False) —
Whether or not to mask input audio for MAE task.
Returns
BatchFeature
A BatchFeature with the following fields:
audio_values — Audio values to be fed to a model, of shape (batch_size, num_channels, height,
width).
audio_mask — Audio masks to be fed to a model, of shape (batch_size, num_audio_patches).
Main method to prepare one or several audio clips for the model.
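A minimal sketch of preparing a single audio clip with the default settings (the random array is purely illustrative):
from transformers import TvltFeatureExtractor
import numpy as np
feature_extractor = TvltFeatureExtractor()
audio = list(np.random.randn(10000))
features = feature_extractor(audio, sampling_rate=44100, return_tensors="pt")
# audio_values holds the processed spectrogram, audio_mask marks which positions are real vs. padding
print(features["audio_values"].shape, features["audio_mask"].shape)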
TvltModel
class transformers.TvltModel
(
config
)
Parameters
config (TvltConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare TVLT Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch
documentation for all matters related to general usage and behavior.
forward
(
pixel_values
audio_values
pixel_mask = None
audio_mask = None
mask_pixel = False
mask_audio = False
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.tvlt.modeling_tvlt.TvltModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using TvltProcessor. See TvltProcessor.call() for
details.
audio_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Audio values. Audio values can be obtained using TvltProcessor. See TvltProcessor.call() for
details.
pixel_mask (torch.FloatTensor of shape (batch_size, num_pixel_patches)) —
Pixel masks. Pixel masks can be obtained using TvltProcessor. See TvltProcessor.call() for
details.
audio_mask (torch.FloatTensor of shape (batch_size, num_audio_patches)) —
Audio masks. Audio masks can be obtained using TvltProcessor. See TvltProcessor.call() for
details.
pixel_values_mixed (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) —
Pixel values that mix positive and negative samples in Tvlt vision-audio matching. Pixel values mixed can
be obtained using TvltProcessor. See TvltProcessor.call() for details.
pixel_mask_mixed (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel masks of pixel_values_mixed. Pixel masks mixed can be obtained using TvltProcessor. See
TvltProcessor.call() for details.
mask_pixel (bool, optional) —
Whether to mask pixel for MAE tasks. Only set to True in TvltForPreTraining.
mask_audio (bool, optional) —
Whether to mask audio for MAE tasks. Only set to True in TvltForPreTraining.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.tvlt.modeling_tvlt.TvltModelOutput or tuple(torch.FloatTensor)
A transformers.models.tvlt.modeling_tvlt.TvltModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (TvltConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
last_pixel_hidden_state (torch.FloatTensor of shape (batch_size, pixel_sequence_length, hidden_size)) — Pixel sequence of hidden-states at the output of the last layer of the model.
last_audio_hidden_state (torch.FloatTensor of shape (batch_size, audio_sequence_length, hidden_size)) — Audio sequence of hidden-states at the output of the last layer of the model.
pixel_label_masks (torch.FloatTensor of shape (batch_size, pixel_patch_length)) — Tensor indicating which pixel patches are masked (1) and which are not (0).
audio_label_masks (torch.FloatTensor of shape (batch_size, audio_patch_length)) — Tensor indicating which audio patches are masked (1) and which are not (0).
pixel_ids_restore (torch.LongTensor of shape (batch_size, pixel_patch_length)) — Tensor containing the ids permutation of pixel masking.
audio_ids_restore (torch.LongTensor of shape (batch_size, audio_patch_length)) — Tensor containing the ids permutation of audio masking.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings and one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The TvltModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import TvltProcessor, TvltModel
import numpy as np
import torch
num_frames = 8
images = list(np.random.randn(num_frames, 3, 224, 224))
audio = list(np.random.randn(10000))
processor = TvltProcessor.from_pretrained("ZinengTang/tvlt-base")
model = TvltModel.from_pretrained("ZinengTang/tvlt-base")
input_dict = processor(images, audio, sampling_rate=44100, return_tensors="pt")
outputs = model(**input_dict)
last_hidden_state = outputs.last_hidden_state
TvltForPreTraining
class transformers.TvltForPreTraining
(
config
)
Parameters
config (TvltConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The TVLT Model transformer with the decoder on top for self-supervised pre-training.
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch
documentation for all matters related to general usage and behavior.
forward
(
pixel_values
audio_values
pixel_mask = None
audio_mask = None
labels = None
pixel_values_mixed = None
pixel_mask_mixed = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
→
transformers.models.tvlt.modeling_tvlt.TvltForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using TvltProcessor. See TvltProcessor.call() for
details.
audio_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Audio values. Audio values can be obtained using TvltProcessor. See TvltProcessor.call() for
details.
pixel_mask (torch.FloatTensor of shape (batch_size, num_pixel_patches)) —
Pixel masks. Pixel masks can be obtained using TvltProcessor. See TvltProcessor.call() for
details.
audio_mask (torch.FloatTensor of shape (batch_size, num_audio_patches)) —
Audio masks. Audio masks can be obtained using TvltProcessor. See TvltProcessor.call() for
details.
pixel_values_mixed (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) —
Pixel values that mix positive and negative samples in Tvlt vision-audio matching. Pixel values mixed can
be obtained using TvltProcessor. See TvltProcessor.call() for details.
pixel_mask_mixed (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel masks of pixel_values_mixed. Pixel masks mixed can be obtained using TvltProcessor. See
TvltProcessor.call() for details.
mask_pixel (bool, optional) —
Whether to mask pixel for MAE tasks. Only set to True in TvltForPreTraining.
mask_audio (bool, optional) —
Whether to mask audio for MAE tasks. Only set to True in TvltForPreTraining.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, num_labels), optional) —
Labels for computing the vision audio matching loss. Indices should be in [0, 1]. num_labels has to be 1.
Returns
transformers.models.tvlt.modeling_tvlt.TvltForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.tvlt.modeling_tvlt.TvltForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (TvltConfig) and inputs.
loss (torch.FloatTensor of shape (1,)) — Pixel reconstruction loss.
matching_logits (torch.FloatTensor of shape (batch_size, 1)) — Matching objective logits.
pixel_logits (torch.FloatTensor of shape
(batch_size, pixel_patch_length, image_patch_size ** 3 * pixel_num_channels)): Pixel reconstruction
logits.
audio_logits (torch.FloatTensor of shape
(batch_size, audio_patch_length, image_patch_size[0] * image_patch_size[1])): Audio reconstruction
logits.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings and one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer
plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The TvltForPreTraining forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import TvltProcessor, TvltForPreTraining
import numpy as np
import torch
num_frames = 8
images = list(np.random.randn(num_frames, 3, 224, 224))
images_mixed = list(np.random.randn(num_frames, 3, 224, 224))
audio = list(np.random.randn(10000))
processor = TvltProcessor.from_pretrained("ZinengTang/tvlt-base")
model = TvltForPreTraining.from_pretrained("ZinengTang/tvlt-base")
input_dict = processor(
... images, audio, images_mixed, sampling_rate=44100, mask_pixel=True, mask_audio=True, return_tensors="pt"
... )
outputs = model(**input_dict)
loss = outputs.loss
TvltForAudioVisualClassification
class transformers.TvltForAudioVisualClassification
(
config
)
Parameters
config (TvltConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Tvlt Model transformer with a classifier head on top (an MLP on top of the final hidden state of the [CLS] token)
for audiovisual classification tasks, e.g. CMU-MOSEI Sentiment Analysis and Audio to Video Retrieval.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values
audio_values
pixel_mask = None
audio_mask = None
output_attentions = None
output_hidden_states = None
return_dict = None
labels = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using TvltProcessor. See TvltProcessor.call() for
details.
audio_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Audio values. Audio values can be obtained using TvltProcessor. See TvltProcessor.call() for
details.
pixel_mask (torch.FloatTensor of shape (batch_size, num_pixel_patches)) —
Pixel masks. Pixel masks can be obtained using TvltProcessor. See TvltProcessor.call() for
details.
audio_mask (torch.FloatTensor of shape (batch_size, num_audio_patches)) —
Audio masks. Audio masks can be obtained using TvltProcessor. See TvltProcessor.call() for
details.
pixel_values_mixed (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) —
Pixel values that mix positive and negative samples in Tvlt vision-audio matching. Pixel values mixed can
be obtained using TvltProcessor. See TvltProcessor.call() for details.
pixel_mask_mixed (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel masks of pixel_values_mixed. Pixel masks mixed can be obtained using TvltProcessor. See
TvltProcessor.call() for details.
mask_pixel (bool, optional) —
Whether to mask pixel for MAE tasks. Only set to True in TvltForPreTraining.
mask_audio (bool, optional) —
Whether to mask audio for MAE tasks. Only set to True in TvltForPreTraining.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, num_labels), optional) —
Labels for computing the audiovisual loss. Indices should be in [0, ..., num_classes-1] where num_classes
refers to the number of classes in audiovisual tasks.
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (TvltConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TvltForAudioVisualClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import TvltProcessor, TvltForAudioVisualClassification
import numpy as np
import torch
num_frames = 8
images = list(np.random.randn(num_frames, 3, 224, 224))
audio = list(np.random.randn(10000))
processor = TvltProcessor.from_pretrained("ZinengTang/tvlt-base")
model = TvltForAudioVisualClassification.from_pretrained("ZinengTang/tvlt-base")
input_dict = processor(images, audio, sampling_rate=44100, return_tensors="pt")
outputs = model(**input_dict)
loss = outputs.loss
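Note that the snippet above does not pass labels, so outputs.loss is None; the classification scores are still available as logits (see the Returns section above), from which a predicted class index can be derived once the head has been fine-tuned:
logits = outputs.logits
predicted_class_id = int(logits.argmax(-1).item())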
BertGeneration
Overview
The BertGeneration model is a BERT model that can be leveraged for sequence-to-sequence tasks using
EncoderDecoderModel as proposed in Leveraging Pre-trained Checkpoints for Sequence Generation
Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
The abstract from the paper is the following:
Unsupervised pretraining of large neural models has recently revolutionized Natural Language Processing. By
warm-starting from the publicly released checkpoints, NLP practitioners have pushed the state-of-the-art on multiple
benchmarks while saving significant amounts of compute time. So far the focus has been mainly on the Natural Language
Understanding tasks. In this paper, we demonstrate the efficacy of pre-trained checkpoints for Sequence Generation. We
developed a Transformer-based sequence-to-sequence model that is compatible with publicly available pre-trained BERT,
GPT-2 and RoBERTa checkpoints and conducted an extensive empirical study on the utility of initializing our model, both
encoder and decoder, with these checkpoints. Our models result in new state-of-the-art results on Machine Translation,
Text Summarization, Sentence Splitting, and Sentence Fusion.
Usage:
The model can be used in combination with the EncoderDecoderModel to leverage two pretrained
BERT checkpoints for subsequent fine-tuning.
from transformers import BertGenerationDecoder, BertGenerationEncoder, BertTokenizer, EncoderDecoderModel
# leverage checkpoints for Bert2Bert model...
# use BERT's cls token as BOS token and sep token as EOS token
encoder = BertGenerationEncoder.from_pretrained("bert-large-uncased", bos_token_id=101, eos_token_id=102)
# add cross attention layers and use BERT's cls token as BOS token and sep token as EOS token
decoder = BertGenerationDecoder.from_pretrained(
... "bert-large-uncased", add_cross_attention=True, is_decoder=True, bos_token_id=101, eos_token_id=102
... )
bert2bert = EncoderDecoderModel(encoder=encoder, decoder=decoder)
# create tokenizer...
tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
input_ids = tokenizer(
... "This is a long article to summarize", add_special_tokens=False, return_tensors="pt"
... ).input_ids
labels = tokenizer("This is a short summary", return_tensors="pt").input_ids
# train...
loss = bert2bert(input_ids=input_ids, decoder_input_ids=labels, labels=labels).loss
loss.backward()
Pretrained EncoderDecoderModel checkpoints are also directly available on the model hub, e.g.:
from transformers import AutoTokenizer, EncoderDecoderModel
# instantiate sentence fusion model
sentence_fuser = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_discofuse")
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")
input_ids = tokenizer(
... "This is the first sentence. This is the second sentence.", add_special_tokens=False, return_tensors="pt"
... ).input_ids
outputs = sentence_fuser.generate(input_ids)
print(tokenizer.decode(outputs[0]))
Tips:
BertGenerationEncoder and BertGenerationDecoder should be used in
combination with EncoderDecoderModel.
For summarization, sentence splitting, sentence fusion and translation, no special tokens are required for the input.
Therefore, no EOS token should be added to the end of the input.
This model was contributed by patrickvonplaten. The original code can be
found here.
BertGenerationConfig
class transformers.BertGenerationConfig
(
vocab_size = 50358
hidden_size = 1024
num_hidden_layers = 24
num_attention_heads = 16
intermediate_size = 4096
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 0
bos_token_id = 2
eos_token_id = 1
position_embedding_type = 'absolute'
use_cache = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50358) —
Vocabulary size of the BERT model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling BertGeneration.
hidden_size (int, optional, defaults to 1024) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 24) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 4096) —
Dimensionality of the “intermediate” (often called feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
This is the configuration class to store the configuration of a BertGenerationPreTrainedModel. It is used to
instantiate a BertGeneration model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the BertGeneration
google/bert_for_seq_generation_L-24_bbc_encoder
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import BertGenerationConfig, BertGenerationEncoder
# Initializing a BertGeneration config
configuration = BertGenerationConfig()
# Initializing a model (with random weights) from the config
model = BertGenerationEncoder(configuration)
# Accessing the model configuration
configuration = model.config
BertGenerationTokenizer
class transformers.BertGenerationTokenizer
(
vocab_file
bos_token = '<s>'
eos_token = '</s>'
unk_token = '<unk>'
pad_token = '<pad>'
sep_token = '<::::>'
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
bos_token (str, optional, defaults to "<s>") —
The begin of sequence token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
Construct a BertGeneration tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
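As a minimal sketch (assuming the SentencePiece vocabulary shipped with the google/bert_for_seq_generation_L-24_bbc_encoder checkpoint used elsewhere on this page), the tokenizer is loaded and applied like any other slow tokenizer:
from transformers import BertGenerationTokenizer
tokenizer = BertGenerationTokenizer.from_pretrained("google/bert_for_seq_generation_L-24_bbc_encoder")
input_ids = tokenizer("This is a long article to summarize", add_special_tokens=False, return_tensors="pt").input_ids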
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
BertGenerationEncoder
class transformers.BertGenerationEncoder
(
config
)
Parameters
config (BertGenerationConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare BertGeneration model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention is
all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
This model should be used when leveraging Bert or Roberta checkpoints for the EncoderDecoderModel class as
described in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks
by Sascha Rothe, Shashi Narayan, and Aliaksei Severyn.
To behave as a decoder, the model needs to be initialized with the is_decoder argument of the configuration set
to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and
add_cross_attention set to True; encoder_hidden_states is then expected as an input to the forward pass.
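For illustration, a minimal sketch of this decoder-style setup with a small, randomly initialized configuration (the sizes below are arbitrary, so the outputs are not meaningful) could look like:
import torch
from transformers import BertGenerationConfig, BertGenerationEncoder
config = BertGenerationConfig(hidden_size=64, num_hidden_layers=2, num_attention_heads=2, intermediate_size=128)
config.is_decoder = True
config.add_cross_attention = True
model = BertGenerationEncoder(config)
input_ids = torch.tensor([[101, 2023, 2003, 1037, 3231, 102]])  # toy token ids
encoder_hidden_states = torch.randn(1, 6, config.hidden_size)  # stand-in for real encoder outputs
outputs = model(input_ids=input_ids, encoder_hidden_states=encoder_hidden_states)
last_hidden_state = outputs.last_hidden_state  # shape (1, 6, 64)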
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for
tokens that are NOT MASKED, 0 for MASKED tokens.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertGenerationConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The BertGenerationEncoder forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BertGenerationEncoder
import torch
tokenizer = AutoTokenizer.from_pretrained("google/bert_for_seq_generation_L-24_bbc_encoder")
model = BertGenerationEncoder.from_pretrained("google/bert_for_seq_generation_L-24_bbc_encoder")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
BertGenerationDecoder
class transformers.BertGenerationDecoder
(
config
)
Parameters
config (BertGenerationConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BertGeneration Model with a language modeling head on top for CLM fine-tuning.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are
ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BertGenerationConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The BertGenerationDecoder forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BertGenerationDecoder, BertGenerationConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("google/bert_for_seq_generation_L-24_bbc_encoder")
config = BertGenerationConfig.from_pretrained("google/bert_for_seq_generation_L-24_bbc_encoder")
config.is_decoder = True
model = BertGenerationDecoder.from_pretrained(
... "google/bert_for_seq_generation_L-24_bbc_encoder", config=config
... )
inputs = tokenizer("Hello, my dog is cute", return_token_type_ids=False, return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
GPT-Sw3
Overview
The GPT-Sw3 model was first proposed in
Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish
by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman,
Fredrik Carlsson, Magnus Sahlgren.
Since that first paper, the authors have extended their work and trained new models on their new 1.2TB corpus named The Nordic Pile.
GPT-Sw3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden
in collaboration with RISE and the WASP WARA for Media and Language. GPT-Sw3 has been trained on a dataset containing
320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a
causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.
This model was contributed by AI Sweden.
The implementation uses the GPT2Model coupled
with our GPTSw3Tokenizer. This means that AutoTokenizer and AutoModelForCausalLM map to our tokenizer
implementation and the corresponding GPT2 model implementation, respectively.
Note that sentencepiece is required to use our tokenizer and can be installed with: pip install transformers[sentencepiece] or pip install sentencepiece
Example usage:
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("AI-Sweden/gpt-sw3-356m")
model = AutoModelForCausalLM.from_pretrained("AI-Sweden/gpt-sw3-356m")
input_ids = tokenizer("Träd är fina för att", return_tensors="pt")["input_ids"]
generated_token_ids = model.generate(inputs=input_ids, max_new_tokens=10, do_sample=True)[0]
print(tokenizer.decode(generated_token_ids))
Träd är fina för att de är färgstarka. Men ibland är det fint
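The same checkpoint can also be driven through the text-generation pipeline; this is only a sketch and, with sampling enabled, the continuation will differ between runs:
from transformers import pipeline
generator = pipeline("text-generation", model="AI-Sweden/gpt-sw3-356m")
generator("Träd är fina för att", max_new_tokens=10, do_sample=True)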
Documentation resources
Text classification task guide
Token classification task guide
Causal language modeling task guide
GPTSw3Tokenizer
class transformers.GPTSw3Tokenizer
(
vocab_file
do_lower_case = False
remove_space = False
keep_accents = False
pad_token = None
unk_token = None
eos_token = None
bos_token = None
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
do_lower_case (bool, optional, defaults to False) —
Whether or not to lowercase the input when tokenizing.
remove_space (bool, optional, defaults to False) —
Whether or not to strip the text when tokenizing (removing excess spaces before and after the string).
keep_accents (bool, optional, defaults to False) —
Whether or not to keep accents when tokenizing.
bos_token (str, optional) —
The beginning of sequence token that can be used for a downstream task; it was not seen during pretraining. If
not provided, will default to '<s>' or '<|endoftext|>', depending on model size.
eos_token (str, optional) —
The end of sequence token seen during pretraining. If not provided, will default to '<|endoftext|>'.
unk_token (str, optional) —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead. If not provided, will default to '<unk>'.
pad_token (str, optional) —
The token used for padding, for example when batching sequences of different lengths. If not provided, will
default to '<pad>' or '<unk>' depending on model size.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from all hypotheses (lattice)
using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
sp_model (SentencePieceProcessor) —
The SentencePiece processor that is used for every conversion (string, tokens and IDs).
whitespaces (set) —
The whitespaces that are replaced in the whitespace normalization in preprocessing.
non_printing_characters_re (Pattern) —
The compiled regular expression to remove non-printing characters in preprocessing.
Construct a GPTSw3 tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Example usage:
from transformers import GPTSw3Tokenizer
tokenizer = GPTSw3Tokenizer.from_pretrained("AI-Sweden/gpt-sw3-126m")
tokenizer("Svenska är kul!")["input_ids"]
[1814, 377, 3617, 63504]
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
Decision Transformer
Overview
The Decision Transformer model was proposed in Decision Transformer: Reinforcement Learning via Sequence Modeling
by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
The abstract from the paper is the following:
We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem.
This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances
in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that
casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or
compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked
Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our
Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity,
Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on
Atari, OpenAI Gym, and Key-to-Door tasks.
Tips:
This version of the model is for tasks where the state is a vector; image-based states will come soon.
This model was contributed by edbeeching. The original code can be found here.
DecisionTransformerConfig
class transformers.DecisionTransformerConfig
(
state_dim = 17
act_dim = 4
hidden_size = 128
max_ep_len = 4096
action_tanh = True
vocab_size = 1
n_positions = 1024
n_layer = 3
n_head = 1
n_inner = None
activation_function = 'relu'
resid_pdrop = 0.1
embd_pdrop = 0.1
attn_pdrop = 0.1
layer_norm_epsilon = 1e-05
initializer_range = 0.02
scale_attn_weights = True
use_cache = True
bos_token_id = 50256
eos_token_id = 50256
scale_attn_by_inverse_layer_idx = False
reorder_and_upcast_attn = False
**kwargs
)
Parameters
state_dim (int, optional, defaults to 17) —
The state size for the RL environment
act_dim (int, optional, defaults to 4) —
The size of the output action space
hidden_size (int, optional, defaults to 128) —
The size of the hidden layers
max_ep_len (int, optional, defaults to 4096) —
The maximum length of an episode in the environment
action_tanh (bool, optional, defaults to True) —
Whether to use a tanh activation on action prediction
vocab_size (int, optional, defaults to 1) —
Vocabulary size of the GPT-2 model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling DecisionTransformerModel.
n_positions (int, optional, defaults to 1024) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
n_layer (int, optional, defaults to 3) —
Number of hidden layers in the Transformer encoder.
n_head (int, optional, defaults to 1) —
Number of attention heads for each attention layer in the Transformer encoder.
n_inner (int, optional) —
Dimensionality of the inner feed-forward layers. If unset, will default to 4 times n_embd.
activation_function (str, optional, defaults to "relu") —
Activation function, to be selected in the list ["relu", "silu", "gelu", "tanh", "gelu_new"].
resid_pdrop (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
embd_pdrop (int, optional, defaults to 0.1) —
The dropout ratio for the embeddings.
attn_pdrop (float, optional, defaults to 0.1) —
The dropout ratio for the attention.
layer_norm_epsilon (float, optional, defaults to 1e-5) —
The epsilon to use in the layer normalization layers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
scale_attn_weights (bool, optional, defaults to True) —
Scale attention weights by dividing by sqrt(hidden_size).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
scale_attn_by_inverse_layer_idx (bool, optional, defaults to False) —
Whether to additionally scale attention weights by 1 / (layer_idx + 1).
reorder_and_upcast_attn (bool, optional, defaults to False) —
Whether to scale keys (K) prior to computing attention (dot-product) and to upcast the attention
dot-product/softmax to float32 when training with mixed precision.
This is the configuration class to store the configuration of a DecisionTransformerModel. It is used to
instantiate a Decision Transformer model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the standard
DecisionTransformer architecture. Many of the config options are used to instantiate the GPT2 model that is used as
part of the architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import DecisionTransformerConfig, DecisionTransformerModel
# Initializing a DecisionTransformer configuration
configuration = DecisionTransformerConfig()
# Initializing a model (with random weights) from the configuration
model = DecisionTransformerModel(configuration)
# Accessing the model configuration
configuration = model.config
DecisionTransformerGPT2Model
class transformers.DecisionTransformerGPT2Model
(
config
)
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.Tensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
DecisionTransformerModel
class transformers.DecisionTransformerModel
(
config
)
Parameters
config (DecisionTransformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The Decision Transformer Model
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
The model builds upon the GPT2 architecture to perform autoregressive prediction of actions in an offline RL
setting. Refer to the paper for more details: https://arxiv.org/abs/2106.01345
forward
(
states = None
actions = None
rewards = None
returns_to_go = None
timesteps = None
attention_mask = None
output_hidden_states = None
output_attentions = None
return_dict = None
)
→
transformers.models.decision_transformer.modeling_decision_transformer.DecisionTransformerOutput or tuple(torch.FloatTensor)
Parameters
states (torch.FloatTensor of shape (batch_size, episode_length, state_dim)) —
The states for each step in the trajectory
actions (torch.FloatTensor of shape (batch_size, episode_length, act_dim)) —
The actions taken by the “expert” policy for the current state; these are masked for autoregressive
prediction.
rewards (torch.FloatTensor of shape (batch_size, episode_length, 1)) —
The rewards for each state, action
returns_to_go (torch.FloatTensor of shape (batch_size, episode_length, 1)) —
The returns for each state in the trajectory
timesteps (torch.LongTensor of shape (batch_size, episode_length)) —
The timestep for each step in the trajectory
attention_mask (torch.LongTensor of shape (batch_size, episode_length)) —
Masking, used to mask the actions when performing autoregressive prediction
Returns
transformers.models.decision_transformer.modeling_decision_transformer.DecisionTransformerOutput or tuple(torch.FloatTensor)
A transformers.models.decision_transformer.modeling_decision_transformer.DecisionTransformerOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (DecisionTransformerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
state_preds (torch.FloatTensor of shape (batch_size, sequence_length, state_dim)) — Environment state predictions
action_preds (torch.FloatTensor of shape (batch_size, sequence_length, action_dim)) — Model action predictions
return_preds (torch.FloatTensor of shape (batch_size, sequence_length, 1)) — Predicted returns for each state
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The DecisionTransformerModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
import gym
import torch
from transformers import DecisionTransformerModel
model = DecisionTransformerModel.from_pretrained("edbeeching/decision-transformer-gym-hopper-medium")
# evaluation
device = "cpu"  # or "cuda" if a GPU is available
TARGET_RETURN = 3600  # return to condition generation on (value chosen for illustration)
model = model.to(device)
model.eval()
env = gym.make("Hopper-v3")
state_dim = env.observation_space.shape[0]
act_dim = env.action_space.shape[0]
state = env.reset()
states = torch.from_numpy(state).reshape(1, 1, state_dim).to(device=device, dtype=torch.float32)
actions = torch.zeros((1, 1, act_dim), device=device, dtype=torch.float32)
rewards = torch.zeros(1, 1, device=device, dtype=torch.float32)
target_return = torch.tensor(TARGET_RETURN, dtype=torch.float32).reshape(1, 1)
timesteps = torch.tensor(0, device=device, dtype=torch.long).reshape(1, 1)
attention_mask = torch.zeros(1, 1, device=device, dtype=torch.float32)
# forward pass
with torch.no_grad():
... state_preds, action_preds, return_preds = model(
... states=states,
... actions=actions,
... rewards=rewards,
... returns_to_go=target_return,
... timesteps=timesteps,
... attention_mask=attention_mask,
... return_dict=False,
... )
ConvNeXT
Overview
The ConvNeXT model was proposed in A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them.
The abstract from the paper is the following:
The “Roaring 20s” of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model.
A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical Transformers
(e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide
variety of vision tasks. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than the inherent inductive
biases of convolutions. In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually “modernize” a standard ResNet toward the design
of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models
dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy
and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets.
Tips:
See the code examples below each model regarding usage.
ConvNeXT architecture. Taken from the original paper.
This model was contributed by nielsr. TensorFlow version of the model was contributed by ariG23498,
gante, and sayakpaul (equal contribution). The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXT.
Image Classification
ConvNextForImageClassification is supported by this example script and notebook.
See also: Image classification task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ConvNextConfig
class transformers.ConvNextConfig
(
num_channels = 3
patch_size = 4
num_stages = 4
hidden_sizes = None
depths = None
hidden_act = 'gelu'
initializer_range = 0.02
layer_norm_eps = 1e-12
layer_scale_init_value = 1e-06
drop_path_rate = 0.0
image_size = 224
out_features = None
out_indices = None
**kwargs
)
Parameters
num_channels (int, optional, defaults to 3) —
The number of input channels.
patch_size (int, optional, defaults to 4) —
Patch size to use in the patch embedding layer.
num_stages (int, optional, defaults to 4) —
The number of stages in the model.
hidden_sizes (List[int], optional, defaults to [96, 192, 384, 768]) —
Dimensionality (hidden size) at each stage.
depths (List[int], optional, defaults to [3, 3, 9, 3]) —
Depth (number of blocks) for each stage.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in each block. If string, "gelu", "relu",
"selu" and "gelu_new" are supported.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
layer_scale_init_value (float, optional, defaults to 1e-6) —
The initial value for the layer scale.
drop_path_rate (float, optional, defaults to 0.0) —
The drop rate for stochastic depth.
out_features (List[str], optional) —
If used as backbone, list of features to output. Can be any of "stem", "stage1", "stage2", etc.
(depending on how many stages the model has). If unset and out_indices is set, will default to the
corresponding stages. If unset and out_indices is unset, will default to the last stage.
out_indices (List[int], optional) —
If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
many stages the model has). If unset and out_features is set, will default to the corresponding stages.
If unset and out_features is unset, will default to the last stage.
This is the configuration class to store the configuration of a ConvNextModel. It is used to instantiate a
ConvNeXT model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the ConvNeXT
facebook/convnext-tiny-224 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import ConvNextConfig, ConvNextModel
# Initializing a ConvNext convnext-tiny-224 style configuration
configuration = ConvNextConfig()
# Initializing a model (with random weights) from the convnext-tiny-224 style configuration
model = ConvNextModel(configuration)
# Accessing the model configuration
configuration = model.config
ConvNextFeatureExtractor
class transformers.ConvNextFeatureExtractor
(
*args
**kwargs
)
ConvNextImageProcessor
class transformers.ConvNextImageProcessor
(
do_resize: bool = True
size: typing.Dict[str, int] = None
crop_pct: float = None
resample: Resampling = <Resampling.BILINEAR: 2>
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
**kwargs
)
Parameters
do_resize (bool, optional, defaults to True) —
Controls whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden
by do_resize in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 384}) —
Resolution of the output image after resize is applied. If size["shortest_edge"] >= 384, the image is
resized to (size["shortest_edge"], size["shortest_edge"]). Otherwise, the smaller edge of the image will
be matched to int(size["shortest_edge"] / crop_pct), after which the image is cropped to
(size["shortest_edge"], size["shortest_edge"]). Only has an effect if do_resize is set to True. Can
be overridden by size in the preprocess method.
crop_pct (float, optional, defaults to 224 / 256) —
Percentage of the image to crop. Only has an effect if do_resize is True and size < 384. Can be
overridden by crop_pct in the preprocess method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image. Can be overridden by resample in the preprocess method.
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by do_rescale in
the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by rescale_factor in the preprocess
method.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess
method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) —
Mean to use if normalizing the image. This is a float or list of floats the length of the number of
channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) —
Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
number of channels in the image. Can be overridden by the image_std parameter in the preprocess method.
Constructs a ConvNeXT image processor.
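As a short usage sketch (assuming the facebook/convnext-tiny-224 checkpoint referenced above, with a random array standing in for a real image), the image processor turns a raw image into normalized pixel_values ready for the model:
import numpy as np
from transformers import ConvNextImageProcessor
image_processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-tiny-224")
image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # dummy RGB image in place of a real one
inputs = image_processor(images=image, return_tensors="pt")
inputs["pixel_values"].shape  # torch.Size([1, 3, 224, 224]) with this checkpoint's default size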
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_resize: bool = None
size: typing.Dict[str, int] = None
crop_pct: float = None
resample: Resampling = None
do_rescale: bool = None
rescale_factor: float = None
do_normalize: bool = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the output image after resize has been applied. If size["shortest_edge"] >= 384, the image
is resized to (size["shortest_edge"], size["shortest_edge"]). Otherwise, the smaller edge of the
image will be matched to int(size["shortest_edge"]/ crop_pct), after which the image is cropped to
(size["shortest_edge"], size["shortest_edge"]). Only has an effect if do_resize is set to True.
crop_pct (float, optional, defaults to self.crop_pct) —
Percentage of the image to crop if size < 384.
resample (int, optional, defaults to self.resample) —
Resampling filter to use if resizing the image. This can be one of the PILImageResampling filters. Only
has an effect if do_resize is set to True.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values to the range [0, 1].
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Image mean.
image_std (float or List[float], optional, defaults to self.image_std) —
Image standard deviation.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
ChannelDimension.FIRST: image in (num_channels, height, width) format.
ChannelDimension.LAST: image in (height, width, num_channels) format.
Preprocess an image or batch of images.
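The interaction between size["shortest_edge"] and crop_pct described above can be summarized with a small helper; this is an illustrative sketch of the documented rule, not library code.
def convnext_resize_plan(shortest_edge: int, crop_pct: float = 224 / 256) -> dict:
    """Sketch of the ConvNeXT resize rule: resize directly for >= 384, otherwise resize then center-crop."""
    if shortest_edge >= 384:
        return {"resize_shortest_edge_to": shortest_edge, "crop_to": shortest_edge}
    return {"resize_shortest_edge_to": int(shortest_edge / crop_pct), "crop_to": shortest_edge}

print(convnext_resize_plan(224))  # {'resize_shortest_edge_to': 256, 'crop_to': 224}
print(convnext_resize_plan(384))  # {'resize_shortest_edge_to': 384, 'crop_to': 384}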
ConvNextModel
class transformers.ConvNextModel
(
config
)
Parameters
config (ConvNextConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ConvNext model outputting raw features without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: FloatTensor = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ConvNextImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ConvNextConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, num_channels, height, width).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The ConvNextModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, ConvNextModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224")
model = ConvNextModel.from_pretrained("facebook/convnext-tiny-224")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 768, 7, 7]
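To also inspect the intermediate feature maps, the same forward pass can be repeated with output_hidden_states=True (an illustrative continuation of the example above):
with torch.no_grad():
...     outputs = model(**inputs, output_hidden_states=True)
print(len(outputs.hidden_states))  # embedding output plus one feature map per stage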
ConvNextForImageClassification
class transformers.ConvNextForImageClassification
(
config
)
Parameters
config (ConvNextConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ConvNext Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: FloatTensor = None
labels: typing.Optional[torch.LongTensor] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ConvNextImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (ConvNextConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also
called feature maps) of the model at the output of each stage.
The ConvNextForImageClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, ConvNextForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tabby, tabby cat
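If class probabilities are needed rather than just the top prediction, a softmax over the logits can be added to the example above (illustrative continuation):
probabilities = logits.softmax(-1)
top5 = probabilities.topk(5)
for score, idx in zip(top5.values[0], top5.indices[0]):
...     print(f"{model.config.id2label[idx.item()]}: {score.item():.3f}")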
TFConvNextModel
class transformers.TFConvNextModel
(
*args
**kwargs
)
Parameters
config (ConvNextConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare ConvNext model outputting raw features without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with pixel_values only and nothing else: model(pixel_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([pixel_values, attention_mask]) or model([pixel_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
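For illustration, here is a hedged sketch of the three equivalent call styles described above, using a randomly generated pixel_values tensor; ConvNeXT only takes pixel_values, so the list and the dictionary each hold a single entry.
import tensorflow as tf
from transformers import TFConvNextModel

model = TFConvNextModel.from_pretrained("facebook/convnext-tiny-224")
pixel_values = tf.random.uniform((1, 3, 224, 224))

outputs = model(pixel_values)                        # a single tensor
outputs = model([pixel_values])                      # a list in the first positional argument
outputs = model({"pixel_values": pixel_values})      # a dictionary keyed by input name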
call
(
pixel_values: TFModelInputType | None = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray]; each example must have the shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ConvNextImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ConvNextConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input, you’re often better with
averaging or pooling the sequence of hidden-states for the whole input sequence.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFConvNextModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, TFConvNextModel
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224")
model = TFConvNextModel.from_pretrained("facebook/convnext-tiny-224")
inputs = image_processor(images=image, return_tensors="tf")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
TFConvNextForImageClassification
class transformers.TFConvNextForImageClassification
(
*args
**kwargs
)
Parameters
config (ConvNextConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
ConvNext Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for
ImageNet.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with pixel_values only and nothing else: model(pixel_values)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([pixel_values, attention_mask]) or model([pixel_values, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"pixel_values": pixel_values, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
pixel_values: TFModelInputType | None = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray]; each example must have the shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
ConvNextImageProcessor.call() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (ConvNextConfig) and inputs.
loss (tf.Tensor of shape (batch_size,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFConvNextForImageClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, TFConvNextForImageClassification
import tensorflow as tf
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224")
model = TFConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224")
inputs = image_processor(images=image, return_tensors="tf")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = tf.math.argmax(logits, axis=-1)[0]
print("Predicted class:", model.config.id2label[int(predicted_class_idx)])
Swin2SR
Overview
The Swin2SR model was proposed in Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte.
Swin2SR improves the SwinIR model by incorporating Swin Transformer v2 layers, which mitigates issues such as training instability, resolution gaps between pre-training
and fine-tuning, and data hunger.
The abstract from the paper is the following:
Compression plays an important role on the efficient transmission and storage of images and videos through band-limited systems such as streaming services, virtual reality or videogames. However, compression unavoidably leads to artifacts and the loss of the original information, which may severely degrade the visual quality. For these reasons, quality enhancement of compressed images has become a popular research topic. While most state-of-the-art image restoration methods are based on convolutional neural networks, other transformers-based methods such as SwinIR, show impressive performance on these tasks.
In this paper, we explore the novel Swin Transformer V2, to improve SwinIR for image super-resolution, and in particular, the compressed input scenario. Using this method we can tackle the major issues in training transformer vision models, such as training instability, resolution gaps between pre-training and fine-tuning, and hunger on data. We conduct experiments on three representative tasks: JPEG compression artifacts removal, image super-resolution (classical and lightweight), and compressed image super-resolution. Experimental results demonstrate that our method, Swin2SR, can improve the training convergence and performance of SwinIR, and is a top-5 solution at the “AIM 2022 Challenge on Super-Resolution of Compressed Image and Video”.
Swin2SR architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Resources
Demo notebooks for Swin2SR can be found here.
A demo Space for image super-resolution with Swin2SR can be found here.
Swin2SRImageProcessor
class transformers.Swin2SRImageProcessor
(
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_pad: bool = True
pad_size: int = 8
**kwargs
)
Parameters
do_rescale (bool, optional, defaults to True) —
Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale
parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the
preprocess method.
Constructs a Swin2SR image processor.
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Optional[float] = None
do_pad: typing.Optional[bool] = None
pad_size: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image values to the range [0, 1].
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to rescale the image by if do_rescale is set to True.
do_pad (bool, optional, defaults to True) —
Whether to pad the image to make the height and width divisible by window_size.
pad_size (int, optional, defaults to 8) —
The size of the sliding window for the local attention.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) —
The channel dimension format for the output image. Can be one of:
"channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
"channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
Unset: Use the channel dimension format of the input image.
Preprocess an image or batch of images.
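A small sketch of the padding behaviour: an input whose height and width are not multiples of pad_size comes out padded up to the next multiple (the dummy image below is only for illustration).
from transformers import Swin2SRImageProcessor
import numpy as np

image = np.random.randint(0, 256, (123, 205, 3), dtype=np.uint8)
processor = Swin2SRImageProcessor()  # do_pad=True, pad_size=8 by default
inputs = processor(image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # expected: torch.Size([1, 3, 128, 208])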
Swin2SRConfig
class transformers.Swin2SRConfig
(
image_size = 64
patch_size = 1
num_channels = 3
embed_dim = 180
depths = [6, 6, 6, 6, 6, 6]
num_heads = [6, 6, 6, 6, 6, 6]
window_size = 8
mlp_ratio = 2.0
qkv_bias = True
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
drop_path_rate = 0.1
hidden_act = 'gelu'
use_absolute_embeddings = False
initializer_range = 0.02
layer_norm_eps = 1e-05
upscale = 2
img_range = 1.0
resi_connection = '1conv'
upsampler = 'pixelshuffle'
**kwargs
)
Parameters
image_size (int, optional, defaults to 64) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 1) —
The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) —
The number of input channels.
embed_dim (int, optional, defaults to 180) —
Dimensionality of patch embedding.
depths (list(int), optional, defaults to [6, 6, 6, 6, 6, 6]) —
Depth of each layer in the Transformer encoder.
num_heads (list(int), optional, defaults to [6, 6, 6, 6, 6, 6]) —
Number of attention heads in each layer of the Transformer encoder.
window_size (int, optional, defaults to 8) —
Size of windows.
mlp_ratio (float, optional, defaults to 2.0) —
Ratio of MLP hidden dimensionality to embedding dimensionality.
qkv_bias (bool, optional, defaults to True) —
Whether or not a learnable bias should be added to the queries, keys and values.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings and encoder.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
drop_path_rate (float, optional, defaults to 0.1) —
Stochastic depth rate.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder. If string, "gelu", "relu",
"selu" and "gelu_new" are supported.
use_absolute_embeddings (bool, optional, defaults to False) —
Whether or not to add absolute position embeddings to the patch embeddings.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
upscale (int, optional, defaults to 2) —
The upscale factor for the image: 2, 3, 4 or 8 for image super-resolution, 1 for denoising and compression
artifact reduction.
img_range (float, optional, defaults to 1.0) —
The range of the values of the input image.
resi_connection (str, optional, defaults to "1conv") —
The convolutional block to use before the residual connection in each stage.
upsampler (str, optional, defaults to "pixelshuffle") —
The reconstruction module. Can be ‘pixelshuffle’, ‘pixelshuffledirect’, ‘nearest+conv’ or None.
This is the configuration class to store the configuration of a Swin2SRModel. It is used to instantiate a Swin
Transformer v2 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Swin Transformer v2
caidas/swin2sr-classicalsr-x2-64 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import Swin2SRConfig, Swin2SRModel
# Initializing a Swin2SR caidas/swin2sr-classicalsr-x2-64 style configuration
configuration = Swin2SRConfig()
# Initializing a model (with random weights) from the caidas/swin2sr-classicalsr-x2-64 style configuration
model = Swin2SRModel(configuration)
# Accessing the model configuration
configuration = model.config
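As a variation on the example above, a configuration for a 4x upscaler can be created by overriding upscale (the resulting model is randomly initialized; this is only a sketch):
from transformers import Swin2SRConfig, Swin2SRForImageSuperResolution

configuration = Swin2SRConfig(upscale=4)
model = Swin2SRForImageSuperResolution(configuration)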
Swin2SRModel
class transformers.Swin2SRModel
(
config
)
Parameters
config (Swin2SRConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Swin2SR Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values
head_mask: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
Swin2SRImageProcessor.call() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Swin2SRConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Swin2SRModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, Swin2SRModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("caidas/swin2SR-classical-sr-x2-64")
model = Swin2SRModel.from_pretrained("caidas/swin2SR-classical-sr-x2-64")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 180, 488, 648]
Swin2SRForImageSuperResolution
class transformers.Swin2SRForImageSuperResolution
(
config
)
Parameters
config (Swin2SRConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Swin2SR Model transformer with an upsampler head on top for image super resolution and restoration.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.ImageSuperResolutionOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
Swin2SRImageProcessor.call() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.ImageSuperResolutionOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageSuperResolutionOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Swin2SRConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Reconstruction loss.
reconstruction (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Reconstructed images, possibly upscaled.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states
(also called feature maps) of the model at the output of each stage.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Swin2SRForImageSuperResolution forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
import torch
import numpy as np
from PIL import Image
import requests
from transformers import AutoImageProcessor, Swin2SRForImageSuperResolution
processor = AutoImageProcessor.from_pretrained("caidas/swin2SR-classical-sr-x2-64")
model = Swin2SRForImageSuperResolution.from_pretrained("caidas/swin2SR-classical-sr-x2-64")
url = "https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/butterfly.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# prepare image for the model
inputs = processor(image, return_tensors="pt")
# forward pass
with torch.no_grad():
... outputs = model(**inputs)
output = outputs.reconstruction.data.squeeze().float().cpu().clamp_(0, 1).numpy()
output = np.moveaxis(output, source=0, destination=-1)
output = (output * 255.0).round().astype(np.uint8) # float32 to uint8
# you can visualize `output` with `Image.fromarray`
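For instance, continuing the snippet above, the array can be converted back to a PIL image and written to disk (the file name is just an example):
upscaled = Image.fromarray(output)
upscaled.save("butterfly_upscaled.png")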
Funnel Transformer
Overview
The Funnel Transformer model was proposed in the paper Funnel-Transformer: Filtering out Sequential Redundancy for
Efficient Language Processing. It is a bidirectional transformer model, like
BERT, but with a pooling operation after each block of layers, a bit like in traditional convolutional neural networks
(CNN) in computer vision.
The abstract from the paper is the following:
With the success of language pretraining, it is highly desirable to develop more efficient architectures of good
scalability that can exploit the abundant unlabeled data at a lower cost. To improve the efficiency, we examine the
much-overlooked redundancy in maintaining a full-length token-level presentation, especially for tasks that only
require a single-vector presentation of the sequence. With this intuition, we propose Funnel-Transformer which
gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost. More
importantly, by re-investing the saved FLOPs from length reduction in constructing a deeper or wider model, we further
improve the model capacity. In addition, to perform token-level predictions as required by common pretraining
objectives, Funnel-Transformer is able to recover a deep representation for each token from the reduced hidden sequence
via a decoder. Empirically, with comparable or fewer FLOPs, Funnel-Transformer outperforms the standard Transformer on
a wide variety of sequence-level prediction tasks, including text classification, language understanding, and reading
comprehension.
Tips:
Since Funnel Transformer uses pooling, the sequence length of the hidden states changes after each block of layers. This way, their length is divided by 2, which speeds up the computation of the next hidden states.
The base model therefore has a final sequence length that is a quarter of the original one. This model can be used
directly for tasks that just require a sentence summary (like sequence classification or multiple choice). For other
tasks, the full model is used; this full model has a decoder that upsamples the final hidden states to the same
sequence length as the input.
For tasks such as classification, this is not a problem, but for tasks like masked language modeling or token classification, we need a hidden state with the same sequence length as the original input. In those cases, the final hidden states are upsampled to the input sequence length and go through two additional layers. That’s why there are two versions of each checkpoint. The version suffixed with “-base” contains only the three blocks, while the version without that suffix contains the three blocks and the upsampling head with its additional layers.
The Funnel Transformer checkpoints are all available with a full version and a base version. The first ones should be
used for FunnelModel, FunnelForPreTraining,
FunnelForMaskedLM, FunnelForTokenClassification and
FunnelForQuestionAnswering. The second ones should be used for
FunnelBaseModel, FunnelForSequenceClassification and
FunnelForMultipleChoice.
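As an illustration of this pairing, the small checkpoints on the Hub follow the suffix convention described above:
from transformers import FunnelBaseModel, FunnelModel

# full checkpoint: three blocks plus the upsampling decoder (token-level tasks)
model = FunnelModel.from_pretrained("funnel-transformer/small")

# base checkpoint: the three blocks only (sequence-level tasks)
base_model = FunnelBaseModel.from_pretrained("funnel-transformer/small-base")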
This model was contributed by sgugger. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
FunnelConfig
class transformers.FunnelConfig
(
vocab_size = 30522
block_sizes = [4, 4, 4]
block_repeats = None
num_decoder_layers = 2
d_model = 768
n_head = 12
d_head = 64
d_inner = 3072
hidden_act = 'gelu_new'
hidden_dropout = 0.1
attention_dropout = 0.1
activation_dropout = 0.0
initializer_range = 0.1
initializer_std = None
layer_norm_eps = 1e-09
pooling_type = 'mean'
attention_type = 'relative_shift'
separate_cls = True
truncate_seq = True
pool_q_only = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the Funnel transformer. Defines the number of different tokens that can be represented
by the input_ids passed when calling FunnelModel or TFFunnelModel.
block_sizes (List[int], optional, defaults to [4, 4, 4]) —
The sizes of the blocks used in the model.
block_repeats (List[int], optional) —
If passed along, each layer of each block is repeated the number of times indicated.
num_decoder_layers (int, optional, defaults to 2) —
The number of layers in the decoder (when not using the base model).
d_model (int, optional, defaults to 768) —
Dimensionality of the model’s hidden states.
n_head (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
d_head (int, optional, defaults to 64) —
Dimensionality of the model’s heads.
d_inner (int, optional, defaults to 3072) —
Inner dimension in the feed-forward blocks.
hidden_act (str or callable, optional, defaults to "gelu_new") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout probability for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout probability used between the two layers of the feed-forward blocks.
initializer_range (float, optional, defaults to 0.1) —
The upper bound of the uniform initializer for initializing all weight matrices in attention layers.
initializer_std (float, optional) —
The standard deviation of the normal initializer for initializing the embedding matrix and the weight of
linear layers. Will default to 1 for the embedding matrix and the value given by Xavier initialization for
linear layers.
layer_norm_eps (float, optional, defaults to 1e-9) —
The epsilon used by the layer normalization layers.
pooling_type (str, optional, defaults to "mean") —
Possible values are "mean" or "max". The way pooling is performed at the beginning of each block.
attention_type (str, optional, defaults to "relative_shift") —
Possible values are "relative_shift" or "factorized". The former is faster on CPU/GPU while the latter
is faster on TPU.
separate_cls (bool, optional, defaults to True) —
Whether or not to separate the cls token when applying pooling.
truncate_seq (bool, optional, defaults to True) —
When using separate_cls, whether or not to truncate the last token when pooling, to avoid getting a
sequence length that is not a multiple of 2.
pool_q_only (bool, optional, defaults to True) —
Whether or not to apply the pooling only to the query, or to the query, key and values for the attention layers.
This is the configuration class to store the configuration of a FunnelModel or a TFFunnelModel. It is used to
instantiate a Funnel Transformer model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the Funnel
Transformer funnel-transformer/small architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
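Example (following the same pattern as the other configuration classes above; the defaults correspond to the funnel-transformer/small architecture):
from transformers import FunnelConfig, FunnelModel

# Initializing a Funnel Transformer funnel-transformer/small style configuration
configuration = FunnelConfig()

# Initializing a model (with random weights) from that configuration
model = FunnelModel(configuration)

# Accessing the model configuration
configuration = model.config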
FunnelTokenizer
class transformers.FunnelTokenizer
(
vocab_file
do_lower_case = True
do_basic_tokenize = True
never_split = None
unk_token = '<unk>'
sep_token = '<sep>'
pad_token = '<pad>'
cls_token = '<cls>'
mask_token = '<mask>'
bos_token = '<s>'
eos_token = '</s>'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (bool, optional, defaults to True) —
Whether or not to do basic tokenization before WordPiece.
never_split (Iterable, optional) —
Collection of tokens which will never be split during tokenization. Only has an effect when
do_basic_tokenize=True
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "<sep>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "<cls>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
bos_token (str, optional, defaults to "<s>") —
The beginning of sentence token.
eos_token (str, optional, defaults to "</s>") —
The end of sentence token.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original BERT).
Construct a Funnel Transformer tokenizer. Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A Funnel sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
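A short usage sketch (the checkpoint name is for illustration only): the method wraps raw token IDs with the special tokens in the layout shown above.
from transformers import FunnelTokenizer

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
ids_b = tokenizer.encode("How are you?", add_special_tokens=False)

input_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
print(tokenizer.decode(input_ids))  # <cls> ... <sep> ... <sep>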
get_special_tokens_mask
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A Funnel
Transformer sequence pair mask has the following format:
2 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
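Continuing with the tokenizer from the earlier sketch, the token type IDs follow the 2/0/1 layout shown above (2 for the classifier token, 0 for the first sequence and its separator, 1 for the second):
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
print(token_type_ids)  # e.g. [2, 0, 0, ..., 0, 1, 1, ..., 1]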
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
FunnelTokenizerFast
class transformers.FunnelTokenizerFast
(
vocab_file = None
tokenizer_file = None
do_lower_case = True
unk_token = '<unk>'
sep_token = '<sep>'
pad_token = '<pad>'
cls_token = '<cls>'
mask_token = '<mask>'
bos_token = '<s>'
eos_token = '</s>'
clean_text = True
tokenize_chinese_chars = True
strip_accents = None
wordpieces_prefix = '##'
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "<sep>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "<cls>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
clean_text (bool, optional, defaults to True) —
Whether or not to clean the text before tokenization by removing any control characters and replacing all
whitespaces by the classic one.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this
issue).
bos_token (str, optional, defaults to "<s>") —
The beginning of sentence token.
eos_token (str, optional, defaults to "</s>") —
The end of sentence token.
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original BERT).
wordpieces_prefix (str, optional, defaults to "##") —
The prefix for subwords.
Construct a “fast” Funnel Transformer tokenizer (backed by HuggingFace’s tokenizers library). Based on WordPiece.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0
token_ids_1 = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A Funnel sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
create_token_type_ids_from_sequences
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A Funnel
Transformer sequence pair mask has the following format:
2 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
Funnel specific outputs
class transformers.models.funnel.modeling_funnel.FunnelForPreTrainingOutput
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) —
Total loss of the ELECTRA-style objective.
logits (torch.FloatTensor of shape (batch_size, sequence_length)) —
Prediction scores of the head (scores for each token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Output type of FunnelForPreTraining.
class transformers.models.funnel.modeling_tf_funnel.TFFunnelForPreTrainingOutput
(
logits: tf.Tensor = None
hidden_states: Tuple[tf.Tensor] | None = None
attentions: Tuple[tf.Tensor] | None = None
)
Parameters
logits (tf.Tensor of shape (batch_size, sequence_length)) —
Prediction scores of the head (scores for each token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Output type of FunnelForPreTraining.
FunnelBaseModel
class transformers.FunnelBaseModel
( config: FunnelConfig )
Parameters
config (FunnelConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The base Funnel Transformer model outputting raw hidden-states, without the upsampling head (also called
decoder) or any task-specific head on top.
The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
    input_ids: typing.Optional[torch.Tensor] = None,
    attention_mask: typing.Optional[torch.Tensor] = None,
    token_type_ids: typing.Optional[torch.Tensor] = None,
    position_ids: typing.Optional[torch.Tensor] = None,
    head_mask: typing.Optional[torch.Tensor] = None,
    inputs_embeds: typing.Optional[torch.Tensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FunnelConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FunnelBaseModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FunnelBaseModel
import torch
tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/small-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
FunnelModel
class transformers.FunnelModel
( config: FunnelConfig )
Parameters
config (FunnelConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Funnel Transformer model outputting raw hidden-states without any specific head on top.
The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
    input_ids: typing.Optional[torch.Tensor] = None,
    attention_mask: typing.Optional[torch.Tensor] = None,
    token_type_ids: typing.Optional[torch.Tensor] = None,
    inputs_embeds: typing.Optional[torch.Tensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FunnelConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FunnelModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FunnelModel
import torch
tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small")
model = FunnelModel.from_pretrained("funnel-transformer/small")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
FunnelForPreTraining
class transformers.FunnelForPreTraining
( config: FunnelConfig )
forward
(
    input_ids: typing.Optional[torch.Tensor] = None,
    attention_mask: typing.Optional[torch.Tensor] = None,
    token_type_ids: typing.Optional[torch.Tensor] = None,
    inputs_embeds: typing.Optional[torch.Tensor] = None,
    labels: typing.Optional[torch.Tensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.models.funnel.modeling_funnel.FunnelForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the ELECTRA-style loss. Input should be a sequence of tokens (see the input_ids
docstring). Indices should be in [0, 1]:
0 indicates the token is an original token,
1 indicates the token was replaced.
Returns
transformers.models.funnel.modeling_funnel.FunnelForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.funnel.modeling_funnel.FunnelForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FunnelConfig) and inputs.
loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss of the ELECTRA-style objective.
logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Prediction scores of the head (scores for each token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FunnelForPreTraining forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, FunnelForPreTraining
import torch
tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small")
model = FunnelForPreTraining.from_pretrained("funnel-transformer/small")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
logits = model(**inputs).logits
FunnelForMaskedLM
class transformers.FunnelForMaskedLM
( config: FunnelConfig )
Parameters
config (FunnelConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Funnel Transformer Model with a language modeling head on top.
The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
    input_ids: typing.Optional[torch.Tensor] = None,
    attention_mask: typing.Optional[torch.Tensor] = None,
    token_type_ids: typing.Optional[torch.Tensor] = None,
    inputs_embeds: typing.Optional[torch.Tensor] = None,
    labels: typing.Optional[torch.Tensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FunnelConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FunnelForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FunnelForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small")
model = FunnelForMaskedLM.from_pretrained("funnel-transformer/small")
inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of <mask>
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-<mask> tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
FunnelForSequenceClassification
class transformers.FunnelForSequenceClassification
( config: FunnelConfig )
Parameters
config (FunnelConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Funnel Transformer Model with a sequence classification/regression head on top (two linear layers on top of the
first timestep of the last hidden state), e.g. for GLUE tasks.
The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
    input_ids: typing.Optional[torch.Tensor] = None,
    attention_mask: typing.Optional[torch.Tensor] = None,
    token_type_ids: typing.Optional[torch.Tensor] = None,
    inputs_embeds: typing.Optional[torch.Tensor] = None,
    labels: typing.Optional[torch.Tensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (mean-square loss); if
config.num_labels > 1, a classification loss is computed (cross-entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FunnelConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FunnelForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, FunnelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small-base")
model = FunnelForSequenceClassification.from_pretrained("funnel-transformer/small-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = FunnelForSequenceClassification.from_pretrained("funnel-transformer/small-base", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, FunnelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small-base")
model = FunnelForSequenceClassification.from_pretrained("funnel-transformer/small-base", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = FunnelForSequenceClassification.from_pretrained(
    "funnel-transformer/small-base", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
FunnelForMultipleChoice
class transformers.FunnelForMultipleChoice
( config: FunnelConfig )
Parameters
config (FunnelConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Funnel Transformer Model with a multiple choice classification head on top (two linear layers on top of the first
timestep of the last hidden state, and a softmax), e.g. for RocStories/SWAG tasks.
The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
    input_ids: typing.Optional[torch.Tensor] = None,
    attention_mask: typing.Optional[torch.Tensor] = None,
    token_type_ids: typing.Optional[torch.Tensor] = None,
    inputs_embeds: typing.Optional[torch.Tensor] = None,
    labels: typing.Optional[torch.Tensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FunnelConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FunnelForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FunnelForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small-base")
model = FunnelForMultipleChoice.from_pretrained("funnel-transformer/small-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
FunnelForTokenClassification
class transformers.FunnelForTokenClassification
( config: FunnelConfig )
Parameters
config (FunnelConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Funnel Transformer Model with a token classification head on top (a linear layer on top of the hidden-states
output) e.g. for Named-Entity-Recognition (NER) tasks.
The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
    input_ids: typing.Optional[torch.Tensor] = None,
    attention_mask: typing.Optional[torch.Tensor] = None,
    token_type_ids: typing.Optional[torch.Tensor] = None,
    inputs_embeds: typing.Optional[torch.Tensor] = None,
    labels: typing.Optional[torch.Tensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FunnelConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FunnelForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FunnelForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small")
model = FunnelForTokenClassification.from_pretrained("funnel-transformer/small")
inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
FunnelForQuestionAnswering
class transformers.FunnelForQuestionAnswering
( config: FunnelConfig )
Parameters
config (FunnelConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Funnel Transformer Model with a span classification head on top for extractive question-answering tasks like SQuAD
(a linear layer on top of the hidden-states output to compute span start logits and span end logits).
The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
    input_ids: typing.Optional[torch.Tensor] = None,
    attention_mask: typing.Optional[torch.Tensor] = None,
    token_type_ids: typing.Optional[torch.Tensor] = None,
    inputs_embeds: typing.Optional[torch.Tensor] = None,
    start_positions: typing.Optional[torch.Tensor] = None,
    end_positions: typing.Optional[torch.Tensor] = None,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None,
) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FunnelConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FunnelForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FunnelForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small")
model = FunnelForQuestionAnswering.from_pretrained("funnel-transformer/small")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
TFFunnelBaseModel
class transformers.TFFunnelBaseModel
( *args, **kwargs )
Parameters
config (XxxConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The base Funnel Transformer model outputting raw hidden-states, without the upsampling head (also called
decoder) or any task-specific head on top.
The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don't need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
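To make the options above concrete, here is a minimal sketch (assuming the funnel-transformer/small-base checkpoint) showing the same forward pass expressed in each of the supported input formats:
import tensorflow as tf
from transformers import AutoTokenizer, TFFunnelBaseModel

tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/small-base")
enc = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1. Keyword arguments (like PyTorch models)
out_kwargs = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])
# 2. A list with the tensors in the order given in the docstring
out_list = model([enc["input_ids"], enc["attention_mask"]])
# 3. A dictionary mapping input names to tensors
out_dict = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})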
call
(
    input_ids: TFModelInputType | None = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    training: bool = False,
) → transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (FunnelConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFFunnelBaseModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFFunnelBaseModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/small-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
TFFunnelModel
class transformers.TFFunnelModel
( *args, **kwargs )
Parameters
config (XxxConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Funnel Transformer model outputting raw hidden-states without any specific head on top.
The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
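As a concrete illustration of the three formats above, here is a minimal sketch (reusing the checkpoint from the example further below; all three calls are assumed to be equivalent):
from transformers import AutoTokenizer, TFFunnelModel
tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small")
model = TFFunnelModel.from_pretrained("funnel-transformer/small")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
# 1. all inputs as keyword arguments
outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
# 2. a list of tensors in the order given in the docstring
outputs = model([inputs["input_ids"], inputs["attention_mask"]])
# 3. a dictionary keyed by the input names
outputs = model({"input_ids": inputs["input_ids"], "attention_mask": inputs["attention_mask"]})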
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (FunnelConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFFunnelModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFFunnelModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small")
model = TFFunnelModel.from_pretrained("funnel-transformer/small")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
TFFunnelForPreTraining
class transformers.TFFunnelForPreTraining( *args, **kwargs )
Parameters
config (FunnelConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Funnel model with a binary classification head on top as used during pretraining for identifying generated tokens.
The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: bool = False
**kwargs
)
→
transformers.models.funnel.modeling_tf_funnel.TFFunnelForPreTrainingOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.models.funnel.modeling_tf_funnel.TFFunnelForPreTrainingOutput or tuple(tf.Tensor)
A transformers.models.funnel.modeling_tf_funnel.TFFunnelForPreTrainingOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (FunnelConfig) and inputs.
logits (tf.Tensor of shape (batch_size, sequence_length)) — Prediction scores of the head (scores for each token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFFunnelForPreTraining forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoTokenizer, TFFunnelForPreTraining
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small")
model = TFFunnelForPreTraining.from_pretrained("funnel-transformer/small")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(inputs).logits
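The logits contain one score per token (before a sigmoid). A hypothetical post-processing step, not part of the documented example, could threshold them to flag the tokens the model considers generated/replaced:
# hypothetical thresholding: sigmoid(logit) > 0.5 marks a token as likely replaced
predictions = tf.cast(tf.math.sigmoid(logits) > 0.5, tf.int32)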
TFFunnelForMaskedLM
class transformers.TFFunnelForMaskedLM( *args, **kwargs )
Parameters
config (FunnelConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Funnel Model with a language modeling head on top.
The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (FunnelConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFFunnelForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFFunnelForMaskedLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small")
model = TFFunnelForMaskedLM.from_pretrained("funnel-transformer/small")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf")
logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
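The predicted id can be decoded back to text with the tokenizer (a small addition to the example above):
predicted_token = tokenizer.decode(predicted_token_id)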
labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
# mask labels of non-[MASK] tokens
labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
TFFunnelForSequenceClassification
class transformers.TFFunnelForSequenceClassification( *args, **kwargs )
Parameters
config (FunnelConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Funnel Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (FunnelConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFFunnelForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFFunnelForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small-base")
model = TFFunnelForSequenceClassification.from_pretrained("funnel-transformer/small-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
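To map the predicted class id back to a label string, the config's id2label mapping can be used (a small addition to the example above):
predicted_label = model.config.id2label[predicted_class_id]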
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFFunnelForSequenceClassification.from_pretrained("funnel-transformer/small-base", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
TFFunnelForMultipleChoice
class transformers.TFFunnelForMultipleChoice( *args, **kwargs )
Parameters
config (FunnelConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Funnel Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices]
where num_choices is the size of the second dimension of the input tensors. (See input_ids above)
Returns
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (FunnelConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFFunnelForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFFunnelForMultipleChoice
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small-base")
model = TFFunnelForMultipleChoice.from_pretrained("funnel-transformer/small-base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
outputs = model(inputs) # batch size is 1
# the linear classifier still needs to be trained
logits = outputs.logits
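The highest-scoring choice can then be read off the logits (a small, illustrative follow-up to the example above):
predicted_choice = int(tf.math.argmax(logits, axis=-1)[0])  # 0 -> choice0, 1 -> choice1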
TFFunnelForTokenClassification
class transformers.TFFunnelForTokenClassification( *args, **kwargs )
Parameters
config (FunnelConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Funnel Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (FunnelConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFFunnelForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFFunnelForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small")
model = TFFunnelForTokenClassification.from_pretrained("funnel-transformer/small")
inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
)
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
TFFunnelForQuestionAnswering
class transformers.TFFunnelForQuestionAnswering( *args, **kwargs )
Parameters
config (FunnelConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Funnel Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layers on top of the hidden-states output to compute span start logits and span end logits).
The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
start_positions: np.ndarray | tf.Tensor | None = None
end_positions: np.ndarray | tf.Tensor | None = None
training: bool = False
)
→
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (FunnelConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFFunnelForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFFunnelForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small")
model = TFFunnelForQuestionAnswering.from_pretrained("funnel-transformer/small")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
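Decoding the selected span gives the answer text (a small addition to the example above):
answer = tokenizer.decode(predict_answer_tokens)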
# target is "nice puppet"
target_start_index = tf.constant([14])
target_end_index = tf.constant([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
Blenderbot
DISCLAIMER: If you see something strange, file a GitHub Issue.
Overview
The Blender chatbot model was proposed in Recipes for building an open-domain chatbot by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu,
Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, and Jason Weston on 30 Apr 2020.
The abstract of the paper is the following:
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that
scaling neural models in the number of parameters and the size of the data they are trained on gives improved results,
we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of
skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to
their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent
persona. We show that large scale models can learn these skills when given appropriate training data and choice of
generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models
and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn
dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing
failure cases of our models.
Tips:
Blenderbot is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than
the left.
This model was contributed by sshleifer. The authors’ code can be found here.
Implementation Notes
Blenderbot uses a standard seq2seq model transformer based architecture.
Available checkpoints can be found in the model hub.
This is the default Blenderbot model class. However, some smaller checkpoints, such as
facebook/blenderbot_small_90M, have a different architecture and consequently should be used with
BlenderbotSmall.
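As a minimal sketch of this note (assuming the BlenderbotSmall classes; the exact checkpoint identifier on the Hub may differ slightly from the name written above):
from transformers import BlenderbotSmallTokenizer, BlenderbotSmallForConditionalGeneration
small_name = "facebook/blenderbot_small-90M"  # assumed Hub identifier for the 90M checkpoint
model = BlenderbotSmallForConditionalGeneration.from_pretrained(small_name)
tokenizer = BlenderbotSmallTokenizer.from_pretrained(small_name)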
Usage
Here is an example of model usage:
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration
mname = "facebook/blenderbot-400M-distill"
model = BlenderbotForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotTokenizer.from_pretrained(mname)
UTTERANCE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([UTTERANCE], return_tensors="pt")
reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids))
["<s> That's unfortunate. Are they trying to lose weight or are they just trying to be healthier?</s>"]
Documentation resources
Causal language modeling task guide
Translation task guide
Summarization task guide
BlenderbotConfig
class transformers.BlenderbotConfig(
vocab_size = 8008
max_position_embeddings = 128
encoder_layers = 2
encoder_ffn_dim = 10240
encoder_attention_heads = 32
decoder_layers = 24
decoder_ffn_dim = 10240
decoder_attention_heads = 32
encoder_layerdrop = 0.0
decoder_layerdrop = 0.0
use_cache = True
is_encoder_decoder = True
activation_function = 'gelu'
d_model = 2560
dropout = 0.1
attention_dropout = 0.0
activation_dropout = 0.0
init_std = 0.02
decoder_start_token_id = 1
scale_embedding = False
pad_token_id = 0
bos_token_id = 1
eos_token_id = 2
encoder_no_repeat_ngram_size = 3
forced_eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 8008) —
Vocabulary size of the Blenderbot model. Defines the number of different tokens that can be represented by
the inputs_ids passed when calling BlenderbotModel or TFBlenderbotModel.
d_model (int, optional, defaults to 2560) —
Dimensionality of the layers and the pooler layer.
encoder_layers (int, optional, defaults to 2) —
Number of encoder layers.
decoder_layers (int, optional, defaults to 24) —
Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 32) —
Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 32) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 10240) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the decoder.
encoder_ffn_dim (int, optional, defaults to 10240) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the encoder.
activation_function (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) —
The dropout ratio for activations inside the fully connected layer.
max_position_embeddings (int, optional, defaults to 128) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
init_std (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
decoder_layerdrop (float, optional, defaults to 0.0) —
The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556)
for more details.
scale_embedding (bool, optional, defaults to False) —
Scale embeddings by dividing by sqrt(d_model).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models)
forced_eos_token_id (int, optional, defaults to 2) —
The id of the token to force as the last generated token when max_length is reached. Usually set to
eos_token_id.
This is the configuration class to store the configuration of a BlenderbotModel. It is used to instantiate a
Blenderbot model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Blenderbot
facebook/blenderbot-3B architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import BlenderbotConfig, BlenderbotModel
# Initializing a Blenderbot facebook/blenderbot-3B style configuration
configuration = BlenderbotConfig()
# Initializing a model (with random weights) from the facebook/blenderbot-3B style configuration
model = BlenderbotModel(configuration)
# Accessing the model configuration
configuration = model.config
BlenderbotTokenizer
class transformers.BlenderbotTokenizer
(
vocab_file
merges_file
errors = 'replace'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
add_prefix_space = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The Blenderbot tokenizer detects the beginning of words by the preceding space.)
Constructs a Blenderbot tokenizer, derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without a space) or not:
from transformers import BlenderbotTokenizer
tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-3B")
tokenizer.add_prefix_space = False
tokenizer("Hello world")["input_ids"]
[47, 921, 86, 1085, 2]
tokenizer(" Hello world")["input_ids"]
[6950, 1085, 2]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one).
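A minimal sketch of opting into add_prefix_space at instantiation, as described above (the resulting token ids are not shown since they depend on the vocabulary):
from transformers import BlenderbotTokenizer

tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-3B", add_prefix_space=True)
ids = tokenizer("Hello world")["input_ids"]  # "Hello" is now encoded as if preceded by a space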
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added
token_ids_1 (List[int], optional) —
Will be ignored
Returns
List[int]
list of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A Blenderbot sequence has the following format:
single sequence: X </s>
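A minimal sketch of that format, assuming the single-sequence behavior described above (token_ids_1 is ignored):
from transformers import BlenderbotTokenizer

tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
ids = tokenizer("Hello world", add_special_tokens=False)["input_ids"]
with_special = tokenizer.build_inputs_with_special_tokens(ids)
# with_special == ids + [tokenizer.eos_token_id], i.e. the "X </s>" format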
BlenderbotTokenizerFast
class transformers.BlenderbotTokenizerFast
(
vocab_file = None
merges_file = None
tokenizer_file = None
errors = 'replace'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
add_prefix_space = False
trim_offsets = True
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The Blenderbot tokenizer detects the beginning of words by the preceding space.)
trim_offsets (bool, optional, defaults to True) —
Whether the post processing step should trim offsets to avoid including whitespaces.
Construct a “fast” Blenderbot tokenizer (backed by HuggingFace’s tokenizers library), derived from the GPT-2
tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will
be encoded differently depending on whether it is at the beginning of the sentence (without a space) or not:
from transformers import BlenderbotTokenizerFast
tokenizer = BlenderbotTokenizerFast.from_pretrained("facebook/blenderbot-3B")
tokenizer("Hello world")["input_ids"]
[6950, 1085, 2]
tokenizer(" Hello world")["input_ids"]
[6950, 1085, 2]
You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True.
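A minimal sketch of the pre-tokenized case, assuming the add_prefix_space=True requirement just described:
from transformers import BlenderbotTokenizerFast

tokenizer = BlenderbotTokenizerFast.from_pretrained("facebook/blenderbot-3B", add_prefix_space=True)
encoded = tokenizer(["Hello", "world"], is_split_into_words=True)  # input is treated as already split into words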
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
build_inputs_with_special_tokens
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added
token_ids_1 (List[int], optional) —
Will be ignored
Returns
List[int]
list of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
adding special tokens. A Blenderbot sequence has the following format:
single sequence: X </s>
BlenderbotModel
See transformers.BartModel for arguments to forward and generate
class transformers.BlenderbotModel
(
config: BlenderbotConfig
)
Parameters
config (BlenderbotConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The bare Blenderbot Model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Union[typing.Tuple, transformers.modeling_outputs.BaseModelOutput, NoneType] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Blenderbot uses the bos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BlenderbotConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The BlenderbotModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, BlenderbotModel
model = BlenderbotModel.from_pretrained("facebook/blenderbot-400M-distill")
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
inputs = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="pt")
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
outputs = model(input_ids=inputs.input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 6, 1280]
BlenderbotForConditionalGeneration
See BartForConditionalGeneration for arguments to forward and generate
class transformers.BlenderbotForConditionalGeneration
(
config: BlenderbotConfig
)
Parameters
config (BlenderbotConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The Blenderbot Model with a language modeling head. Can be used for summarization.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
decoder_head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Union[typing.Tuple, transformers.modeling_outputs.BaseModelOutput, NoneType] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Blenderbot uses the bos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is
useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BlenderbotConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The BlenderbotForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Conversation example:
from transformers import AutoTokenizer, BlenderbotForConditionalGeneration
mname = "facebook/blenderbot-400M-distill"
model = BlenderbotForConditionalGeneration.from_pretrained(mname)
tokenizer = AutoTokenizer.from_pretrained(mname)
UTTERANCE = "My friends are cool but they eat too many carbs."
print("Human: ", UTTERANCE)
Human: My friends are cool but they eat too many carbs.
inputs = tokenizer([UTTERANCE], return_tensors="pt")
reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
Bot: That's unfortunate. Are they trying to lose weight or are they just trying to be healthier?
REPLY = "I'm not sure"
print("Human: ", REPLY)
Human: I'm not sure
NEXT_UTTERANCE = (
... "My friends are cool but they eat too many carbs.</s> <s>That's unfortunate. "
... "Are they trying to lose weight or are they just trying to be healthier?</s> "
... "<s> I'm not sure."
... )
inputs = tokenizer([NEXT_UTTERANCE], return_tensors="pt")
next_reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0])
Bot: I see. Well, it's good that they're trying to change their eating habits.
BlenderbotForCausalLM
class transformers.BlenderbotForCausalLM
(
config
)
forward
(
input_ids: LongTensor = None
attention_mask: typing.Optional[torch.Tensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.Tensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
if the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used
in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of
shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of
shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional
tensors are only required when the model is used as a decoder in a Sequence to Sequence model.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the
cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those
that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of
all decoder_input_ids of shape (batch_size, sequence_length).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding
(see past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BlenderbotConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
Example:
from transformers import AutoTokenizer, BlenderbotForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
model = BlenderbotForCausalLM.from_pretrained(
... "facebook/blenderbot-400M-distill", add_cross_attention=False
... )
assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
expected_shape = [1, inputs.input_ids.shape[-1], model.config.vocab_size]
list(logits.shape) == expected_shape
True
TFBlenderbotModel
class transformers.TFBlenderbotModel
(
*args
**kwargs
)
Parameters
config (BlenderbotConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare BLENDERBOT Model outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
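A minimal sketch of the dictionary format described above, mirroring the usage example further down (checkpoint and sentence are illustrative; only encoder inputs are passed, as in that example):
from transformers import AutoTokenizer, TFBlenderbotModel

tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
model = TFBlenderbotModel.from_pretrained("facebook/blenderbot-400M-distill")
enc = tokenizer("Hello, my dog is cute", return_tensors="tf")
# all input tensors gathered in one dict, passed as the first positional argument
outputs = model({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})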
call
(
input_ids: tf.Tensor | None = None
attention_mask: tf.Tensor | None = None
decoder_input_ids: tf.Tensor | None = None
decoder_attention_mask: tf.Tensor | None = None
decoder_position_ids: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
decoder_head_mask: tf.Tensor | None = None
cross_attn_head_mask: tf.Tensor | None = None
encoder_outputs: Optional[Union[Tuple, TFBaseModelOutput]] = None
past_key_values: List[tf.Tensor] | None = None
inputs_embeds: tf.Tensor | None = None
decoder_inputs_embeds: tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
**kwargs
)
→
transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Blenderbot uses the bos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
If not provided, a mask that ignores pad tokens will be made by default. It is not recommended to set this for most use cases.
decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor, optional) —
A sequence of hidden states of shape (batch_size, sequence_length, hidden_size) at the output of the last
layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training, True during generation
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (BlenderbotConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFBlenderbotModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFBlenderbotModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
model = TFBlenderbotModel.from_pretrained("facebook/blenderbot-400M-distill")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
TFBlenderbotForConditionalGeneration
class transformers.TFBlenderbotForConditionalGeneration
(
*args
**kwargs
)
Parameters
config (BlenderbotConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The BLENDERBOT Model with a language modeling head. Can be used for summarization.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
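A minimal sketch of using this head for dialogue generation, mirroring the PyTorch conversation example above (checkpoint and utterance are illustrative):
from transformers import AutoTokenizer, TFBlenderbotForConditionalGeneration

mname = "facebook/blenderbot-400M-distill"
tokenizer = AutoTokenizer.from_pretrained(mname)
model = TFBlenderbotForConditionalGeneration.from_pretrained(mname)
inputs = tokenizer(["My friends are cool but they eat too many carbs."], return_tensors="tf")
reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])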
call
(
input_ids: tf.Tensor | None = None
attention_mask: tf.Tensor | None = None
decoder_input_ids: tf.Tensor | None = None
decoder_attention_mask: tf.Tensor | None = None
decoder_position_ids: tf.Tensor | None = None
head_mask: tf.Tensor | None = None
decoder_head_mask: tf.Tensor | None = None
cross_attn_head_mask: tf.Tensor | None = None
encoder_outputs: Optional[Union[Tuple, TFBaseModelOutput]] = None
past_key_values: List[tf.Tensor] | None = None
inputs_embeds: tf.Tensor | None = None
decoder_inputs_embeds: tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
Parameters
input_ids (tf.Tensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
Blenderbot uses the bos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) —
If not provided, a mask that ignores pad tokens will be made by default. It is not recommended to set this for most use cases.
decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tf.FloatTensor, optional) —
Sequence of hidden states at the output of the last layer of the encoder, of shape (batch_size, sequence_length, hidden_size). Used in the cross-attention of the decoder.
past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) —
contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional, defaults to True) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values). Set to False during training and to True during generation.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode; in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (BlenderbotConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be
used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The TFBlenderbotForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Conversation example:
from transformers import AutoTokenizer, TFBlenderbotForConditionalGeneration
mname = "facebook/blenderbot-400M-distill"
model = TFBlenderbotForConditionalGeneration.from_pretrained(mname)
tokenizer = AutoTokenizer.from_pretrained(mname)
UTTERANCE = "My friends are cool but they eat too many carbs."
print("Human: ", UTTERANCE)
inputs = tokenizer([UTTERANCE], return_tensors="tf")
reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
REPLY = "I'm not sure"
print("Human: ", REPLY)
NEXT_UTTERANCE = (
    "My friends are cool but they eat too many carbs.</s> <s>That's unfortunate. "
    "Are they trying to lose weight or are they just trying to be healthier?</s> "
    "<s> I'm not sure."
)
inputs = tokenizer([NEXT_UTTERANCE], return_tensors="tf")
next_reply_ids = model.generate(**inputs)
print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0])
FlaxBlenderbotModel
class transformers.FlaxBlenderbotModel
(
config: BlenderbotConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (BlenderbotConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Blenderbot Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a Flax Linen
flax.linen.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as the following (a short jax.jit sketch is given after the list):
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
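For example, the forward pass can be compiled with jax.jit. The snippet below is a minimal, illustrative sketch (not part of the official docstrings), reusing the checkpoint from the usage examples that follow:
import jax
from transformers import AutoTokenizer, FlaxBlenderbotModel

tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
model = FlaxBlenderbotModel.from_pretrained("facebook/blenderbot-400M-distill")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")

@jax.jit  # compiled on the first call; later calls with the same input shapes reuse the compiled function
def forward(input_ids, attention_mask):
    return model(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state

last_hidden_states = forward(inputs["input_ids"], inputs["attention_mask"])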
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BlenderbotConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxBlenderbotPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxBlenderbotModel
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
model = FlaxBlenderbotModel.from_pretrained("facebook/blenderbot-400M-distill")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
encode
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BlenderbotConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example:
from transformers import AutoTokenizer, FlaxBlenderbotForConditionalGeneration
model = FlaxBlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill")
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decode
(
decoder_input_ids
encoder_outputs
encoder_attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
past_key_values: dict = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
encoder_outputs (tuple(tuple(jnp.ndarray)) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BlenderbotConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
Example:
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxBlenderbotForConditionalGeneration
model = FlaxBlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill")
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decoder_start_token_id = model.config.decoder_start_token_id
decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
outputs = model.decode(decoder_input_ids, encoder_outputs)
last_decoder_hidden_states = outputs.last_hidden_state
FlaxBlenderbotForConditionalGeneration
class transformers.FlaxBlenderbotForConditionalGeneration
(
config: BlenderbotConfig
input_shape: typing.Tuple[int] = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (BlenderbotConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The Blenderbot Model with a language modeling head. Can be used for summarization.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a Flax Linen
flax.linen.Module subclass. Use it as a
regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
decoder_input_ids: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BlenderbotConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The FlaxBlenderbotPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Conversation example:
from transformers import AutoTokenizer, FlaxBlenderbotForConditionalGeneration
model = FlaxBlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill")
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
UTTERANCE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([UTTERANCE], max_length=1024, return_tensors="np")
# Generate Reply
reply_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=5, early_stopping=True).sequences
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in reply_ids])
encode
(
input_ids: Array
attention_mask: typing.Optional[jax.Array] = None
position_ids: typing.Optional[jax.Array] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (jnp.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BlenderbotConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Example:
from transformers import AutoTokenizer, FlaxBlenderbotForConditionalGeneration
model = FlaxBlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill")
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decode
(
decoder_input_ids
encoder_outputs
encoder_attention_mask: typing.Optional[jax.Array] = None
decoder_attention_mask: typing.Optional[jax.Array] = None
decoder_position_ids: typing.Optional[jax.Array] = None
past_key_values: dict = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
train: bool = False
params: dict = None
dropout_rng: PRNGKey = None
)
→
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
Parameters
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are decoder input IDs?
For translation and summarization training, decoder_input_ids should be provided. If no
decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right
for denoising pre-training following the paper.
encoder_outputs (tuple(tuple(jnp.ndarray)) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
If you want to change padding behavior, you should modify it to your needs. See diagram 1 in the
paper for more information on the default strategy.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the
range [0, config.max_position_embeddings - 1].
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) —
Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast
auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (BlenderbotConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value
states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting.
Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
Example:
import jax.numpy as jnp
from transformers import AutoTokenizer, FlaxBlenderbotForConditionalGeneration
model = FlaxBlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill")
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
text = "My friends are cool but they eat too many carbs."
inputs = tokenizer(text, max_length=1024, return_tensors="jax")
encoder_outputs = model.encode(**inputs)
decoder_start_token_id = model.config.decoder_start_token_id
decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id
outputs = model.decode(decoder_input_ids, encoder_outputs)
logits = outputs.logits
BLIP-2
Overview
The BLIP-2 model was proposed in BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models by
Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi. BLIP-2 leverages frozen pre-trained image encoders and large language models (LLMs) by training a lightweight, 12-layer Transformer
encoder in between them, achieving state-of-the-art performance on various vision-language tasks. Most notably, BLIP-2 improves upon Flamingo, an 80 billion parameter model, by 8.7%
on zero-shot VQAv2 with 54x fewer trainable parameters.
The abstract from the paper is the following:
The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models. BLIP-2 bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. BLIP-2 achieves state-of-the-art performance on various vision-language tasks, despite having significantly fewer trainable parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. We also demonstrate the model’s emerging capabilities of zero-shot image-to-text generation that can follow natural language instructions.
Tips:
BLIP-2 can be used for conditional text generation given an image and an optional text prompt. At inference time, it’s recommended to use the generate method.
One can use Blip2Processor to prepare images for the model and to decode the predicted token IDs back into text (see the sketch below).
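The following is a minimal, illustrative sketch of that workflow (image captioning with generate); the checkpoint name and image URL are examples, not requirements:
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

# illustrative image; any RGB PIL image works
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# plain captioning: pass the image only; add text="..." to condition generation on a prompt
inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(caption)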
BLIP-2 architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BLIP-2.
Demo notebooks for BLIP-2 for image captioning, visual question answering (VQA) and chat-like conversations can be found here.
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Blip2Config
class transformers.Blip2Config
(
vision_config = None
qformer_config = None
text_config = None
num_query_tokens = 32
**kwargs
)
Parameters
vision_config (dict, optional) —
Dictionary of configuration options used to initialize Blip2VisionConfig.
qformer_config (dict, optional) —
Dictionary of configuration options used to initialize Blip2QFormerConfig.
text_config (dict, optional) —
Dictionary of configuration options used to initialize any PretrainedConfig.
num_query_tokens (int, optional, defaults to 32) —
The number of query tokens passed through the Transformer.
kwargs (optional) —
Dictionary of keyword arguments.
Blip2Config is the configuration class to store the configuration of a Blip2ForConditionalGeneration. It is
used to instantiate a BLIP-2 model according to the specified arguments, defining the vision model, Q-Former model
and language model configs. Instantiating a configuration with the defaults will yield a similar configuration to
that of the BLIP-2 Salesforce/blip2-opt-2.7b architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import (
    Blip2VisionConfig,
    Blip2QFormerConfig,
    OPTConfig,
    Blip2Config,
    Blip2ForConditionalGeneration,
)
# Initializing a Blip2Config with Salesforce/blip2-opt-2.7b style configuration
configuration = Blip2Config()
# Initializing a Blip2ForConditionalGeneration (with random weights) from the Salesforce/blip2-opt-2.7b style configuration
model = Blip2ForConditionalGeneration(configuration)
# Accessing the model configuration
configuration = model.config
# We can also initialize a Blip2Config from a Blip2VisionConfig, Blip2QFormerConfig and any PretrainedConfig
# Initializing BLIP-2 vision, BLIP-2 Q-Former and language model configurations
vision_config = Blip2VisionConfig()
qformer_config = Blip2QFormerConfig()
text_config = OPTConfig()
config = Blip2Config.from_vision_qformer_text_configs(vision_config, qformer_config, text_config)
from_vision_qformer_text_configs
(
vision_config: Blip2VisionConfig
qformer_config: Blip2QFormerConfig
text_config: PretrainedConfig
**kwargs
)
→
Blip2Config
Returns
Blip2Config
An instance of a configuration object
Instantiate a Blip2Config (or a derived class) from a BLIP-2 vision model, Q-Former and language model
configurations.
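For instance, a minimal sketch reusing the default sub-configurations from the example above:
from transformers import Blip2Config, Blip2QFormerConfig, Blip2VisionConfig, OPTConfig

config = Blip2Config.from_vision_qformer_text_configs(
    vision_config=Blip2VisionConfig(),
    qformer_config=Blip2QFormerConfig(),
    text_config=OPTConfig(),
)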
Blip2VisionConfig
class transformers.Blip2VisionConfig
(
hidden_size = 1408
intermediate_size = 6144
num_hidden_layers = 39
num_attention_heads = 16
image_size = 224
patch_size = 14
hidden_act = 'gelu'
layer_norm_eps = 1e-05
attention_dropout = 0.0
initializer_range = 1e-10
qkv_bias = True
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 1408) —
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (int, optional, defaults to 6144) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_hidden_layers (int, optional, defaults to 39) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 14) —
The size (resolution) of each patch.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" `"gelu" are supported. layer_norm_eps (float, optional, defaults
to 1e-5): The epsilon used by the layer normalization layers.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 1e-10) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries and values in the self-attention layers.
This is the configuration class to store the configuration of a Blip2VisionModel. It is used to instantiate a
BLIP-2 vision encoder according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the BLIP-2
Salesforce/blip2-opt-2.7b architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import Blip2VisionConfig, Blip2VisionModel
# Initializing a Blip2VisionConfig with Salesforce/blip2-opt-2.7b style configuration
configuration = Blip2VisionConfig()
# Initializing a Blip2VisionModel (with random weights) from the Salesforce/blip2-opt-2.7b style configuration
model = Blip2VisionModel(configuration)
# Accessing the model configuration
configuration = model.config
Blip2QFormerConfig
class transformers.Blip2QFormerConfig
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 0
position_embedding_type = 'absolute'
cross_attention_frequency = 2
encoder_hidden_size = 1408
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the Q-Former model. Defines the number of different tokens that can be represented by
the inputs_ids passed when calling the model.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_act (str or Callable, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
position_embedding_type (str, optional, defaults to "absolute") —
Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For
positional embeddings use "absolute". For more information on "relative_key", please refer to
Self-Attention with Relative Position Representations (Shaw et al.).
For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models
with Better Relative Position Embeddings (Huang et al.).
cross_attention_frequency (int, optional, defaults to 2) —
The frequency of adding cross-attention to the Transformer layers.
encoder_hidden_size (int, optional, defaults to 1408) —
The hidden size of the hidden states for cross-attention.
This is the configuration class to store the configuration of a Blip2QFormerModel. It is used to instantiate a
BLIP-2 Querying Transformer (Q-Former) model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the BLIP-2
Salesforce/blip2-opt-2.7b architecture. Configuration objects
inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from
PretrainedConfig for more information.
Note that Blip2QFormerModel is very similar to BertLMHeadModel with interleaved cross-attention.
Examples:
from transformers import Blip2QFormerConfig, Blip2QFormerModel
# Initializing a BLIP-2 Salesforce/blip2-opt-2.7b style configuration
configuration = Blip2QFormerConfig()
# Initializing a model (with random weights) from the Salesforce/blip2-opt-2.7b style configuration
model = Blip2QFormerModel(configuration)
# Accessing the model configuration
configuration = model.config
Blip2Processor
class transformers.Blip2Processor
(
image_processor
tokenizer
)
Parameters
image_processor (BlipImageProcessor) —
An instance of BlipImageProcessor. The image processor is a required input.
tokenizer (AutoTokenizer) —
An instance of PreTrainedTokenizer. The tokenizer is a required input.
Constructs a BLIP-2 processor which wraps a BLIP image processor and an OPT/T5 tokenizer into a single processor.
Blip2Processor offers all the functionalities of BlipImageProcessor and AutoTokenizer. See the docstring
of __call__() and decode() for more information.
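A brief, illustrative sketch of the processor API (the checkpoint name and image URL are examples):
import requests
from PIL import Image
from transformers import Blip2Processor

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# the image processor contributes pixel_values; the tokenizer contributes input_ids and attention_mask
inputs = processor(images=image, text="a photo of", return_tensors="pt")
print(sorted(inputs.keys()))

# decode() and batch_decode() forward to the underlying tokenizer
print(processor.batch_decode(inputs["input_ids"], skip_special_tokens=True))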
batch_decode
(
*args
**kwargs
)
This method forwards all its arguments to PreTrainedTokenizer’s batch_decode(). Please
refer to the docstring of this method for more information.
decode
(
*args
**kwargs
)
This method forwards all its arguments to PreTrainedTokenizer’s decode(). Please refer
to the docstring of this method for more information.
Blip2VisionModel
class transformers.Blip2VisionModel
(
config: Blip2VisionConfig
)
forward
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using Blip2Processor. See Blip2Processor.__call__() for
details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Blip2VisionConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Blip2VisionModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
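As a quick illustration of this signature, the following sketch instantiates a small, randomly initialized vision encoder from a Blip2VisionConfig (the reduced sizes are illustrative only, not the checkpoint values) and runs dummy pixel values through it.
import torch
from transformers import Blip2VisionConfig, Blip2VisionModel
# Tiny, randomly initialized vision tower; sizes are chosen only to keep the example light
config = Blip2VisionConfig(hidden_size=64, intermediate_size=256, num_hidden_layers=2, num_attention_heads=4, image_size=32, patch_size=8)
model = Blip2VisionModel(config)
pixel_values = torch.randn(1, 3, config.image_size, config.image_size)
outputs = model(pixel_values=pixel_values, output_hidden_states=True)
print(outputs.last_hidden_state.shape)  # (batch_size, num_patches + 1, hidden_size)
print(outputs.pooler_output.shape)  # (batch_size, hidden_size)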
Blip2QFormerModel
class transformers.Blip2QFormerModel
<
source
>
(
config: Blip2QFormerConfig
)
Querying Transformer (Q-Former), used in BLIP-2.
forward
<
source
>
(
query_embeds
attention_mask = None
head_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
past_key_values = None
use_cache = None
output_attentions = None
output_hidden_states = None
return_dict = None
)
Parameters
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
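To make the roles of query_embeds and encoder_hidden_states concrete, here is a hedged sketch of how a loaded Blip2Model wires its Q-Former to the vision encoder; the attribute names vision_model, query_tokens and qformer are assumptions based on the current implementation.
import torch
from PIL import Image
import requests
from transformers import Blip2Processor, Blip2Model
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2Model.from_pretrained("Salesforce/blip2-opt-2.7b")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = processor(images=image, return_tensors="pt")["pixel_values"]
# Image features from the vision encoder serve as cross-attention keys/values
image_embeds = model.vision_model(pixel_values=pixel_values).last_hidden_state
image_attention_mask = torch.ones(image_embeds.size()[:-1], dtype=torch.long)
# The learned query tokens are expanded to the batch size and attend to the image features
query_tokens = model.query_tokens.expand(image_embeds.shape[0], -1, -1)
query_outputs = model.qformer(query_embeds=query_tokens, encoder_hidden_states=image_embeds, encoder_attention_mask=image_attention_mask)
query_output = query_outputs.last_hidden_state  # (batch_size, num_query_tokens, qformer_hidden_size)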
Blip2Model
class transformers.Blip2Model
<
source
>
(
config: Blip2Config
)
Parameters
config (Blip2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BLIP-2 Model for generating text and image features. The model consists of a vision encoder, Querying Transformer
(Q-Former) and a language model.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
pixel_values: FloatTensor
input_ids: FloatTensor
attention_mask: typing.Optional[torch.LongTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.LongTensor] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.blip_2.modeling_blip_2.Blip2ForConditionalGenerationModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using Blip2Processor. See Blip2Processor.__call__() for
details.
input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of input sequence tokens in the vocabulary of the language model. Input tokens can optionally be
provided to serve as text prompt, which the language model can continue.
Indices can be obtained using Blip2Processor. See Blip2Processor.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary of the language model. Only relevant in case an
encoder-decoder language model (like T5) is used.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details. What are decoder input IDs?
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
Only relevant in case an encoder-decoder language model (like T5) is used.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.blip_2.modeling_blip_2.Blip2ForConditionalGenerationModelOutput or tuple(torch.FloatTensor)
A transformers.models.blip_2.modeling_blip_2.Blip2ForConditionalGenerationModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Blip2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss from the language model.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head of the language model.
vision_outputs (BaseModelOutputWithPooling) — Outputs of the vision encoder.
qformer_outputs (BaseModelOutputWithPoolingAndCrossAttentions) — Outputs of the Q-Former (Querying Transformer).
language_model_outputs (CausalLMOutputWithPast or Seq2SeqLMOutput) — Outputs of the language model.
The Blip2Model forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from PIL import Image
import requests
from transformers import Blip2Processor, Blip2Model
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2Model.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16)
model.to(device)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "Question: how many cats are there? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, torch.float16)
outputs = model(**inputs)
get_text_features
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
decoder_input_ids: typing.Optional[torch.Tensor] = None
decoder_attention_mask: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
text_outputs (CausalLMOutputWithPast, or tuple(torch.FloatTensor) if return_dict=False)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details. What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
T5 uses the pad_token_id as the starting token for decoder_input_ids generation. If past_key_values
is used, optionally only the last decoder_input_ids have to be input (see past_key_values).
To know more about how to prepare decoder_input_ids for pretraining, take a look at T5
Training.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
text_outputs (CausalLMOutputWithPast, or tuple(torch.FloatTensor) if return_dict=False)
The language model outputs. If return_dict=True, the output is a CausalLMOutputWithPast that
contains the language model logits, the past key values and the hidden states if
output_hidden_states=True.
The Blip2Model forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
import torch
from transformers import AutoTokenizer, Blip2Model
device = "cuda" if torch.cuda.is_available() else "cpu"
model = Blip2Model.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16)
model.to(device)
tokenizer = AutoTokenizer.from_pretrained("Salesforce/blip2-opt-2.7b")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt").to(device)
text_features = model.get_text_features(**inputs)
get_image_features
<
source
>
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
vision_outputs (BaseModelOutputWithPooling or tuple of torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using Blip2Processor. See Blip2Processor.__call__() for
details.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
vision_outputs (BaseModelOutputWithPooling or tuple of torch.FloatTensor)
The vision model outputs. If return_dict=True, the output is a BaseModelOutputWithPooling that
contains the image features, the pooled image features and the hidden states if
output_hidden_states=True.
The Blip2Model forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
import torch
from PIL import Image
import requests
from transformers import AutoProcessor, Blip2Model
device = "cuda" if torch.cuda.is_available() else "cpu"
model = Blip2Model.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16)
model.to(device)
processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt").to(device, torch.float16)
image_outputs = model.get_image_features(**inputs)
get_qformer_features
<
source
>
(
pixel_values: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
vision_outputs (BaseModelOutputWithPooling or tuple of torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using Blip2Processor. See Blip2Processor.__call__() for
details.
input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of input sequence tokens in the vocabulary of the language model. Input tokens can optionally be
provided to serve as text prompt, which the language model can continue.
Indices can be obtained using Blip2Processor. See Blip2Processor.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary of the language model. Only relevant in case an
encoder-decoder language model (like T5) is used.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details. What are decoder input IDs?
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
Only relevant in case an encoder-decoder language model (like T5) is used.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
vision_outputs (BaseModelOutputWithPooling or tuple of torch.FloatTensor)
The vision model outputs. If return_dict=True, the output is a BaseModelOutputWithPooling that
contains the image features, the pooled image features and the hidden states if
output_hidden_states=True.
The Blip2Model forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
import torch
from PIL import Image
import requests
from transformers import Blip2Processor, Blip2Model
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2Model.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16)
model.to(device)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt").to(device, torch.float16)
qformer_outputs = model.get_qformer_features(**inputs)
Blip2ForConditionalGeneration
class transformers.Blip2ForConditionalGeneration
<
source
>
(
config: Blip2Config
)
Parameters
config (Blip2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
BLIP-2 Model for generating text given an image and an optional text prompt. The model consists of a vision
encoder, Querying Transformer (Q-Former) and a language model.
One can optionally pass input_ids to the model, which serve as a text prompt, to make the language model continue
the prompt. Otherwise, the language model starts generating text from the [BOS] (beginning-of-sequence) token.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
<
source
>
(
pixel_values: FloatTensor
input_ids: FloatTensor
attention_mask: typing.Optional[torch.LongTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.LongTensor] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.blip_2.modeling_blip_2.Blip2ForConditionalGenerationModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using Blip2Processor. See Blip2Processor.__call__() for
details.
input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of input sequence tokens in the vocabulary of the language model. Input tokens can optionally be
provided to serve as text prompt, which the language model can continue.
Indices can be obtained using Blip2Processor. See Blip2Processor.__call__() for details.
What are input IDs?
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary of the language model. Only relevant in case an
encoder-decoder language model (like T5) is used.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details. What are decoder input IDs?
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
Only relevant in case an encoder-decoder language model (like T5) is used.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.blip_2.modeling_blip_2.Blip2ForConditionalGenerationModelOutput or tuple(torch.FloatTensor)
A transformers.models.blip_2.modeling_blip_2.Blip2ForConditionalGenerationModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Blip2Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss from the language model.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head of the language model.
vision_outputs (BaseModelOutputWithPooling) — Outputs of the vision encoder.
qformer_outputs (BaseModelOutputWithPoolingAndCrossAttentions) — Outputs of the Q-Former (Querying Transformer).
language_model_outputs (CausalLMOutputWithPast or Seq2SeqLMOutput) — Outputs of the language model.
The Blip2ForConditionalGeneration forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Image captioning (without providing a text prompt):
from PIL import Image
import requests
from transformers import Blip2Processor, Blip2ForConditionalGeneration
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
... "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
... )
model.to(device)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt").to(device, torch.float16)
generated_ids = model.generate(**inputs)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(generated_text)
two cats laying on a couch
Visual question answering (prompt = question):
from PIL import Image
import requests
from transformers import Blip2Processor, Blip2ForConditionalGeneration
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
... "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
... )
model.to(device)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "Question: how many cats are there? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, torch.float16)
generated_ids = model.generate(**inputs)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(generated_text)
two
generate
<
source
>
(
pixel_values: FloatTensor
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
**generate_kwargs
)
→
captions (list)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Input images to be processed.
input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
The sequence used as a prompt for the generation.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices
Returns
captions (list)
A list of strings of length batch_size * num_captions.
Overrides the generate function to be able to use the model as a conditional generator.
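Any extra keyword arguments are forwarded to the underlying language model's generate call. A minimal sketch, reusing the checkpoint and image from the examples above and adding standard generation arguments such as max_new_tokens (illustrative values):
from PIL import Image
import requests
import torch
from transformers import Blip2Processor, Blip2ForConditionalGeneration
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16)
model.to(device)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, text="Question: how many cats are there? Answer:", return_tensors="pt").to(device, torch.float16)
# max_new_tokens and num_beams are passed through **generate_kwargs to the language model
generated_ids = model.generate(**inputs, max_new_tokens=20, num_beams=5)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())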
RoFormer
Overview
The RoFormer model was proposed in RoFormer: Enhanced Transformer with Rotary Position Embedding by Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen and Yunfeng Liu.
The abstract from the paper is the following:
Position encoding in transformer architecture provides supervision for dependency modeling between elements at
different positions in the sequence. We investigate various methods to encode positional information in
transformer-based language models and propose a novel implementation named Rotary Position Embedding (RoPE). The
proposed RoPE encodes absolute positional information with a rotation matrix and naturally incorporates explicit relative
position dependency in the self-attention formulation. Notably, RoPE comes with valuable properties such as the flexibility of
being expanded to any sequence length, decaying inter-token dependency with increasing relative distances, and the
capability of equipping the linear self-attention with relative position encoding. As a result, the enhanced
transformer with rotary position embedding, or RoFormer, achieves superior performance in tasks with long texts. We
release the theoretical analysis along with some preliminary experiment results on Chinese data. The ongoing
experiments on English benchmarks will soon be updated.
Tips:
RoFormer is a BERT-like autoencoding model with rotary position embeddings. Rotary position embeddings have shown
improved performance on classification tasks with long texts.
This model was contributed by junnyu. The original code can be found here.
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
RoFormerConfig
class transformers.RoFormerConfig
<
source
>
(
vocab_size = 50000
embedding_size = None
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 1536
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 0
rotary_value = False
use_cache = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50000) —
Vocabulary size of the RoFormer model. Defines the number of different tokens that can be represented by
the inputs_ids passed when calling RoFormerModel or TFRoFormerModel.
embedding_size (int, optional, defaults to None) —
Dimensionality of the encoder layers and the pooler layer. Defaults to the hidden_size if not provided.
hidden_size (int, optional, defaults to 768) —
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 1536) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 1536).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling RoFormerModel or TFRoFormerModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
is_decoder (bool, optional, defaults to False) —
Whether the model is used as a decoder or not. If False, the model is used as an encoder.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if config.is_decoder=True.
rotary_value (bool, optional, defaults to False) —
Whether or not to apply rotary position embeddings on the value layer.
This is the configuration class to store the configuration of a RoFormerModel. It is used to instantiate a
RoFormer model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the RoFormer
junnyu/roformer_chinese_base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import RoFormerModel, RoFormerConfig
# Initializing a RoFormer junnyu/roformer_chinese_base style configuration
configuration = RoFormerConfig()
# Initializing a model from the junnyu/roformer_chinese_base style configuration
model = RoFormerModel(configuration)
# Accessing the model configuration
configuration = model.config
RoFormerTokenizer
class transformers.RoFormerTokenizer
<
source
>
(
vocab_file
do_lower_case = True
do_basic_tokenize = True
never_split = None
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Parameters
vocab_file (str) —
File containing the vocabulary.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
do_basic_tokenize (bool, optional, defaults to True) —
Whether or not to do basic tokenization before WordPiece.
never_split (Iterable, optional) —
Collection of tokens which will never be split during tokenization. Only has an effect when
do_basic_tokenize=True
unk_token (str, optional, defaults to "[UNK]") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
sep_token (str, optional, defaults to "[SEP]") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
pad_token (str, optional, defaults to "[PAD]") —
The token used for padding, for example when batching sequences of different lengths.
cls_token (str, optional, defaults to "[CLS]") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
mask_token (str, optional, defaults to "[MASK]") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
tokenize_chinese_chars (bool, optional, defaults to True) —
Whether or not to tokenize Chinese characters.
This should likely be deactivated for Japanese (see this
issue).
strip_accents (bool, optional) —
Whether or not to strip all accents. If this option is not specified, then it will be determined by the
value for lowercase (as in the original BERT).
Construct a RoFormer tokenizer. Based on Rust Jieba.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Example:
from transformers import RoFormerTokenizer
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
tokenizer.tokenize("今天天气非常好。")
['今', '天', '天', '气', '非常', '好', '。']
build_inputs_with_special_tokens
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A RoFormer sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
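A short sketch of the two formats, using a hypothetical sentence pair encoded without special tokens so the method can add them:
from transformers import RoFormerTokenizer
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
ids_a = tokenizer.encode("今天天气非常好。", add_special_tokens=False)
ids_b = tokenizer.encode("明天有小雨。", add_special_tokens=False)
single = tokenizer.build_inputs_with_special_tokens(ids_a)  # [CLS] A [SEP]
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)  # [CLS] A [SEP] B [SEP]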
get_special_tokens_mask
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
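Continuing the sketch above (reusing tokenizer, ids_a and ids_b), the returned mask marks the positions where [CLS] and [SEP] would be inserted:
mask = tokenizer.get_special_tokens_mask(ids_a, ids_b)
# 1 at the [CLS]/[SEP] positions of the built pair, 0 elsewhere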
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A RoFormer
sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s).
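Continuing the same sketch, the token type IDs line up with the diagram above:
tokenizer.create_token_type_ids_from_sequences(ids_a)  # all 0s: [CLS] A [SEP]
tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)  # 0s for [CLS] A [SEP], 1s for B [SEP]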
save_vocabulary
<
source
>
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
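A hedged sketch of its use: the method writes the vocabulary file into save_directory (optionally prefixed) and returns the paths of the written files; for a full round-trip, save_pretrained is usually the more convenient entry point.
import tempfile
from transformers import RoFormerTokenizer
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
with tempfile.TemporaryDirectory() as tmp_dir:
    vocab_files = tokenizer.save_vocabulary(tmp_dir, filename_prefix="roformer")
    print(vocab_files)  # tuple containing the path of the saved vocabulary file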
RoFormerTokenizerFast
class transformers.RoFormerTokenizerFast
<
source
>
(
vocab_file = None
tokenizer_file = None
do_lower_case = True
unk_token = '[UNK]'
sep_token = '[SEP]'
pad_token = '[PAD]'
cls_token = '[CLS]'
mask_token = '[MASK]'
tokenize_chinese_chars = True
strip_accents = None
**kwargs
)
Construct a “fast” RoFormer tokenizer (backed by HuggingFace’s tokenizers library).
RoFormerTokenizerFast is almost identical to BertTokenizerFast and runs end-to-end tokenization:
punctuation splitting and wordpiece. There are some differences between them when tokenizing Chinese.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Example:
from transformers import RoFormerTokenizerFast
tokenizer = RoFormerTokenizerFast.from_pretrained("junnyu/roformer_chinese_base")
tokenizer.tokenize("今天天气非常好。")
['今', '天', '天', '气', '非常', '好', '。']
build_inputs_with_special_tokens
<
source
>
(
token_ids_0
token_ids_1 = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A RoFormer sequence has the following format:
single sequence: [CLS] X [SEP]
pair of sequences: [CLS] A [SEP] B [SEP]
RoFormerModel
class transformers.RoFormerModel
<
source
>
(
config
)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare RoFormer Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
cross-attention is added between the self-attention layers, following the architecture described in Attention is
all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
To behave as a decoder, the model needs to be initialized with the is_decoder argument of the configuration set
to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and
add_cross_attention set to True; encoder_hidden_states is then expected as an input to the forward pass.
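A minimal sketch of that decoder setup, built from a randomly initialized configuration (is_decoder and add_cross_attention are generic PretrainedConfig arguments; the token IDs and encoder states below are dummy values):
import torch
from transformers import RoFormerConfig, RoFormerModel
# Decoder-style RoFormer with cross-attention layers added between the self-attention layers
config = RoFormerConfig(is_decoder=True, add_cross_attention=True)
decoder = RoFormerModel(config)
input_ids = torch.tensor([[101, 102]])  # hypothetical token IDs
encoder_hidden_states = torch.randn(1, 4, config.hidden_size)  # dummy encoder output
outputs = decoder(input_ids=input_ids, encoder_hidden_states=encoder_hidden_states, use_cache=True)
past_key_values = outputs.past_key_values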
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoFormerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The RoFormerModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RoFormerModel
import torch
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = RoFormerModel.from_pretrained("junnyu/roformer_chinese_base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
RoFormerForCausalLM
class transformers.RoFormerForCausalLM
<
source
>
(
config
)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoFormer Model with a language modeling head on top for CLM fine-tuning.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
the model is configured as a decoder.
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
[-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are
ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoFormerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The RoFormerForCausalLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RoFormerForCausalLM, RoFormerConfig
import torch
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
config = RoFormerConfig.from_pretrained("junnyu/roformer_chinese_base")
config.is_decoder = True
model = RoFormerForCausalLM.from_pretrained("junnyu/roformer_chinese_base", config=config)
inputs = tokenizer("今天天气非常好。", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
RoFormerForMaskedLM
class transformers.RoFormerForMaskedLM
<
source
>
(
config
)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoFormer Model with a language modeling head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
encoder_hidden_states: typing.Optional[torch.FloatTensor] = None
encoder_attention_mask: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoFormerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RoFormerForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RoFormerForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_base")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
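The predicted id computed above can be decoded back into text, and the loss returned with the labels is a scalar tensor. A short, hedged continuation of the example (not part of the original snippet):
# decode the token predicted at the [MASK] position and read out the masked-LM loss
print(tokenizer.decode(predicted_token_id))
print(outputs.loss.item())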
RoFormerForSequenceClassification
class transformers.RoFormerForSequenceClassification
(
config
)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoFormer Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss). If
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoFormerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RoFormerForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, RoFormerForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = RoFormerForSequenceClassification.from_pretrained("junnyu/roformer_chinese_base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = RoFormerForSequenceClassification.from_pretrained("junnyu/roformer_chinese_base", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
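The integer prediction can be mapped back to a label name through the model configuration. A brief, hedged continuation of the example (junnyu/roformer_chinese_base ships no fine-tuned classification head, so the id2label entries are the generic LABEL_0/LABEL_1 placeholders):
# look up the label name for the predicted class id
print(model.config.id2label[predicted_class_id])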
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, RoFormerForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = RoFormerForSequenceClassification.from_pretrained("junnyu/roformer_chinese_base", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = RoFormerForSequenceClassification.from_pretrained(
    "junnyu/roformer_chinese_base", num_labels=num_labels, problem_type="multi_label_classification"
)
labels = torch.sum(
    torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
).to(torch.float)
loss = model(**inputs, labels=labels).loss
RoFormerForMultipleChoice
class transformers.RoFormerForMultipleChoice
(
config
)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoFormer Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoFormerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RoFormerForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RoFormerForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = RoFormerForMultipleChoice.from_pretrained("junnyu/roformer_chinese_base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
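The highest-scoring choice can be read directly from the logits. A minimal, hedged continuation of the example (the multiple-choice head is randomly initialized here, so the selection is not meaningful before fine-tuning):
# index of the choice with the highest score; logits has shape (batch_size, num_choices)
predicted_choice = int(logits.argmax(dim=-1)[0])
print([choice0, choice1][predicted_choice])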
RoFormerForTokenClassification
class transformers.RoFormerForTokenClassification
(
config
)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoFormer Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoFormerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RoFormerForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RoFormerForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = RoFormerForTokenClassification.from_pretrained("junnyu/roformer_chinese_base")
inputs = tokenizer(
    "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
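To inspect the predictions token by token, the input ids can be converted back to tokens and paired with the predicted labels. A hedged continuation of the example (the classification head of the base checkpoint is untrained, so the labels are placeholders):
# pair each sub-token with its predicted class name
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, label in zip(tokens, predicted_tokens_classes):
    print(token, label)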
RoFormerForQuestionAnswering
class transformers.RoFormerForQuestionAnswering
(
config
)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoFormer Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoFormerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The RoFormerForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, RoFormerForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = RoFormerForQuestionAnswering.from_pretrained("junnyu/roformer_chinese_base")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
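The selected span can be decoded back into text. A short, hedged continuation of the example (the span-classification head of the base checkpoint is untrained, so the extracted span is arbitrary until the model is fine-tuned on a QA dataset):
# decode the predicted answer span
print(tokenizer.decode(predict_answer_tokens, skip_special_tokens=True))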
TFRoFormerModel
class transformers.TFRoFormerModel
(
*args
**kwargs
)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare RoFormer Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
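As an illustration of the three conventions above, here is a small hedged sketch; it assumes input_ids, attention_mask and token_type_ids tensors have already been produced by the tokenizer and that model is an instantiated TFRoFormerModel:
# 1. a single tensor with input_ids only
outputs = model(input_ids)
# 2. a list of tensors, in the order given in the docstring
outputs = model([input_ids, attention_mask, token_type_ids])
# 3. a dictionary keyed by the input names
outputs = model({"input_ids": input_ids, "token_type_ids": token_type_ids})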
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], each example of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RoFormerConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a
Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence
prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input; you are often better off
averaging or pooling the sequence of hidden-states over the whole input sequence.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRoFormerModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRoFormerModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = TFRoFormerModel.from_pretrained("junnyu/roformer_chinese_base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
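As mentioned in the return description, the pooler output is often not the best sentence representation; a common alternative is to mean-pool the last hidden states using the attention mask. A hedged sketch continuing the example (not part of the original snippet):
# masked mean pooling over the sequence dimension as an alternative sentence embedding
mask = tf.cast(inputs["attention_mask"], last_hidden_states.dtype)[:, :, tf.newaxis]
sentence_embedding = tf.reduce_sum(last_hidden_states * mask, axis=1) / tf.reduce_sum(mask, axis=1)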
TFRoFormerForMaskedLM
class transformers.TFRoFormerForMaskedLM
(
*args
**kwargs
)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoFormer Model with a language modeling head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], each example of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RoFormerConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRoFormerForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRoFormerForMaskedLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = TFRoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_base")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf")
logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)
predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
# mask labels of non-[MASK] tokens
labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
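As in the PyTorch example, the predicted id can be decoded back into text. A brief, hedged continuation (not part of the original snippet):
# decode the prediction at the [MASK] position
print(tokenizer.decode(predicted_token_id))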
TFRoFormerForCausalLM
class transformers.TFRoFormerForCausalLM
(
*args
**kwargs
)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoFormer Model with a language modeling head on top for CLM fine-tuning.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFCausalLMOutput or tuple(tf.Tensor)
Returns
transformers.modeling_tf_outputs.TFCausalLMOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFCausalLMOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RoFormerConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional):
Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.vocab_size - 1].
Example:
from transformers import AutoTokenizer, TFRoFormerForCausalLM
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = TFRoFormerForCausalLM.from_pretrained("junnyu/roformer_chinese_base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
logits = outputs.logits
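The TF causal LM also works with the generic TF generate() API. A minimal, hedged sketch reusing the objects from the example (note that, unlike the PyTorch example above, config.is_decoder is not set here, so this is purely illustrative):
# greedy generation with the TensorFlow generation API
generated_ids = model.generate(inputs["input_ids"], max_new_tokens=20)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))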
TFRoFormerForSequenceClassification
class transformers.TFRoFormerForSequenceClassification
(
*args
**kwargs
)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoFormer Model transformer with a sequence classification/regression head on top e.g., for GLUE tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call
(
input_ids: TFModelInputType | None = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
labels: np.ndarray | tf.Tensor | None = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], each example of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss). If
config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RoFormerConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRoFormerForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRoFormerForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = TFRoFormerForSequenceClassification.from_pretrained("junnyu/roformer_chinese_base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFRoFormerForSequenceClassification.from_pretrained("junnyu/roformer_chinese_base", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss
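If you want the label string rather than its index, you can look it up in the model configuration. A minimal sketch continuing the example above (for this base checkpoint the classification head is randomly initialized, so the label is only a placeholder such as LABEL_0):
# Map the predicted class index to its label string (placeholder labels for an untrained head)
predicted_label = model.config.id2label[predicted_class_id]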
TFRoFormerForMultipleChoice
class transformers.TFRoFormerForMultipleChoice(*args, **kwargs)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoFormer Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
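For instance, assuming input_ids and attention_mask tensors from the tokenizer are already available, the formats described above can be used interchangeably. A minimal sketch (model, input_ids and attention_mask are assumed to be defined):
outputs = model(input_ids=input_ids, attention_mask=attention_mask)  # keyword arguments
outputs = model([input_ids, attention_mask])  # list in the first positional argument
outputs = model({"input_ids": input_ids, "attention_mask": attention_mask})  # dict in the first positional argument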
call(
    input_ids: TFModelInputType | None = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    labels: np.ndarray | tf.Tensor | None = None,
    training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray]; each example must have the shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices]
where num_choices is the size of the second dimension of the input tensors. (See input_ids above)
Returns
transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RoFormerConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, num_choices)) — Classification scores (before SoftMax). num_choices is the second dimension of the input tensors (see input_ids above).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRoFormerForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRoFormerForMultipleChoice
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = TFRoFormerForMultipleChoice.from_pretrained("junnyu/roformer_chinese_base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
outputs = model(inputs) # batch size is 1
# the linear classifier still needs to be trained
logits = outputs.logits
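To turn the logits into a prediction, you could take the argmax over the choices. A minimal sketch continuing the example above (the multiple-choice head of this base checkpoint is untrained, so the result is only illustrative):
predicted_choice = int(tf.math.argmax(logits, axis=-1)[0])  # index of the highest-scoring choice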
TFRoFormerForTokenClassification
class transformers.TFRoFormerForTokenClassification(*args, **kwargs)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoFormer Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
    input_ids: TFModelInputType | None = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    labels: np.ndarray | tf.Tensor | None = None,
    training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray]; each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RoFormerConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss.
logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRoFormerForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRoFormerForTokenClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = TFRoFormerForTokenClassification.from_pretrained("junnyu/roformer_chinese_base")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )
logits = model(**inputs).logits
predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
labels = predicted_token_class_ids
loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
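If you plan to fine-tune on your own tag set, you could instantiate the head with a task-specific number of labels, mirroring the sequence classification example above. A sketch with an illustrative value for num_labels:
num_labels = 9  # e.g. the number of NER tags in your dataset (illustrative value)
model = TFRoFormerForTokenClassification.from_pretrained("junnyu/roformer_chinese_base", num_labels=num_labels)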
TFRoFormerForQuestionAnswering
class transformers.TFRoFormerForQuestionAnswering(*args, **kwargs)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
RoFormer Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear
layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
call(
    input_ids: TFModelInputType | None = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    start_positions: np.ndarray | tf.Tensor | None = None,
    end_positions: np.ndarray | tf.Tensor | None = None,
    training: Optional[bool] = False
) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
Parameters
input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray]; each example must have the shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
start_positions (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (tf.Tensor or np.ndarray of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (RoFormerConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFRoFormerForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFRoFormerForQuestionAnswering
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = TFRoFormerForQuestionAnswering.from_pretrained("junnyu/roformer_chinese_base")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
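# Optionally decode the predicted span back into a string with the tokenizer
predicted_answer = tokenizer.decode(predict_answer_tokens)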
# target is "nice puppet"
target_start_index = tf.constant([14])
target_end_index = tf.constant([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = tf.math.reduce_mean(outputs.loss)
FlaxRoFormerModel
class transformers.FlaxRoFormerModel
<
source
>
(
config: RoFormerConfig
input_shape: typing.Tuple = (1, 1)
seed: int = 0
dtype: dtype = <class 'jax.numpy.float32'>
_do_init: bool = True
**kwargs
)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
The bare RoFormer Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__(
    input_ids,
    attention_mask = None,
    token_type_ids = None,
    head_mask = None,
    params: dict = None,
    dropout_rng: PRNGKey = None,
    train: bool = False,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None
) → transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoFormerConfig) and inputs.
last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRoFormerPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRoFormerModel
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = FlaxRoFormerModel.from_pretrained("junnyu/roformer_chinese_base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
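As described for the dtype argument above, the computation can be run in half precision, and to_fp16()/to_bf16() can additionally cast the parameters. A minimal sketch assuming a bfloat16-capable device:
import jax.numpy as jnp
from transformers import FlaxRoFormerModel
# Run the forward pass in bfloat16; the parameters stay in float32 unless cast explicitly
model = FlaxRoFormerModel.from_pretrained("junnyu/roformer_chinese_base", dtype=jnp.bfloat16)
# Optionally cast the parameters themselves to bfloat16 as well
model.params = model.to_bf16(model.params)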
FlaxRoFormerForMaskedLM
class transformers.FlaxRoFormerForMaskedLM(config: RoFormerConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, **kwargs)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
RoFormer Model with a language modeling head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__(
    input_ids,
    attention_mask = None,
    token_type_ids = None,
    head_mask = None,
    params: dict = None,
    dropout_rng: PRNGKey = None,
    train: bool = False,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None
) → transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoFormerConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRoFormerPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRoFormerForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = FlaxRoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_base")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
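To inspect the prediction at the masked position, you could take the argmax of the logits at the [MASK] index. A minimal sketch continuing the example above:
import jax.numpy as jnp
# Locate the [MASK] token and pick the highest-scoring vocabulary id at that position
mask_index = int(jnp.argmax(inputs["input_ids"][0] == tokenizer.mask_token_id))
predicted_token_id = int(jnp.argmax(logits[0, mask_index]))
predicted_token = tokenizer.decode([predicted_token_id])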
FlaxRoFormerForSequenceClassification
class transformers.FlaxRoFormerForSequenceClassification(config: RoFormerConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, **kwargs)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
RoFormer Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__(
    input_ids,
    attention_mask = None,
    token_type_ids = None,
    head_mask = None,
    params: dict = None,
    dropout_rng: PRNGKey = None,
    train: bool = False,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None
) → transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoFormerConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRoFormerPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRoFormerForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = FlaxRoFormerForSequenceClassification.from_pretrained("junnyu/roformer_chinese_base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
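To get a class prediction from the logits, a minimal sketch continuing the example above (the classification head of this base checkpoint is untrained, so the label is only a placeholder):
predicted_class_id = int(logits.argmax(axis=-1)[0])
predicted_label = model.config.id2label[predicted_class_id]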
FlaxRoFormerForMultipleChoice
class transformers.FlaxRoFormerForMultipleChoice(config: RoFormerConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, **kwargs)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
RoFormer Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a
softmax) e.g. for RocStories/SWAG tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__(
    input_ids,
    attention_mask = None,
    token_type_ids = None,
    head_mask = None,
    params: dict = None,
    dropout_rng: PRNGKey = None,
    train: bool = False,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None
) → transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoFormerConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, num_choices)) — Classification scores (before SoftMax). num_choices is the second dimension of the input tensors (see input_ids above).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRoFormerPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRoFormerForMultipleChoice
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = FlaxRoFormerForMultipleChoice.from_pretrained("junnyu/roformer_chinese_base")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="jax", padding=True)
outputs = model(**{k: v[None, :] for k, v in encoding.items()})
logits = outputs.logits
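To pick the predicted choice, you could take the argmax over the num_choices dimension. A minimal sketch continuing the example above (the head of this base checkpoint is untrained):
predicted_choice = int(logits.argmax(axis=-1)[0])  # index of the highest-scoring choice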
FlaxRoFormerForTokenClassification
class transformers.FlaxRoFormerForTokenClassification(config: RoFormerConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, **kwargs)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
RoFormer Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__(
    input_ids,
    attention_mask = None,
    token_type_ids = None,
    head_mask = None,
    params: dict = None,
    dropout_rng: PRNGKey = None,
    train: bool = False,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None
) → transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoFormerConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRoFormerPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, FlaxRoFormerForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = FlaxRoFormerForTokenClassification.from_pretrained("junnyu/roformer_chinese_base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
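To map the logits to per-token labels, a minimal sketch continuing the example above (the token-classification head of this base checkpoint is untrained, so the labels are placeholders):
predicted_token_class_ids = logits.argmax(axis=-1)[0]
predicted_tokens_classes = [model.config.id2label[int(t)] for t in predicted_token_class_ids]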
FlaxRoFormerForQuestionAnswering
class transformers.FlaxRoFormerForQuestionAnswering(config: RoFormerConfig, input_shape: typing.Tuple = (1, 1), seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, **kwargs)
Parameters
config (RoFormerConfig) — Model configuration class with all the parameters of the
model. Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified all the computation will be performed with the given dtype.
Note that this only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
RoFormer Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading, saving and converting weights from PyTorch models).
This model is also a Flax Linen flax.linen.Module
subclass. Use it as a regular Flax Linen Module and refer to the Flax documentation for all matters related to
general usage and behavior.
Finally, this model supports inherent JAX features such as:
Just-In-Time (JIT) compilation
Automatic Differentiation
Vectorization
Parallelization
__call__(
    input_ids,
    attention_mask = None,
    token_type_ids = None,
    head_mask = None,
    params: dict = None,
    dropout_rng: PRNGKey = None,
    train: bool = False,
    output_attentions: typing.Optional[bool] = None,
    output_hidden_states: typing.Optional[bool] = None,
    return_dict: typing.Optional[bool] = None
) → transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(jnp.ndarray)
Parameters
input_ids (numpy.ndarray of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.call() for details.
What are input IDs?
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or a tuple of
jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (RoFormerConfig) and inputs.
start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The FlaxRoFormerPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
Copied
from transformers import AutoTokenizer, FlaxRoFormerForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = FlaxRoFormerForQuestionAnswering.from_pretrained("junnyu/roformer_chinese_base")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="jax")
outputs = model(**inputs)
start_scores = outputs.start_logits
end_scores = outputs.end_logits
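As a brief continuation of the example above (not part of the original snippet), the most likely span can be recovered by taking the argmax of the start and end scores and decoding the corresponding input tokens; with this base checkpoint the question-answering head is randomly initialized, so the decoded span is illustrative only:
import jax.numpy as jnp

# Most likely start/end positions for the first (and only) example in the batch
start_index = int(jnp.argmax(start_scores, axis=-1)[0])
end_index = int(jnp.argmax(end_scores, axis=-1)[0])
# Decode the tokens between the predicted boundaries back into text
answer = tokenizer.decode(inputs["input_ids"][0, start_index : end_index + 1])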
SpeechT5
Overview
The SpeechT5 model was proposed in SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
The abstract from the paper is the following:
Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder. Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder. Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.
This model was contributed by Matthijs. The original code can be found here.
SpeechT5Config
class transformers.SpeechT5Config
<
source
>
(
vocab_size = 81
hidden_size = 768
encoder_layers = 12
encoder_attention_heads = 12
encoder_ffn_dim = 3072
encoder_layerdrop = 0.1
decoder_layers = 6
decoder_ffn_dim = 3072
decoder_attention_heads = 12
decoder_layerdrop = 0.1
hidden_act = 'gelu'
positional_dropout = 0.1
hidden_dropout = 0.1
attention_dropout = 0.1
activation_dropout = 0.1
initializer_range = 0.02
layer_norm_eps = 1e-05
scale_embedding = False
feat_extract_norm = 'group'
feat_proj_dropout = 0.0
feat_extract_activation = 'gelu'
conv_dim = (512, 512, 512, 512, 512, 512, 512)
conv_stride = (5, 2, 2, 2, 2, 2, 2)
conv_kernel = (10, 3, 3, 3, 3, 2, 2)
conv_bias = False
num_conv_pos_embeddings = 128
num_conv_pos_embedding_groups = 16
apply_spec_augment = True
mask_time_prob = 0.05
mask_time_length = 10
mask_time_min_masks = 2
mask_feature_prob = 0.0
mask_feature_length = 10
mask_feature_min_masks = 0
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
decoder_start_token_id = 2
num_mel_bins = 80
speech_decoder_prenet_layers = 2
speech_decoder_prenet_units = 256
speech_decoder_prenet_dropout = 0.5
speaker_embedding_dim = 512
speech_decoder_postnet_layers = 5
speech_decoder_postnet_units = 256
speech_decoder_postnet_kernel = 5
speech_decoder_postnet_dropout = 0.5
reduction_factor = 2
max_speech_positions = 4000
max_text_positions = 450
encoder_max_relative_position = 160
use_guided_attention_loss = True
guided_attention_loss_num_heads = 2
guided_attention_loss_sigma = 0.4
guided_attention_loss_scale = 10.0
use_cache = True
is_encoder_decoder = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 81) —
Vocabulary size of the SpeechT5 model. Defines the number of different tokens that can be represented by
the input_ids passed to the forward method of SpeechT5Model.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
encoder_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
encoder_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
encoder_ffn_dim (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
encoder_layerdrop (float, optional, defaults to 0.1) —
The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
decoder_layers (int, optional, defaults to 6) —
Number of hidden layers in the Transformer decoder.
decoder_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer decoder.
decoder_layerdrop (float, optional, defaults to 0.1) —
The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
for more details.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
positional_dropout (float, optional, defaults to 0.1) —
The dropout probability for the text position encoding layers.
hidden_dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.1) —
The dropout ratio for activations inside the fully connected layer.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
scale_embedding (bool, optional, defaults to False) —
Scale embeddings by dividing by sqrt(d_model).
feat_extract_norm (str, optional, defaults to "group") —
The norm to be applied to 1D convolutional layers in the speech encoder pre-net. One of "group" for group
normalization of only the first 1D convolutional layer or "layer" for layer normalization of all 1D
convolutional layers.
feat_proj_dropout (float, optional, defaults to 0.0) —
The dropout probability for output of the speech encoder pre-net.
feat_extract_activation (str, optional, defaults to "gelu") — The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
conv_dim (Tuple[int] or List[int], optional, defaults to (512, 512, 512, 512, 512, 512, 512)) —
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
speech encoder pre-net. The length of conv_dim defines the number of 1D convolutional layers.
conv_stride (Tuple[int] or List[int], optional, defaults to (5, 2, 2, 2, 2, 2, 2)) —
A tuple of integers defining the stride of each 1D convolutional layer in the speech encoder pre-net. The
length of conv_stride defines the number of convolutional layers and has to match the length of
conv_dim.
conv_kernel (Tuple[int] or List[int], optional, defaults to (10, 3, 3, 3, 3, 2, 2)) —
A tuple of integers defining the kernel size of each 1D convolutional layer in the speech encoder pre-net.
The length of conv_kernel defines the number of convolutional layers and has to match the length of
conv_dim.
conv_bias (bool, optional, defaults to False) —
Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (int, optional, defaults to 128) —
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
embeddings layer.
num_conv_pos_embedding_groups (int, optional, defaults to 16) —
Number of groups of 1D convolutional positional embeddings layer.
apply_spec_augment (bool, optional, defaults to True) —
Whether to apply SpecAugment data augmentation to the outputs of the speech encoder pre-net. For
reference see SpecAugment: A Simple Data Augmentation Method for Automatic Speech
Recognition.
mask_time_prob (float, optional, defaults to 0.05) —
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
procedure generates mask_time_prob * len(time_axis) / mask_time_length independent masks over the axis. If
reasoning from the probability of each feature vector to be chosen as the start of the vector span to be
masked, mask_time_prob should be prob_vector_start * mask_time_length. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_time_length (int, optional, defaults to 10) —
Length of vector span along the time axis.
mask_time_min_masks (int, optional, defaults to 2) —
The minimum number of masks of length mask_time_length generated along the time axis, each time step,
irrespective of mask_time_prob. Only relevant if mask_time_prob * len(time_axis) / mask_time_length <
mask_time_min_masks.
mask_feature_prob (float, optional, defaults to 0.0) —
Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
masking procedure generates mask_feature_prob * len(feature_axis) / mask_feature_length independent masks over
the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector
span to be masked, mask_feature_prob should be prob_vector_start * mask_feature_length. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is
True.
mask_feature_length (int, optional, defaults to 10) —
Length of vector span along the feature axis.
mask_feature_min_masks (int, optional, defaults to 0) —
The minimum number of masks of length mask_feature_length generated along the feature axis, each time
step, irrespective of mask_feature_prob. Only relevant if
mask_feature_prob * len(feature_axis) / mask_feature_length < mask_feature_min_masks.
num_mel_bins (int, optional, defaults to 80) —
Number of mel features used per input feature. Used by the speech decoder pre-net. Should correspond to
the value used in the SpeechT5Processor class.
speech_decoder_prenet_layers (int, optional, defaults to 2) —
Number of layers in the speech decoder pre-net.
speech_decoder_prenet_units (int, optional, defaults to 256) —
Dimensionality of the layers in the speech decoder pre-net.
speech_decoder_prenet_dropout (float, optional, defaults to 0.5) —
The dropout probability for the speech decoder pre-net layers.
speaker_embedding_dim (int, optional, defaults to 512) —
Dimensionality of the XVector embedding vectors.
speech_decoder_postnet_layers (int, optional, defaults to 5) —
Number of layers in the speech decoder post-net.
speech_decoder_postnet_units (int, optional, defaults to 256) —
Dimensionality of the layers in the speech decoder post-net.
speech_decoder_postnet_kernel (int, optional, defaults to 5) —
Kernel size of the convolutional filters in the speech decoder post-net.
speech_decoder_postnet_dropout (float, optional, defaults to 0.5) —
The dropout probability for the speech decoder post-net layers.
reduction_factor (int, optional, defaults to 2) —
Spectrogram length reduction factor for the speech decoder inputs.
max_speech_positions (int, optional, defaults to 4000) —
The maximum sequence length of speech features that this model might ever be used with.
max_text_positions (int, optional, defaults to 450) —
The maximum sequence length of text features that this model might ever be used with.
encoder_max_relative_position (int, optional, defaults to 160) —
Maximum distance for relative position embedding in the encoder.
use_guided_attention_loss (bool, optional, defaults to True) —
Whether to apply guided attention loss while training the TTS model.
guided_attention_loss_num_heads (int, optional, defaults to 2) —
Number of attention heads the guided attention loss will be applied to. Use -1 to apply this loss to all
attention heads.
guided_attention_loss_sigma (float, optional, defaults to 0.4) —
Standard deviation for guided attention loss.
guided_attention_loss_scale (float, optional, defaults to 10.0) —
Scaling coefficient for guided attention loss (also known as lambda).
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a SpeechT5Model. It is used to instantiate a
SpeechT5 model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the SpeechT5
microsoft/speecht5_asr architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
Copied
from transformers import SpeechT5Model, SpeechT5Config
# Initializing a "microsoft/speecht5_asr" style configuration
configuration = SpeechT5Config()
# Initializing a model (with random weights) from the "microsoft/speecht5_asr" style configuration
model = SpeechT5Model(configuration)
# Accessing the model configuration
configuration = model.config
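Individual defaults can also be overridden when constructing the configuration; the values below are arbitrary and only illustrate the pattern:
from transformers import SpeechT5Config, SpeechT5Model

# Override a few defaults (illustrative values, not a recommended setup)
custom_configuration = SpeechT5Config(vocab_size=100, reduction_factor=4, use_guided_attention_loss=False)
custom_model = SpeechT5Model(custom_configuration)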
SpeechT5HifiGanConfig
class transformers.SpeechT5HifiGanConfig
<
source
>
(
model_in_dim = 80
sampling_rate = 16000
upsample_initial_channel = 512
upsample_rates = [4, 4, 4, 4]
upsample_kernel_sizes = [8, 8, 8, 8]
resblock_kernel_sizes = [3, 7, 11]
resblock_dilation_sizes = [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
initializer_range = 0.01
leaky_relu_slope = 0.1
normalize_before = True
**kwargs
)
Parameters
model_in_dim (int, optional, defaults to 80) —
The number of frequency bins in the input log-mel spectrogram.
sampling_rate (int, optional, defaults to 16000) —
The sampling rate at which the output audio will be generated, expressed in hertz (Hz).
upsample_initial_channel (int, optional, defaults to 512) —
The number of input channels into the upsampling network.
upsample_rates (Tuple[int] or List[int], optional, defaults to [4, 4, 4, 4]) —
A tuple of integers defining the stride of each 1D convolutional layer in the upsampling network. The
length of upsample_rates defines the number of convolutional layers and has to match the length of
upsample_kernel_sizes.
upsample_kernel_sizes (Tuple[int] or List[int], optional, defaults to [8, 8, 8, 8]) —
A tuple of integers defining the kernel size of each 1D convolutional layer in the upsampling network. The
length of upsample_kernel_sizes defines the number of convolutional layers and has to match the length of
upsample_rates.
resblock_kernel_sizes (Tuple[int] or List[int], optional, defaults to [3, 7, 11]) —
A tuple of integers defining the kernel sizes of the 1D convolutional layers in the multi-receptive field
fusion (MRF) module.
resblock_dilation_sizes (Tuple[Tuple[int]] or List[List[int]], optional, defaults to [[1, 3, 5], [1, 3, 5], [1, 3, 5]]) —
A nested tuple of integers defining the dilation rates of the dilated 1D convolutional layers in the
multi-receptive field fusion (MRF) module.
initializer_range (float, optional, defaults to 0.01) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
leaky_relu_slope (float, optional, defaults to 0.1) —
The angle of the negative slope used by the leaky ReLU activation.
normalize_before (bool, optional, defaults to True) —
Whether or not to normalize the spectrogram before vocoding using the vocoder’s learned mean and variance.
This is the configuration class to store the configuration of a SpeechT5HifiGan model. It is used to instantiate
a SpeechT5 HiFi-GAN vocoder model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the SpeechT5
microsoft/speecht5_hifigan architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
Copied
from transformers import SpeechT5HifiGan, SpeechT5HifiGanConfig
# Initializing a "microsoft/speecht5_hifigan" style configuration
configuration = SpeechT5HifiGanConfig()
# Initializing a model (with random weights) from the "microsoft/speecht5_hifigan" style configuration
model = SpeechT5HifiGan(configuration)
# Accessing the model configuration
configuration = model.config
SpeechT5Tokenizer
class transformers.SpeechT5Tokenizer
<
source
>
(
vocab_file
bos_token = '<s>'
eos_token = '</s>'
unk_token = '<unk>'
pad_token = '<pad>'
sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
**kwargs
)
Parameters
vocab_file (str) —
SentencePiece file (generally has a .spm extension) that
contains the vocabulary necessary to instantiate a tokenizer.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
bos_token (str, optional, defaults to "<s>") —
The begin of sequence token.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
sp_model_kwargs (dict, optional) —
Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for
SentencePiece can be used, among other things,
to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assuming that nbest_size is infinite and samples from the all hypothesis (lattice)
using forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
BPE-dropout.
sp_model (SentencePieceProcessor) —
The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Construct a SpeechT5 tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
__call__
<
source
>
(
text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None
text_pair: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None
text_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None
text_pair_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
stride: int = 0
is_split_into_words: bool = False
pad_to_multiple_of: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
**kwargs
)
→
BatchEncoding
Parameters
text (str, List[str], List[List[str]], optional) —
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_pair (str, List[str], List[List[str]], optional) —
The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings
(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set
is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_target (str, List[str], List[List[str]], optional) —
The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a
list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized),
you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
text_pair_target (str, List[str], List[List[str]], optional) —
The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or a
list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized),
you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences).
add_special_tokens (bool, optional, defaults to True) —
Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) —
Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or
to the maximum acceptable input length for the model if that argument is not provided. This will
truncate token by token, removing a token from the longest sequence in the pair if a pair of
sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional) —
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length
is required by one of the truncation/padding parameters. If the model has no specific maximum input
length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) —
If set to a number along with max_length, the overflowing tokens returned when
return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence
returned to provide some overlap between truncated and overflowing sequences. The value of this
argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) —
Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the
tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace)
which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value. Requires padding to be activated.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
>= 7.5 (Volta).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
return_token_type_ids (bool, optional) —
Whether to return token type IDs. If left to the default, will return the token type IDs according to
the specific tokenizer’s default, defined by the return_outputs attribute.
What are token type IDs?
return_attention_mask (bool, optional) —
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific tokenizer’s default, defined by the return_outputs attribute.
What are attention masks?
return_overflowing_tokens (bool, optional, defaults to False) —
Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batch
of pairs) is provided with truncation_strategy = longest_first or True, an error is raised instead
of returning overflowing tokens.
return_special_tokens_mask (bool, optional, defaults to False) —
Whether or not to return special tokens mask information.
return_offsets_mapping (bool, optional, defaults to False) —
Whether or not to return (char_start, char_end) for each token.
This is only available on fast tokenizers inheriting from PreTrainedTokenizerFast, if using
Python’s tokenizer, this method will raise NotImplementedError.
return_length (bool, optional, defaults to False) —
Whether or not to return the lengths of the encoded inputs.
verbose (bool, optional, defaults to True) —
Whether or not to print more information and warnings.
**kwargs — passed to the self.tokenize() method
Returns
BatchEncoding
A BatchEncoding with the following fields:
input_ids — List of token ids to be fed to a model.
What are input IDs?
token_type_ids — List of token type ids to be fed to a model (when return_token_type_ids=True or
if “token_type_ids” is in self.model_input_names).
What are token type IDs?
attention_mask — List of indices specifying which tokens should be attended to by the model (when
return_attention_mask=True or if “attention_mask” is in self.model_input_names).
What are attention masks?
overflowing_tokens — List of overflowing tokens sequences (when a max_length is specified and
return_overflowing_tokens=True).
num_truncated_tokens — Number of tokens truncated (when a max_length is specified and
return_overflowing_tokens=True).
special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifying
regular sequence tokens (when add_special_tokens=True and return_special_tokens_mask=True).
length — The length of the inputs (when return_length=True)
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
sequences.
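A minimal usage sketch; the checkpoint name microsoft/speecht5_tts is one public SpeechT5 checkpoint that ships this tokenizer and is used here only for illustration:
from transformers import SpeechT5Tokenizer

tokenizer = SpeechT5Tokenizer.from_pretrained("microsoft/speecht5_tts")
# Tokenize a single sentence and return PyTorch tensors
encoded = tokenizer("Hello, how are you?", return_tensors="pt")
print(encoded.input_ids.shape)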
save_vocabulary
<
source
>
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
decode
<
source
>
(
token_ids: typing.Union[int, typing.List[int], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')]
skip_special_tokens: bool = False
clean_up_tokenization_spaces: bool = None
**kwargs
)
→
str
Parameters
token_ids (Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]) —
List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) —
Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional) —
Whether or not to clean up the tokenization spaces. If None, will default to
self.clean_up_tokenization_spaces.
kwargs (additional keyword arguments, optional) —
Will be passed to the underlying model specific decode method.
Returns
str
The decoded sentence.
Converts a sequence of ids in a string, using the tokenizer and vocabulary with options to remove special
tokens and clean up tokenization spaces.
Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).
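For illustration, encoding a string and decoding the resulting ids roughly round-trips the text (a sketch, assuming the microsoft/speecht5_tts checkpoint; SentencePiece may normalize whitespace or unknown characters):
from transformers import SpeechT5Tokenizer

tokenizer = SpeechT5Tokenizer.from_pretrained("microsoft/speecht5_tts")
ids = tokenizer("Hello world").input_ids
# Drop special tokens such as </s> when converting back to text
print(tokenizer.decode(ids, skip_special_tokens=True))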
batch_decode
<
source
>
(
sequences: typing.Union[typing.List[int], typing.List[typing.List[int]], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')]
skip_special_tokens: bool = False
clean_up_tokenization_spaces: bool = None
**kwargs
)
→
List[str]
Parameters
sequences (Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]) —
List of tokenized input ids. Can be obtained using the __call__ method.
skip_special_tokens (bool, optional, defaults to False) —
Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (bool, optional) —
Whether or not to clean up the tokenization spaces. If None, will default to
self.clean_up_tokenization_spaces.
kwargs (additional keyword arguments, optional) —
Will be passed to the underlying model specific decode method.
Returns
List[str]
The list of decoded sentences.
Convert a list of lists of token ids into a list of strings by calling decode.
SpeechT5FeatureExtractor
class transformers.SpeechT5FeatureExtractor
<
source
>
(
feature_size: int = 1
sampling_rate: int = 16000
padding_value: float = 0.0
do_normalize: bool = False
num_mel_bins: int = 80
hop_length: int = 16
win_length: int = 64
win_function: str = 'hann_window'
frame_signal_scale: float = 1.0
fmin: float = 80
fmax: float = 7600
mel_floor: float = 1e-10
reduction_factor: int = 2
return_attention_mask: bool = True
**kwargs
)
Parameters
feature_size (int, optional, defaults to 1) —
The feature dimension of the extracted features.
sampling_rate (int, optional, defaults to 16000) —
The sampling rate at which the audio files should be digitalized expressed in hertz (Hz).
padding_value (float, optional, defaults to 0.0) —
The value that is used to fill the padding values.
do_normalize (bool, optional, defaults to False) —
Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly
improve the performance for some models.
num_mel_bins (int, optional, defaults to 80) —
The number of mel-frequency bins in the extracted spectrogram features.
hop_length (int, optional, defaults to 16) —
Number of ms between windows. Otherwise referred to as “shift” in many papers.
win_length (int, optional, defaults to 64) —
Number of ms per window.
win_function (str, optional, defaults to "hann_window") —
Name for the window function used for windowing, must be accessible via torch.{win_function}
frame_signal_scale (float, optional, defaults to 1.0) —
Constant multiplied in creating the frames before applying DFT. This argument is deprecated.
fmin (float, optional, defaults to 80) —
Minimum mel frequency in Hz.
fmax (float, optional, defaults to 7600) —
Maximum mel frequency in Hz.
mel_floor (float, optional, defaults to 1e-10) —
Minimum value of mel frequency banks.
reduction_factor (int, optional, defaults to 2) —
Spectrogram length reduction factor. This argument is deprecated.
return_attention_mask (bool, optional, defaults to True) —
Whether or not call() should return attention_mask.
Constructs a SpeechT5 feature extractor.
This class can pre-process a raw speech signal by (optionally) normalizing to zero-mean unit-variance, for use by
the SpeechT5 speech encoder prenet.
This class can also extract log-mel filter bank features from raw speech, for use by the SpeechT5 speech decoder
prenet.
This feature extractor inherits from SequenceFeatureExtractor which contains
most of the main methods. Users should refer to this superclass for more information regarding those methods.
__call__
<
source
>
(
audio: typing.Union[numpy.ndarray, typing.List[float], typing.List[numpy.ndarray], typing.List[typing.List[float]], NoneType] = None
audio_target: typing.Union[numpy.ndarray, typing.List[float], typing.List[numpy.ndarray], typing.List[typing.List[float]], NoneType] = None
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
max_length: typing.Optional[int] = None
truncation: bool = False
pad_to_multiple_of: typing.Optional[int] = None
return_attention_mask: typing.Optional[bool] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
sampling_rate: typing.Optional[int] = None
**kwargs
)
Parameters
audio (np.ndarray, List[float], List[np.ndarray], List[List[float]], optional) —
The sequence or batch of sequences to be processed. Each sequence can be a numpy array, a list of float
values, a list of numpy arrays or a list of list of float values. This outputs waveform features. Must
be mono channel audio, not stereo, i.e. single float per timestep.
audio_target (np.ndarray, List[float], List[np.ndarray], List[List[float]], optional) —
The sequence or batch of sequences to be processed as targets. Each sequence can be a numpy array, a
list of float values, a list of numpy arrays or a list of list of float values. This outputs log-mel
spectrogram features.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Select a strategy to pad the returned sequences (according to the model’s padding side and padding
index) among:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
max_length (int, optional) —
Maximum length of the returned list and optionally padding length (see above).
truncation (bool) —
Activates truncation to cut input sequences longer than max_length to max_length.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
>= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
return_attention_mask (bool, optional) —
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific feature_extractor’s default.
What are attention masks?
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
sampling_rate (int, optional) —
The sampling rate at which the audio or audio_target input was sampled. It is strongly recommended
to pass sampling_rate at the forward call to prevent silent errors.
Main method to featurize and prepare for the model one or several sequence(s).
Pass in a value for audio to extract waveform features. Pass in a value for audio_target to extract log-mel
spectrogram features.
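The sketch below (not part of the original documentation) exercises both modes with random audio standing in for real 16 kHz speech; with the default settings, each call is expected to return its features under input_values:
import numpy as np
from transformers import SpeechT5FeatureExtractor

feature_extractor = SpeechT5FeatureExtractor()  # default settings; a pretrained checkpoint could be loaded instead
waveform = np.random.randn(16000).astype(np.float32)  # one second of fake mono audio at 16 kHz

# Waveform features for the speech encoder prenet
inputs = feature_extractor(audio=waveform, sampling_rate=16000, return_tensors="pt")
print(inputs.input_values.shape)

# Log-mel spectrogram features for the speech decoder prenet
targets = feature_extractor(audio_target=waveform, sampling_rate=16000, return_tensors="pt")
print(targets.input_values.shape)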
SpeechT5Processor
class transformers.SpeechT5Processor
<
source
>
(
feature_extractor
tokenizer
)
Parameters
feature_extractor (SpeechT5FeatureExtractor) —
An instance of SpeechT5FeatureExtractor. The feature extractor is a required input.
tokenizer (SpeechT5Tokenizer) —
An instance of SpeechT5Tokenizer. The tokenizer is a required input.
Constructs a SpeechT5 processor which wraps a feature extractor and a tokenizer into a single processor.
SpeechT5Processor offers all the functionalities of SpeechT5FeatureExtractor and SpeechT5Tokenizer. See
the docstring of call() and decode() for more information.
__call__
<
source
>
(
*args
**kwargs
)
Processes audio and text input, as well as audio and text targets.
You can process audio by using the argument audio, or process audio targets by using the argument
audio_target. This forwards the arguments to SpeechT5FeatureExtractor’s
call().
You can process text by using the argument text, or process text labels by using the argument text_target.
This forwards the arguments to SpeechT5Tokenizer’s call().
Valid input combinations are:
text only
audio only
text_target only
audio_target only
text and audio_target
audio and audio_target
text and text_target
audio and text_target
Please refer to the docstring of the above two methods for more information.
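A hedged sketch of the text-plus-spectrogram-target combination used for TTS; the checkpoint name microsoft/speecht5_tts and the random waveform are placeholders for illustration:
import numpy as np
from transformers import SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
waveform = np.random.randn(16000).astype(np.float32)  # stand-in for real 16 kHz speech

# Tokenized text goes to input_ids, the log-mel spectrogram target goes to labels
inputs = processor(text="Hello world", audio_target=waveform, sampling_rate=16000, return_tensors="pt")
print(list(inputs.keys()))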
pad
<
source
>
(
*args
**kwargs
)
Collates the audio and text inputs, as well as their targets, into a padded batch.
Audio inputs are padded by SpeechT5FeatureExtractor’s pad(). Text inputs are padded
by SpeechT5Tokenizer’s pad().
Valid input combinations are:
input_ids only
input_values only
labels only, either log-mel spectrograms or text tokens
input_ids and log-mel spectrogram labels
input_values and text labels
Please refer to the docstring of the above two methods for more information.
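A minimal collation sketch in the spirit of the combinations above, assuming per-example dicts that already contain tokenized input_ids and a log-mel spectrogram under labels (the names and helper follow a common TTS fine-tuning convention and are assumptions, not part of the library API):
def collate_tts_batch(processor, features):
    # features: list of dicts with "input_ids" (token ids) and "labels" (a log-mel spectrogram array)
    input_ids = [{"input_ids": feature["input_ids"]} for feature in features]
    label_features = [{"input_values": feature["labels"]} for feature in features]
    # Text inputs are padded by the tokenizer, spectrogram labels by the feature extractor
    return processor.pad(input_ids=input_ids, labels=label_features, return_tensors="pt")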
from_pretrained
<
source
>
(
pretrained_model_name_or_path: typing.Union[str, os.PathLike]
cache_dir: typing.Union[str, os.PathLike, NoneType] = None
force_download: bool = False
local_files_only: bool = False
token: typing.Union[bool, str, NoneType] = None
revision: str = 'main'
**kwargs
)
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
This can be either:
a string, the model id of a pretrained feature_extractor hosted inside a model repo on
huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or
namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
a path to a directory containing a feature extractor file saved using the
save_pretrained() method, e.g., ./my_model_directory/.
a path or url to a saved feature extractor JSON file, e.g.,
./my_model_directory/preprocessor_config.json.
**kwargs —
Additional keyword arguments passed along to both
from_pretrained() and
~tokenization_utils_base.PreTrainedTokenizer.from_pretrained.
Instantiate a processor associated with a pretrained model.
This class method is simply calling the feature extractor
from_pretrained(), image processor
ImageProcessingMixin and the tokenizer
~tokenization_utils_base.PreTrainedTokenizer.from_pretrained methods. Please refer to the docstrings of the
methods above for more information.
save_pretrained
<
source
>
(
save_directory
push_to_hub: bool = False
**kwargs
)
Parameters
save_directory (str or os.PathLike) —
Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will
be created if it does not exist).
push_to_hub (bool, optional, defaults to False) —
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with repo_id (will default to the name of save_directory in your
namespace).
kwargs (Dict[str, Any], optional) —
Additional keyword arguments passed along to the push_to_hub() method.
Saves the attributes of this processor (feature extractor, tokenizer…) in the specified directory so that it
can be reloaded using the from_pretrained() method.
This class method is simply calling save_pretrained() and
save_pretrained(). Please refer to the docstrings of the
methods above for more information.
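A brief save-and-reload sketch; the checkpoint name and target directory are arbitrary:
from transformers import SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
processor.save_pretrained("./speecht5_processor")  # writes the tokenizer files and preprocessor_config.json
reloaded_processor = SpeechT5Processor.from_pretrained("./speecht5_processor")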
batch_decode
<
source
>
(
*args
**kwargs
)
This method forwards all its arguments to SpeechT5Tokenizer’s batch_decode(). Please refer
to the docstring of this method for more information.
decode
<
source
>
(
*args
**kwargs
)
This method forwards all its arguments to SpeechT5Tokenizer’s decode(). Please refer to
the docstring of this method for more information.
SpeechT5Model
class transformers.SpeechT5Model
<
source
>
(
config: SpeechT5Config
encoder: typing.Optional[torch.nn.modules.module.Module] = None
decoder: typing.Optional[torch.nn.modules.module.Module] = None
)
Parameters
config (SpeechT5Config) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
encoder (SpeechT5EncoderWithSpeechPrenet or SpeechT5EncoderWithTextPrenet or None) —
The Transformer encoder module that applies the appropriate speech or text encoder prenet. If None,
SpeechT5EncoderWithoutPrenet will be used and the input_values are assumed to be hidden states.
decoder (SpeechT5DecoderWithSpeechPrenet or SpeechT5DecoderWithTextPrenet or None) —
The Transformer decoder module that applies the appropriate speech or text decoder prenet. If None,
SpeechT5DecoderWithoutPrenet will be used and the decoder_input_values are assumed to be hidden
states.
The bare SpeechT5 Encoder-Decoder Model outputting raw hidden-states without any specific pre- or post-nets.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
forward
<
source
>
(
input_values: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
decoder_input_values: typing.Optional[torch.Tensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
decoder_head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
use_cache: typing.Optional[bool] = None
speaker_embeddings: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should
not be passed to avoid degraded performance when doing batched inference. For such models
input_values should simply be padded with 0 and passed without attention_mask. Be aware that these
models also yield slightly different results depending on whether input_values is padded or not.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_values. Causal mask will
also be used by default.
If you want to change padding behavior, you should read SpeechT5Decoder._prepare_decoder_attention_mask
and modify to your needs. See diagram 1 in the paper for more
information on the default strategy.
head_mask (torch.FloatTensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_values (those
that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_values of shape (batch_size, sequence_length).
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_values you can choose to directly pass an embedded representation.
If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see
past_key_values). This is useful if you want more control over how to convert decoder_input_values indices
into associated vectors than the model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
input_values (torch.Tensor of shape (batch_size, sequence_length)) —
Depending on which encoder is being used, the input_values are either: float values of the input raw
speech waveform, or indices of input sequence tokens in the vocabulary, or hidden states.
decoder_input_values (torch.Tensor of shape (batch_size, target_sequence_length), optional) —
Depending on which decoder is being used, the decoder_input_values are either: float values of log-mel
filterbank features extracted from the raw speech waveform, or indices of decoder input sequence tokens in
the vocabulary, or hidden states.
speaker_embeddings (torch.FloatTensor of shape (batch_size, config.speaker_embedding_dim), optional) —
Tensor containing the speaker embeddings.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SpeechT5Config) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The SpeechT5Model forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
SpeechT5ForSpeechToText
class transformers.SpeechT5ForSpeechToText
<
source
>
(
config: SpeechT5Config
)
Parameters
config (SpeechT5Config) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
SpeechT5 Model with a speech encoder and a text decoder.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
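A sketch of speech-to-text inference with the public microsoft/speecht5_asr checkpoint; the random waveform is a placeholder for real 16 kHz speech, so the transcription will be meaningless, but the calls illustrate the flow:
import numpy as np
from transformers import SpeechT5Processor, SpeechT5ForSpeechToText

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_asr")
model = SpeechT5ForSpeechToText.from_pretrained("microsoft/speecht5_asr")

waveform = np.random.randn(16000).astype(np.float32)  # stand-in for one second of 16 kHz audio
inputs = processor(audio=waveform, sampling_rate=16000, return_tensors="pt")

# Autoregressively decode text tokens from the speech input, then turn them back into strings
predicted_ids = model.generate(**inputs, max_length=100)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True))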
forward
<
source
>
(
input_values: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
decoder_head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.LongTensor] = None
)
→
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should
not be passed to avoid degraded performance when doing batched inference. For such models
input_values should simply be padded with 0 and passed without attention_mask. Be aware that these
models also yield slightly different results depending on whether input_values is padded or not.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will
also be used by default.
If you want to change the padding behavior, you should read SpeechT5Decoder._prepare_decoder_attention_mask
and modify it to your needs. See diagram 1 in the paper for more
information on the default strategy.
head_mask (torch.FloatTensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those
that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is
used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is
useful if you want more control over how to convert decoder_input_ids indices into associated vectors
than the model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install
soundfile). To prepare the array into input_values, the SpeechT5Processor should be used for padding
and conversion into a tensor of type torch.FloatTensor. See SpeechT5Processor.__call__() for details.
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using SpeechT5Tokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
SpeechT5 uses the eos_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the language modeling loss. Indices should either be in [0, ..., config.vocab_size]
or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is
only computed for the tokens with labels in [0, ..., config.vocab_size].
Label indices can be obtained using SpeechT5Tokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SpeechT5Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The SpeechT5ForSpeechToText forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import SpeechT5Processor, SpeechT5ForSpeechToText
from datasets import load_dataset
dataset = load_dataset(
... "hf-internal-testing/librispeech_asr_demo", "clean", split="validation"
... ) # doctest: +IGNORE_RESULT
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_asr")
model = SpeechT5ForSpeechToText.from_pretrained("microsoft/speecht5_asr")
# audio file is decoded on the fly
inputs = processor(audio=dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
predicted_ids = model.generate(**inputs, max_length=100)
# transcribe speech
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
transcription[0]
'mister quilter is the apostle of the middle classes and we are glad to welcome his gospel'
inputs["labels"] = processor(text_target=dataset[0]["text"], return_tensors="pt").input_ids
# compute loss
loss = model(**inputs).loss
round(loss.item(), 2)
19.68
SpeechT5ForTextToSpeech
class transformers.SpeechT5ForTextToSpeech
(
config: SpeechT5Config
)
Parameters
config (SpeechT5Config) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
SpeechT5 Model with a text encoder and a speech decoder.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
decoder_input_values: typing.Optional[torch.FloatTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
decoder_head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
speaker_embeddings: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.FloatTensor] = None
stop_labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.Seq2SeqSpectrogramOutput or tuple(torch.FloatTensor)
Parameters
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should
not be passed to avoid degraded performance when doing batched inference. For such models
input_values should simply be padded with 0 and passed without attention_mask. Be aware that these
models also yield slightly different results depending on whether input_values is padded or not.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_values. Causal mask will
also be used by default.
If you want to change the padding behavior, you should read SpeechT5Decoder._prepare_decoder_attention_mask
and modify it to your needs. See diagram 1 in the paper for more
information on the default strategy.
head_mask (torch.FloatTensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_values (those
that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_values of shape (batch_size, sequence_length).
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_values you can choose to directly pass an embedded representation. If past_key_values is
used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is
useful if you want more control over how to convert decoder_input_values indices into associated vectors
than the model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. The batch_size should be 1 currently.
Indices can be obtained using SpeechT5Tokenizer. See encode() and
__call__() for details.
What are input IDs?
decoder_input_values (torch.FloatTensor of shape (batch_size, sequence_length, config.num_mel_bins)) —
Float values of input mel spectrogram.
SpeechT5 uses an all-zero spectrum as the starting token for decoder_input_values generation. If
past_key_values is used, optionally only the last decoder_input_values have to be input (see
past_key_values).
speaker_embeddings (torch.FloatTensor of shape (batch_size, config.speaker_embedding_dim), optional) —
Tensor containing the speaker embeddings.
labels (torch.FloatTensor of shape (batch_size, sequence_length, config.num_mel_bins), optional) —
Float values of target mel spectrogram. Timesteps set to -100.0 are ignored (masked) for the loss
computation. Spectrograms can be obtained using SpeechT5Processor. See SpeechT5Processor.__call__()
for details.
Returns
transformers.modeling_outputs.Seq2SeqSpectrogramOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqSpectrogramOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SpeechT5Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Spectrogram generation loss.
spectrogram (torch.FloatTensor of shape (batch_size, sequence_length, num_bins)) — The predicted spectrogram.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The SpeechT5ForTextToSpeech forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan, set_seed
import torch
processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
inputs = processor(text="Hello, my dog is cute", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512)) # or load xvectors from a file
set_seed(555) # make deterministic
# generate speech
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
speech.shape
torch.Size([15872])
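To listen to the result, the waveform tensor can be written to disk. The following is a minimal sketch rather than part of the original example; it assumes the soundfile library mentioned in the parameter descriptions is installed and that this checkpoint produces audio at a 16 kHz sampling rate.
import soundfile as sf
# `speech` is the 1-D waveform tensor produced by generate_speech in the example above;
# the 16000 Hz sampling rate is an assumption about this checkpoint
sf.write("tts_example.wav", speech.numpy(), samplerate=16000)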
generate_speech
(
input_ids: LongTensor
speaker_embeddings: typing.Optional[torch.FloatTensor] = None
threshold: float = 0.5
minlenratio: float = 0.0
maxlenratio: float = 20.0
vocoder: typing.Optional[torch.nn.modules.module.Module] = None
output_cross_attentions: bool = False
)
→
tuple(torch.FloatTensor) comprising various elements depending on the inputs
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. The batch_size should be 1 currently.
Indices can be obtained using SpeechT5Tokenizer. See encode() and
__call__() for details.
What are input IDs?
speaker_embeddings (torch.FloatTensor of shape (batch_size, config.speaker_embedding_dim), optional) —
Tensor containing the speaker embeddings.
threshold (float, optional, defaults to 0.5) —
The generated sequence ends when the predicted stop token probability exceeds this value.
minlenratio (float, optional, defaults to 0.0) —
Used to calculate the minimum required length for the output sequence.
maxlenratio (float, optional, defaults to 20.0) —
Used to calculate the maximum allowed length for the output sequence.
vocoder (nn.Module, optional, defaults to None) —
The vocoder that converts the mel spectrogram into a speech waveform. If None, the output is the mel
spectrogram.
output_cross_attentions (bool, optional, defaults to False) —
Whether or not to return the attentions tensors of the decoder’s cross-attention layers.
Returns
tuple(torch.FloatTensor) comprising various elements depending on the inputs
spectrogram (optional, returned when no vocoder is provided) torch.FloatTensor of shape
(output_sequence_length, config.num_mel_bins) — The predicted log-mel spectrogram.
waveform (optional, returned when a vocoder is provided) torch.FloatTensor of shape
(num_frames,) — The predicted speech waveform.
cross_attentions (optional, returned when output_cross_attentions is True) torch.FloatTensor
of shape (config.decoder_layers, config.decoder_attention_heads, output_sequence_length, input_sequence_length) — The outputs of the decoder’s cross-attention layers.
Converts a sequence of input tokens into a sequence of mel spectrograms, which are subsequently turned into a
speech waveform using a vocoder.
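As noted above, when no vocoder is passed the method returns the predicted log-mel spectrogram instead of a waveform. A minimal sketch of that two-step flow, reusing the checkpoints from the example above (the shapes in the comments mirror the return description and are not verified output):
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
inputs = processor(text="Hello, my dog is cute", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # or load xvectors from a file

# no vocoder passed: generate_speech returns the predicted log-mel spectrogram
spectrogram = model.generate_speech(inputs["input_ids"], speaker_embeddings)
# expected shape: (output_sequence_length, config.num_mel_bins)

# the spectrogram can then be converted to a waveform with a separately loaded vocoder
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
with torch.no_grad():
    waveform = vocoder(spectrogram)  # expected shape: (num_frames,)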
SpeechT5ForSpeechToSpeech
class transformers.SpeechT5ForSpeechToSpeech
(
config: SpeechT5Config
)
Parameters
config (SpeechT5Config) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
SpeechT5 Model with a speech encoder and a speech decoder.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_values: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.LongTensor] = None
decoder_input_values: typing.Optional[torch.FloatTensor] = None
decoder_attention_mask: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
decoder_head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
speaker_embeddings: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.FloatTensor] = None
stop_labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.Seq2SeqSpectrogramOutput or tuple(torch.FloatTensor)
Parameters
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, attention_mask should
not be passed to avoid degraded performance when doing batched inference. For such models
input_values should simply be padded with 0 and passed without attention_mask. Be aware that these
models also yield slightly different results depending on whether input_values is padded or not.
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_values. Causal mask will
also be used by default.
If you want to change the padding behavior, you should read SpeechT5Decoder._prepare_decoder_attention_mask
and modify it to your needs. See diagram 1 in the paper for more
information on the default strategy.
head_mask (torch.FloatTensor of shape (encoder_layers, encoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions).
last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional, is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) —
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_values (those
that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_values of shape (batch_size, sequence_length).
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_values you can choose to directly pass an embedded representation. If past_key_values is
used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is
useful if you want more control over how to convert decoder_input_values indices into associated vectors
than the model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install
soundfile). To prepare the array into input_values, the SpeechT5Processor should be used for padding
and conversion into a tensor of type torch.FloatTensor. See SpeechT5Processor.__call__() for details.
decoder_input_values (torch.FloatTensor of shape (batch_size, sequence_length, config.num_mel_bins)) —
Float values of input mel spectrogram.
SpeechT5 uses an all-zero spectrum as the starting token for decoder_input_values generation. If
past_key_values is used, optionally only the last decoder_input_values have to be input (see
past_key_values).
speaker_embeddings (torch.FloatTensor of shape (batch_size, config.speaker_embedding_dim), optional) —
Tensor containing the speaker embeddings.
labels (torch.FloatTensor of shape (batch_size, sequence_length, config.num_mel_bins), optional) —
Float values of target mel spectrogram. Spectrograms can be obtained using SpeechT5Processor. See
SpeechT5Processor.__call__() for details.
Returns
transformers.modeling_outputs.Seq2SeqSpectrogramOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqSpectrogramOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SpeechT5Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Spectrogram generation loss.
spectrogram (torch.FloatTensor of shape (batch_size, sequence_length, num_bins)) — The predicted spectrogram.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The SpeechT5ForSpeechToSpeech forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import SpeechT5Processor, SpeechT5ForSpeechToSpeech, SpeechT5HifiGan, set_seed
from datasets import load_dataset
import torch
dataset = load_dataset(
... "hf-internal-testing/librispeech_asr_demo", "clean", split="validation"
... ) # doctest: +IGNORE_RESULT
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_vc")
model = SpeechT5ForSpeechToSpeech.from_pretrained("microsoft/speecht5_vc")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
# audio file is decoded on the fly
inputs = processor(audio=dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512)) # or load xvectors from a file
set_seed(555) # make deterministic
# generate speech
speech = model.generate_speech(inputs["input_values"], speaker_embeddings, vocoder=vocoder)
speech.shape
torch.Size([77824])
generate_speech
(
input_values: FloatTensor
speaker_embeddings: typing.Optional[torch.FloatTensor] = None
threshold: float = 0.5
minlenratio: float = 0.0
maxlenratio: float = 20.0
vocoder: typing.Optional[torch.nn.modules.module.Module] = None
output_cross_attentions: bool = False
)
→
tuple(torch.FloatTensor) comprising various elements depending on the inputs
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. The batch_size should be 1 currently.
Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or
a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array
into input_values, the SpeechT5Processor should be used for padding and conversion into a tensor
of type torch.FloatTensor. See SpeechT5Processor.__call__() for details.
speaker_embeddings (torch.FloatTensor of shape (batch_size, config.speaker_embedding_dim), optional) —
Tensor containing the speaker embeddings.
threshold (float, optional, defaults to 0.5) —
The generated sequence ends when the predicted stop token probability exceeds this value.
minlenratio (float, optional, defaults to 0.0) —
Used to calculate the minimum required length for the output sequence.
maxlenratio (float, optional, defaults to 20.0) —
Used to calculate the maximum allowed length for the output sequence.
vocoder (nn.Module, optional, defaults to None) —
The vocoder that converts the mel spectrogram into a speech waveform. If None, the output is the mel
spectrogram.
output_cross_attentions (bool, optional, defaults to False) —
Whether or not to return the attentions tensors of the decoder’s cross-attention layers.
Returns
tuple(torch.FloatTensor) comprising various elements depending on the inputs
spectrogram (optional, returned when no vocoder is provided) torch.FloatTensor of shape
(output_sequence_length, config.num_mel_bins) — The predicted log-mel spectrogram.
waveform (optional, returned when a vocoder is provided) torch.FloatTensor of shape
(num_frames,) — The predicted speech waveform.
cross_attentions (optional, returned when output_cross_attentions is True) torch.FloatTensor
of shape (config.decoder_layers, config.decoder_attention_heads, output_sequence_length, input_sequence_length) — The outputs of the decoder’s cross-attention layers.
Converts a raw speech waveform into a sequence of mel spectrograms, which are subsequently turned back into a
speech waveform using a vocoder.
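Based on the return description above, the method can also expose the decoder's cross-attentions. A hedged sketch, reusing the model, inputs, speaker_embeddings and vocoder objects from the voice-conversion example above, and assuming the result unpacks into a (waveform, cross_attentions) pair when both a vocoder and output_cross_attentions=True are given:
# assumption: with a vocoder and output_cross_attentions=True the result unpacks
# into the waveform and the cross-attention tensor described in the Returns section
speech, cross_attentions = model.generate_speech(
    inputs["input_values"],
    speaker_embeddings,
    vocoder=vocoder,
    output_cross_attentions=True,
)
# documented shapes: speech -> (num_frames,),
# cross_attentions -> (decoder_layers, decoder_attention_heads, output_sequence_length, input_sequence_length)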
SpeechT5HifiGan
class transformers.SpeechT5HifiGan
(
config: SpeechT5HifiGanConfig
)
Parameters
config (SpeechT5HifiGanConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
HiFi-GAN vocoder.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
spectrogram: FloatTensor
)
→
torch.FloatTensor
Parameters
spectrogram (torch.FloatTensor) —
Tensor containing the log-mel spectrograms. Can be batched and of shape (batch_size, sequence_length, config.model_in_dim), or un-batched and of shape (sequence_length, config.model_in_dim).
Returns
torch.FloatTensor
Tensor containing the speech waveform. If the input spectrogram is batched, it will be of
shape (batch_size, num_frames). If un-batched, it will be of shape (num_frames,).
Converts a log-mel spectrogram into a speech waveform. Passing a batch of log-mel spectrograms returns a batch
of speech waveforms. Passing a single, un-batched log-mel spectrogram returns a single, un-batched speech
waveform.
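A short sketch of the batched vs. un-batched behavior described above, using random tensors purely to illustrate the expected shapes (real inputs would be log-mel spectrograms produced by a SpeechT5 model):
import torch
from transformers import SpeechT5HifiGan

vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
n_mels = vocoder.config.model_in_dim  # number of mel bins this checkpoint expects

with torch.no_grad():
    # un-batched input of shape (sequence_length, model_in_dim) -> waveform of shape (num_frames,)
    single = torch.randn(100, n_mels)  # random values, shapes only
    print(vocoder(single).shape)

    # batched input of shape (batch_size, sequence_length, model_in_dim) -> (batch_size, num_frames)
    batch = torch.randn(2, 100, n_mels)
    print(vocoder(batch).shape)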
UniSpeech-SAT
Overview
The UniSpeech-SAT model was proposed in UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware
Pre-Training by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen,
Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
The abstract from the paper is the following:
Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled
data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in
speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In
this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are
introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to
the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function.
Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where
additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed
methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves
state-of-the-art performance in universal representation learning, especially for speaker identification oriented
tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training
dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks.
Tips:
UniSpeechSat is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
Please use Wav2Vec2Processor for the feature extraction.
The UniSpeechSat model can be fine-tuned using connectionist temporal classification (CTC), so the model output has to be
decoded using Wav2Vec2CTCTokenizer (a short usage sketch follows below).
UniSpeechSat performs especially well on speaker verification, speaker identification, and speaker diarization tasks.
This model was contributed by patrickvonplaten. The authors’ code can be
found here.
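A short usage sketch for the CTC tip above. It is not from the original page; it assumes the microsoft/unispeech-sat-base-100h-libri-ft checkpoint referenced later in this section is CTC fine-tuned for English ASR and ships a matching Wav2Vec2Processor.
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, UniSpeechSatForCTC

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
sampling_rate = dataset.features["audio"].sampling_rate

processor = Wav2Vec2Processor.from_pretrained("microsoft/unispeech-sat-base-100h-libri-ft")
model = UniSpeechSatForCTC.from_pretrained("microsoft/unispeech-sat-base-100h-libri-ft")

inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# CTC decoding: greedy argmax over the vocabulary, then let the tokenizer collapse repeats and blanks
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)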
Documentation resources
Audio classification task guide
Automatic speech recognition task guide
UniSpeechSatConfig
class transformers.UniSpeechSatConfig
(
vocab_size = 32
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout = 0.1
activation_dropout = 0.1
attention_dropout = 0.1
feat_proj_dropout = 0.0
feat_quantizer_dropout = 0.0
final_dropout = 0.1
layerdrop = 0.1
initializer_range = 0.02
layer_norm_eps = 1e-05
feat_extract_norm = 'group'
feat_extract_activation = 'gelu'
conv_dim = (512, 512, 512, 512, 512, 512, 512)
conv_stride = (5, 2, 2, 2, 2, 2, 2)
conv_kernel = (10, 3, 3, 3, 3, 2, 2)
conv_bias = False
num_conv_pos_embeddings = 128
num_conv_pos_embedding_groups = 16
do_stable_layer_norm = False
apply_spec_augment = True
mask_time_prob = 0.05
mask_time_length = 10
mask_time_min_masks = 2
mask_feature_prob = 0.0
mask_feature_length = 10
mask_feature_min_masks = 0
num_codevectors_per_group = 320
num_codevector_groups = 2
contrastive_logits_temperature = 0.1
num_negatives = 100
codevector_dim = 256
proj_codevector_dim = 256
diversity_loss_weight = 0.1
ctc_loss_reduction = 'mean'
ctc_zero_infinity = False
use_weighted_layer_sum = False
classifier_proj_size = 256
tdnn_dim = (512, 512, 512, 512, 1500)
tdnn_kernel = (5, 3, 3, 1, 1)
tdnn_dilation = (1, 2, 3, 1, 1)
xvector_output_dim = 512
pad_token_id = 0
bos_token_id = 1
eos_token_id = 2
num_clusters = 504
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 32) —
Vocabulary size of the UniSpeechSat model. Defines the number of different tokens that can be represented
by the input_ids passed when calling UniSpeechSatModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
final_dropout (float, optional, defaults to 0.1) —
The dropout probability for the final projection layer of UniSpeechSatForCTC.
layerdrop (float, optional, defaults to 0.1) —
The LayerDrop probability. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more
details.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-05) —
The epsilon used by the layer normalization layers.
feat_extract_norm (str, optional, defaults to "group") —
The norm to be applied to 1D convolutional layers in feature encoder. One of "group" for group
normalization of only the first 1D convolutional layer or "layer" for layer normalization of all 1D
convolutional layers.
feat_proj_dropout (float, optional, defaults to 0.0) —
The dropout probability for output of the feature encoder.
feat_extract_activation (str, optional, defaults to "gelu") — The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
feat_quantizer_dropout (float, optional, defaults to 0.0) —
The dropout probability for quantized feature encoder states.
conv_dim (Tuple[int] or List[int], optional, defaults to (512, 512, 512, 512, 512, 512, 512)) —
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
feature encoder. The length of conv_dim defines the number of 1D convolutional layers.
conv_stride (Tuple[int] or List[int], optional, defaults to (5, 2, 2, 2, 2, 2, 2)) —
A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length
of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.
conv_kernel (Tuple[int] or List[int], optional, defaults to (10, 3, 3, 3, 3, 2, 2)) —
A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
length of conv_kernel defines the number of convolutional layers and has to match the length of
conv_dim.
conv_bias (bool, optional, defaults to False) —
Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (int, optional, defaults to 128) —
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
embeddings layer.
num_conv_pos_embedding_groups (int, optional, defaults to 16) —
Number of groups of 1D convolutional positional embeddings layer.
do_stable_layer_norm (bool, optional, defaults to False) —
Whether to apply stable layer norm architecture of the Transformer encoder. do_stable_layer_norm is True corresponds to applying layer norm before the attention layer, whereas do_stable_layer_norm is False corresponds to applying layer norm after the attention layer.
apply_spec_augment (bool, optional, defaults to True) —
Whether to apply SpecAugment data augmentation to the outputs of the feature encoder. For reference see
SpecAugment: A Simple Data Augmentation Method for Automatic Speech
Recognition.
mask_time_prob (float, optional, defaults to 0.05) —
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
procedure generates mask_time_prob * len(time_axis) / mask_time_length independent masks over the axis. If
reasoning from the probability of each feature vector to be chosen as the start of the vector span to be
masked, mask_time_prob should be prob_vector_start * mask_time_length. Note that overlap may decrease the
actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_time_length (int, optional, defaults to 10) —
Length of vector span along the time axis.
mask_time_min_masks (int, optional, defaults to 2) —
The minimum number of masks of length mask_time_length generated along the time axis, each time step,
irrespectively of mask_time_prob. Only relevant if mask_time_prob * len(time_axis) / mask_time_length <
mask_time_min_masks.
mask_feature_prob (float, optional, defaults to 0.0) —
Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
masking procedure generates mask_feature_prob * len(feature_axis) / mask_feature_length independent masks over
the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector
span to be masked, mask_feature_prob should be prob_vector_start * mask_feature_length. Note that overlap
may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_feature_length (int, optional, defaults to 10) —
Length of vector span along the feature axis.
mask_feature_min_masks (int, optional, defaults to 0) —
The minimum number of masks of length mask_feature_length generated along the feature axis, each time
step, irrespectively of mask_feature_prob. Only relevant if
mask_feature_prob * len(feature_axis) / mask_feature_length < mask_feature_min_masks.
num_codevectors_per_group (int, optional, defaults to 320) —
Number of entries in each quantization codebook (group).
num_codevector_groups (int, optional, defaults to 2) —
Number of codevector groups for product codevector quantization.
contrastive_logits_temperature (float, optional, defaults to 0.1) —
The temperature kappa in the contrastive loss.
feat_quantizer_dropout (float, optional, defaults to 0.0) —
The dropout probability for the output of the feature encoder that’s used by the quantizer.
num_negatives (int, optional, defaults to 100) —
Number of negative samples for the contrastive loss.
codevector_dim (int, optional, defaults to 256) —
Dimensionality of the quantized feature vectors.
proj_codevector_dim (int, optional, defaults to 256) —
Dimensionality of the final projection of both the quantized and the transformer features.
diversity_loss_weight (float, optional, defaults to 0.1) —
The weight of the codebook diversity loss component.
ctc_loss_reduction (str, optional, defaults to "mean") —
Specifies the reduction to apply to the output of torch.nn.CTCLoss. Only relevant when training an
instance of UniSpeechSatForCTC.
ctc_zero_infinity (bool, optional, defaults to False) —
Whether to zero infinite losses and the associated gradients of torch.nn.CTCLoss. Infinite losses mainly
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of UniSpeechSatForCTC.
use_weighted_layer_sum (bool, optional, defaults to False) —
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
instance of UniSpeechSatForSequenceClassification.
classifier_proj_size (int, optional, defaults to 256) —
Dimensionality of the projection before token mean-pooling for classification.
tdnn_dim (Tuple[int] or List[int], optional, defaults to (512, 512, 512, 512, 1500)) —
A tuple of integers defining the number of output channels of each 1D convolutional layer in the TDNN
module of the XVector model. The length of tdnn_dim defines the number of TDNN layers.
tdnn_kernel (Tuple[int] or List[int], optional, defaults to (5, 3, 3, 1, 1)) —
A tuple of integers defining the kernel size of each 1D convolutional layer in the TDNN module of the
XVector model. The length of tdnn_kernel has to match the length of tdnn_dim.
tdnn_dilation (Tuple[int] or List[int], optional, defaults to (1, 2, 3, 1, 1)) —
A tuple of integers defining the dilation factor of each 1D convolutional layer in TDNN module of the
XVector model. The length of tdnn_dilation has to match the length of tdnn_dim.
xvector_output_dim (int, optional, defaults to 512) —
Dimensionality of the XVector embedding vectors.
This is the configuration class to store the configuration of a UniSpeechSatModel. It is used to instantiate a
UniSpeechSat model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the UniSpeechSat
microsoft/unispeech-sat-base-100h-libri-ft
architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import UniSpeechSatModel, UniSpeechSatConfig
# Initializing a UniSpeechSat microsoft/unispeech-sat-base-100h-libri-ft style configuration
configuration = UniSpeechSatConfig()
# Initializing a model from the microsoft/unispeech-sat-base-100h-libri-ft style configuration
model = UniSpeechSatModel(configuration)
# Accessing the model configuration
configuration = model.config
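A small sketch of how the SpecAugment parameters documented above interact; the integer truncation is an assumption about how the mask count is rounded, and the sequence length is purely hypothetical.
from transformers import UniSpeechSatConfig

config = UniSpeechSatConfig(mask_time_prob=0.05, mask_time_length=10, mask_time_min_masks=2)

# expected number of time masks for a hypothetical feature sequence, following
# mask_time_prob * len(time_axis) / mask_time_length, floored at mask_time_min_masks
time_steps = 500  # hypothetical length after the feature encoder
num_masks = int(config.mask_time_prob * time_steps / config.mask_time_length)  # 0.05 * 500 / 10 = 2.5 -> 2
num_masks = max(num_masks, config.mask_time_min_masks)
print(num_masks)  # 2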
UniSpeechSat specific outputs
class transformers.models.unispeech_sat.modeling_unispeech_sat.UniSpeechSatForPreTrainingOutput
(
loss: typing.Optional[torch.FloatTensor] = None
logits: FloatTensor = None
projected_states: FloatTensor = None
projected_quantized_states: FloatTensor = None
codevector_perplexity: FloatTensor = None
hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None
)
Parameters
loss (optional, returned when model is in train mode, torch.FloatTensor of shape (1,)) —
Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official
paper.
projected_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) —
Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked
projected quantized states.
projected_quantized_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) —
Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive
target vectors for contrastive loss.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) —
Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) —
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
Output type of UniSpeechSatForPreTraining, with potential hidden states and attentions.
UniSpeechSatModel
class transformers.UniSpeechSatModel
(
config: UniSpeechSatConfig
)
Parameters
config (UniSpeechSatConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare UniSpeechSat Model transformer outputting raw hidden-states without any specific head on top.
UniSpeechSat was proposed in UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware
Pre-Training by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu,
Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward(input_values: typing.Optional[torch.Tensor], attention_mask: typing.Optional[torch.Tensor] = None, mask_time_indices: typing.Optional[torch.FloatTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None) → transformers.modeling_outputs.Wav2Vec2BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
microsoft/unispeech-sat-base-100h-libri-ft,
attention_mask should not be passed to avoid degraded performance when doing batched inference. For
such models input_values should simply be padded with 0 and passed without attention_mask. Be aware
that these models also yield slightly different results depending on whether input_values is padded or
not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Wav2Vec2BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Wav2Vec2BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (UniSpeechSatConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
extract_features (torch.FloatTensor of shape (batch_size, sequence_length, conv_dim[-1])) — Sequence of extracted feature vectors of the last convolutional layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The UniSpeechSatModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, UniSpeechSatModel
import torch
from datasets import load_dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("microsoft/unispeech-sat-base-100h-libri-ft")
model = UniSpeechSatModel.from_pretrained("microsoft/unispeech-sat-base-100h-libri-ft")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 292, 768]
UniSpeechSatForCTC
class transformers.UniSpeechSatForCTC(config, target_lang: typing.Optional[str] = None)
Parameters
config (UniSpeechSatConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
UniSpeechSat Model with a language modeling head on top for Connectionist Temporal Classification (CTC).
UniSpeechSat was proposed in UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward(input_values: typing.Optional[torch.Tensor], attention_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, labels: typing.Optional[torch.Tensor] = None) → transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
microsoft/unispeech-sat-base-100h-libri-ft,
attention_mask should not be passed to avoid degraded performance when doing batched inference. For
such models input_values should simply be padded with 0 and passed without attention_mask. Be aware
that these models also yield slightly different results depending on whether input_values is padded or
not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, target_length), optional) —
Labels for connectionist temporal classification. Note that target_length has to be smaller or equal to
the sequence length of the output logits. Indices are selected in [-100, 0, ..., config.vocab_size - 1].
All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (UniSpeechSatConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The UniSpeechSatForCTC forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, UniSpeechSatForCTC
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("microsoft/unispeech-sat-base-100h-libri-ft")
model = UniSpeechSatForCTC.from_pretrained("microsoft/unispeech-sat-base-100h-libri-ft")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe speech
transcription = processor.batch_decode(predicted_ids)
transcription[0]
'MISTER QUILDER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'
inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids
# compute loss
loss = model(**inputs).loss
round(loss.item(), 2)
39.88
UniSpeechSatForSequenceClassification
class transformers.UniSpeechSatForSequenceClassification(config)
Parameters
config (UniSpeechSatConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
UniSpeechSat Model with a sequence classification head on top (a linear layer over the pooled output) for tasks
like SUPERB Keyword Spotting.
UniSpeechSat was proposed in UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward(input_values: typing.Optional[torch.Tensor], attention_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, labels: typing.Optional[torch.Tensor] = None) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
microsoft/unispeech-sat-base-100h-libri-ft,
attention_mask should not be passed to avoid degraded performance when doing batched inference. For
such models input_values should simply be padded with 0 and passed without attention_mask. Be aware
that these models also yield slightly different results depending on whether input_values is padded or
not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (UniSpeechSatConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The UniSpeechSatForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, UniSpeechSatForSequenceClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/unispeech-sat-base-100h-libri-ft")
model = UniSpeechSatForSequenceClassification.from_pretrained("microsoft/unispeech-sat-base-100h-libri-ft")
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.argmax(logits, dim=-1).item()
predicted_label = model.config.id2label[predicted_class_ids]
# compute loss - target_label is e.g. "down"
target_label = model.config.id2label[0]
inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
loss = model(**inputs).loss
UniSpeechSatForAudioFrameClassification
class transformers.UniSpeechSatForAudioFrameClassification(config)
Parameters
config (UniSpeechSatConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
UniSpeech-SAT Model with a frame classification head on top for tasks like Speaker Diarization.
UniSpeechSat was proposed in UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward(input_values: typing.Optional[torch.Tensor], attention_mask: typing.Optional[torch.Tensor] = None, labels: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
microsoft/unispeech-sat-base-100h-libri-ft,
attention_mask should not be passed to avoid degraded performance when doing batched inference. For
such models input_values should simply be padded with 0 and passed without attention_mask. Be aware
that these models also yield slightly different results depending on whether input_values is padded or
not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (UniSpeechSatConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The UniSpeechSatForAudioFrameClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, UniSpeechSatForAudioFrameClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/unispeech-sat-base-plus-sd")
model = UniSpeechSatForAudioFrameClassification.from_pretrained("microsoft/unispeech-sat-base-plus-sd")
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt", sampling_rate=sampling_rate)
with torch.no_grad():
... logits = model(**inputs).logits
probabilities = torch.sigmoid(logits[0])
# labels is a one-hot array of shape (num_frames, num_speakers)
labels = (probabilities > 0.5).long()
labels[0].tolist()
[0, 0]
UniSpeechSatForXVector
class transformers.UniSpeechSatForXVector(config)
Parameters
config (UniSpeechSatConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
UniSpeech-SAT Model with an XVector feature extraction head on top for tasks like Speaker Verification.
UniSpeechSat was proposed in UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward(input_values: typing.Optional[torch.Tensor], attention_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, labels: typing.Optional[torch.Tensor] = None) → transformers.modeling_outputs.XVectorOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
microsoft/unispeech-sat-base-100h-libri-ft,
attention_mask should not be passed to avoid degraded performance when doing batched inference. For
such models input_values should simply be padded with 0 and passed without attention_mask. Be aware
that these models also yield slightly different results depending on whether input_values is padded or
not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.XVectorOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.XVectorOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (UniSpeechSatConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, config.xvector_output_dim)) — Classification hidden states before AMSoftmax.
embeddings (torch.FloatTensor of shape (batch_size, config.xvector_output_dim)) — Utterance embeddings used for vector similarity-based retrieval.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The UniSpeechSatForXVector forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, UniSpeechSatForXVector
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/unispeech-sat-base-plus-sv")
model = UniSpeechSatForXVector.from_pretrained("microsoft/unispeech-sat-base-plus-sv")
# audio file is decoded on the fly
inputs = feature_extractor(
... [d["array"] for d in dataset[:2]["audio"]], sampling_rate=sampling_rate, return_tensors="pt", padding=True
... )
with torch.no_grad():
... embeddings = model(**inputs).embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()
# the resulting embeddings can be used for cosine similarity-based retrieval
cosine_sim = torch.nn.CosineSimilarity(dim=-1)
similarity = cosine_sim(embeddings[0], embeddings[1])
threshold = 0.7 # the optimal threshold is dataset-dependent
if similarity < threshold:
... print("Speakers are not the same!")
round(similarity.item(), 2)
0.97
UniSpeechSatForPreTraining
class transformers.UniSpeechSatForPreTraining(config: UniSpeechSatConfig)
Parameters
config (UniSpeechSatConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
UniSpeechSat Model with a quantizer and VQ head on top.
UniSpeechSat was proposed in UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward(input_values: typing.Optional[torch.Tensor], attention_mask: typing.Optional[torch.Tensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None) → transformers.models.unispeech_sat.modeling_unispeech_sat.UniSpeechSatForPreTrainingOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as
microsoft/unispeech-sat-base-100h-libri-ft,
attention_mask should not be passed to avoid degraded performance when doing batched inference. For
such models input_values should simply be padded with 0 and passed without attention_mask. Be aware
that these models also yield slightly different results depending on whether input_values is padded or
not.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.unispeech_sat.modeling_unispeech_sat.UniSpeechSatForPreTrainingOutput or tuple(torch.FloatTensor)
A transformers.models.unispeech_sat.modeling_unispeech_sat.UniSpeechSatForPreTrainingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (UniSpeechSatConfig) and inputs.
loss (optional, returned when model is in train mode, torch.FloatTensor of shape (1,)) — Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official paper.
projected_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) — Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked
projected quantized states.
projected_quantized_states (torch.FloatTensor of shape (batch_size, sequence_length, config.proj_codevector_dim)) — Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive
target vectors for contrastive loss.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The UniSpeechSatForPreTraining forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoFeatureExtractor, UniSpeechSatForPreTraining
from transformers.models.unispeech_sat.modeling_unispeech_sat import _compute_mask_indices
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/unispeech-sat-base")
model = UniSpeechSatForPreTraining.from_pretrained("microsoft/unispeech-sat-base")
# TODO: Add full pretraining example
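Since the full pretraining example is still pending, here is a minimal, illustrative sketch (not the official pretraining recipe): it only prepares masked time indices with the imported _compute_mask_indices helper and runs a plain forward pass on unlabeled audio from the same demo dataset used elsewhere on this page. The mask probability and length below are arbitrary illustrative values.
from datasets import load_dataset

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
sampling_rate = dataset.features["audio"].sampling_rate
inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
# length of the feature sequence produced by the convolutional feature encoder
batch_size, raw_sequence_length = inputs.input_values.shape
sequence_length = int(model._get_feat_extract_output_lengths(raw_sequence_length))
# boolean array of shape (batch_size, sequence_length) marking which time steps would be masked;
# the full pretraining objective that consumes these indices is not shown here
mask_time_indices = _compute_mask_indices(shape=(batch_size, sequence_length), mask_prob=0.2, mask_length=2)
with torch.no_grad():
... outputs = model(**inputs)
# see UniSpeechSatForPreTrainingOutput above for the fields available on `outputs`
projected_states = outputs.projected_states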
FocalNet
Overview
The FocalNet model was proposed in Focal Modulation Networks by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao.
FocalNets completely replace self-attention (used in models like ViT and Swin) with a focal modulation mechanism for modeling token interactions in vision.
The authors claim that FocalNets outperform self-attention-based models with similar computational costs on the tasks of image classification, object detection, and segmentation.
The abstract from the paper is the following:
We propose focal modulation networks (FocalNets in short), where self-attention (SA) is completely replaced by a focal modulation mechanism for modeling token interactions in vision. Focal modulation comprises three components: (i) hierarchical contextualization, implemented using a stack of depth-wise convolutional layers, to encode visual contexts from short to long ranges, (ii) gated aggregation to selectively gather contexts for each query token based on its
content, and (iii) element-wise modulation or affine transformation to inject the aggregated context into the query. Extensive experiments show FocalNets outperform the state-of-the-art SA counterparts (e.g., Swin and Focal Transformers) with similar computational costs on the tasks of image classification, object detection, and segmentation. Specifically, FocalNets with tiny and base size achieve 82.3% and 83.9% top-1 accuracy on ImageNet-1K. After pretrained on ImageNet-22K in 224 resolution, it attains 86.5% and 87.3% top-1 accuracy when finetuned with resolution 224 and 384, respectively. When transferred to downstream tasks, FocalNets exhibit clear superiority. For object detection with Mask R-CNN, FocalNet base trained with 1× schedule outperforms the Swin counterpart by 2.1 points and already surpasses Swin trained with 3× schedule (49.0 vs. 48.5). For semantic segmentation with UPerNet, FocalNet base at single-scale outperforms Swin by 2.4, and beats Swin at multi-scale (50.5 vs. 49.7). Using large FocalNet and Mask2former, we achieve 58.5 mIoU for ADE20K semantic segmentation, and 57.9 PQ for COCO Panoptic Segmentation. Using huge FocalNet and DINO, we achieved 64.3 and 64.4 mAP on COCO minival and test-dev, respectively, establishing new SoTA on top of much larger attention-based models like Swinv2-G and BEIT-3.
Tips:
One can use the AutoImageProcessor class to prepare images for the model.
This model was contributed by nielsr.
The original code can be found here.
FocalNetConfig
class transformers.FocalNetConfig(image_size = 224, patch_size = 4, num_channels = 3, embed_dim = 96, use_conv_embed = False, hidden_sizes = [192, 384, 768, 768], depths = [2, 2, 6, 2], focal_levels = [2, 2, 2, 2], focal_windows = [3, 3, 3, 3], hidden_act = 'gelu', mlp_ratio = 4.0, hidden_dropout_prob = 0.0, drop_path_rate = 0.1, use_layerscale = False, layerscale_value = 0.0001, use_post_layernorm = False, use_post_layernorm_in_modulation = False, normalize_modulator = False, initializer_range = 0.02, layer_norm_eps = 1e-05, encoder_stride = 32, out_features = None, out_indices = None, **kwargs)
Parameters
image_size (int, optional, defaults to 224) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 4) —
The size (resolution) of each patch in the embeddings layer.
num_channels (int, optional, defaults to 3) —
The number of input channels.
embed_dim (int, optional, defaults to 96) —
Dimensionality of patch embedding.
use_conv_embed (bool, optional, defaults to False) —
Whether to use convolutional embedding. The authors noted that using convolutional embedding usually
improves performance, but it is not used by default.
hidden_sizes (List[int], optional, defaults to [192, 384, 768, 768]) —
Dimensionality (hidden size) at each stage.
depths (list(int), optional, defaults to [2, 2, 6, 2]) —
Depth (number of layers) of each stage in the encoder.
focal_levels (list(int), optional, defaults to [2, 2, 2, 2]) —
Number of focal levels in each layer of the respective stages in the encoder.
focal_windows (list(int), optional, defaults to [3, 3, 3, 3]) —
Focal window size in each layer of the respective stages in the encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder. If string, "gelu", "relu",
"selu" and "gelu_new" are supported.
mlp_ratio (float, optional, defaults to 4.0) —
Ratio of MLP hidden dimensionality to embedding dimensionality.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings and encoder.
drop_path_rate (float, optional, defaults to 0.1) —
Stochastic depth rate.
use_layerscale (bool, optional, defaults to False) —
Whether to use layer scale in the encoder.
layerscale_value (float, optional, defaults to 1e-4) —
The initial value of the layer scale.
use_post_layernorm (bool, optional, defaults to False) —
Whether to use post layer normalization in the encoder.
use_post_layernorm_in_modulation (bool, optional, defaults to False) —
Whether to use post layer normalization in the modulation layer.
normalize_modulator (bool, optional, defaults to False) —
Whether to normalize the modulator.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
encoder_stride (int, optional, defaults to 32) —
Factor to increase the spatial resolution by in the decoder head for masked image modeling.
out_features (List[str], optional) —
If used as backbone, list of features to output. Can be any of "stem", "stage1", "stage2", etc.
(depending on how many stages the model has). If unset and out_indices is set, will default to the
corresponding stages. If unset and out_indices is unset, will default to the last stage.
out_indices (List[int], optional) —
If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how
many stages the model has). If unset and out_features is set, will default to the corresponding stages.
If unset and out_features is unset, will default to the last stage.
This is the configuration class to store the configuration of a FocalNetModel. It is used to instantiate a
FocalNet model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the FocalNet
microsoft/focalnet-tiny architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import FocalNetConfig, FocalNetModel
# Initializing a FocalNet microsoft/focalnet-tiny style configuration
configuration = FocalNetConfig()
# Initializing a model (with random weights) from the microsoft/focalnet-tiny style configuration
model = FocalNetModel(configuration)
# Accessing the model configuration
configuration = model.config
FocalNetModel
class transformers.FocalNetModel(config, add_pooling_layer = True, use_mask_token = False)
Parameters
config (FocalNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare FocalNet Model outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward(pixel_values: typing.Optional[torch.FloatTensor] = None, bool_masked_pos: typing.Optional[torch.BoolTensor] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None) → transformers.models.focalnet.modeling_focalnet.FocalNetModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
AutoImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches)) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.models.focalnet.modeling_focalnet.FocalNetModelOutput or tuple(torch.FloatTensor)
A transformers.models.focalnet.modeling_focalnet.FocalNetModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FocalNetConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size), optional, returned when add_pooling_layer=True is passed) — Average pooling of the last layer hidden-state.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, hidden_size, height, width).
Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to
include the spatial dimensions.
The FocalNetModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, FocalNetModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("microsoft/focalnet-tiny")
model = FocalNetModel.from_pretrained("microsoft/focalnet-tiny")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 49, 768]
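As an optional continuation of the example above (an illustrative sketch, not part of the original snippet), passing output_hidden_states=True additionally returns the per-stage feature maps documented as reshaped_hidden_states, each of shape (batch_size, hidden_size, height, width):
with torch.no_grad():
... outputs = model(**inputs, output_hidden_states=True)
# tuple with one entry for the embeddings plus one per stage, reshaped to include the spatial dimensions
reshaped_hidden_states = outputs.reshaped_hidden_states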
FocalNetForMaskedImageModeling
class transformers.FocalNetForMaskedImageModeling(config)
Parameters
config (FocalNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
FocalNet Model with a decoder on top for masked image modeling.
This follows the same implementation as in SimMIM.
Note that we provide a script to pre-train this model on custom data in our examples
directory.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward(pixel_values: typing.Optional[torch.FloatTensor] = None, bool_masked_pos: typing.Optional[torch.BoolTensor] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None) → transformers.models.focalnet.modeling_focalnet.FocalNetMaskedImageModelingOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
AutoImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches)) —
Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
Returns
transformers.models.focalnet.modeling_focalnet.FocalNetMaskedImageModelingOutput or tuple(torch.FloatTensor)
A transformers.models.focalnet.modeling_focalnet.FocalNetMaskedImageModelingOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FocalNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when bool_masked_pos is provided) — Masked image modeling (MIM) loss.
reconstruction (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Reconstructed pixel values.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, hidden_size, height, width).
Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to
include the spatial dimensions.
The FocalNetForMaskedImageModeling forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, FocalNetConfig, FocalNetForMaskedImageModeling
import torch
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("microsoft/focalnet-base-simmim-window6-192")
config = FocalNetConfig()
model = FocalNetForMaskedImageModeling(config)
num_patches = (model.config.image_size // model.config.patch_size) ** 2
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
# create random boolean mask of shape (batch_size, num_patches)
bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()
outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss, reconstructed_pixel_values = outputs.loss, outputs.logits
list(reconstructed_pixel_values.shape)
[1, 3, 192, 192]
FocalNetForImageClassification
class transformers.FocalNetForImageClassification(config)
Parameters
config (FocalNetConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
FocalNet Model with an image classification head on top (a linear layer on top of the pooled output) e.g. for
ImageNet.
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward(pixel_values: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None) → transformers.models.focalnet.modeling_focalnet.FocalNetImageClassifierOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
AutoImageProcessor.__call__() for details.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.models.focalnet.modeling_focalnet.FocalNetImageClassifierOutput or tuple(torch.FloatTensor)
A transformers.models.focalnet.modeling_focalnet.FocalNetImageClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (FocalNetConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of
shape (batch_size, hidden_size, height, width).
Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to
include the spatial dimensions.
The FocalNetForImageClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, FocalNetForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("microsoft/focalnet-tiny")
model = FocalNetForImageClassification.from_pretrained("microsoft/focalnet-tiny")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
tabby, tabby cat
Pix2Struct
Overview
The Pix2Struct model was proposed in Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova.
The abstract from the paper is the following:
Visually-situated language is ubiquitous — sources range from textbooks with diagrams to web pages with images and tables, to mobile apps with buttons and forms. Perhaps due to this diversity, previous work has typically relied on domain-specific recipes with limited sharing of the underlying data, model architectures, and objectives. We present Pix2Struct, a pretrained image-to-text model for purely visual language understanding, which can be finetuned on tasks containing visually-situated language. Pix2Struct is pretrained by learning to parse masked screenshots of web pages into simplified HTML. The web, with its richness of visual elements cleanly reflected in the HTML structure, provides a large source of pretraining data well suited to the diversity of downstream tasks. Intuitively, this objective subsumes common pretraining signals such as OCR, language modeling, image captioning. In addition to the novel pretraining strategy, we introduce a variable-resolution input representation and a more flexible integration of language and vision inputs, where language prompts such as questions are rendered directly on top of the input image. For the first time, we show that a single pretrained model can achieve state-of-the-art results in six out of nine tasks across four domains: documents, illustrations, user interfaces, and natural images.
Tips:
Pix2Struct has been fine-tuned on a variety of tasks and datasets, ranging from image captioning and visual question answering (VQA) over different inputs (books, charts, science diagrams) to captioning of UI components. The full list can be found in Table 1 of the paper.
We therefore advise you to use these models for the tasks they have been fine-tuned on. For instance, if you want to use Pix2Struct for UI captioning, you should use the model fine-tuned on the UI dataset. If you want to use Pix2Struct for image captioning, you should use the model fine-tuned on the natural image captioning dataset, and so on.
If you want to use the model to perform conditional text captioning, make sure to use the processor with add_special_tokens=False (see the sketch below).
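As an illustration of the last tip, here is a minimal conditional-captioning sketch. It mirrors the fuller example in the Pix2StructForConditionalGeneration section below, assuming the google/pix2struct-textcaps-base checkpoint (the natural image captioning model) and the stop-sign image used there.
from PIL import Image
import requests
from transformers import AutoProcessor, Pix2StructForConditionalGeneration
processor = AutoProcessor.from_pretrained("google/pix2struct-textcaps-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# add_special_tokens=False keeps the text prompt open-ended so the model continues it
inputs = processor(images=image, text="A picture of", return_tensors="pt", add_special_tokens=False)
generated_ids = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])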
This model was contributed by ybelkada.
The original code can be found here.
Resources
Fine-tuning Notebook
All models
Pix2StructConfig
class transformers.Pix2StructConfig
(
text_config = None
vision_config = None
initializer_factor = 1.0
initializer_range = 0.02
is_vqa = False
tie_word_embeddings = False
is_encoder_decoder = True
**kwargs
)
Parameters
text_config (dict, optional) —
Dictionary of configuration options used to initialize Pix2StructTextConfig.
vision_config (dict, optional) —
Dictionary of configuration options used to initialize Pix2StructVisionConfig.
initializer_factor (float, optional, defaults to 1.0) —
Factor to multiply the initialization range with.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
is_vqa (bool, optional, defaults to False) —
Whether the model has been fine-tuned for VQA or not.
kwargs (optional) —
Dictionary of keyword arguments.
Pix2StructConfig is the configuration class to store the configuration of a
Pix2StructForConditionalGeneration. It is used to instantiate a Pix2Struct model according to the specified
arguments, defining the text model and vision model configs. Instantiating a configuration with the defaults will
yield a similar configuration to that of the Pix2Struct-base
google/pix2struct-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import Pix2StructConfig, Pix2StructForConditionalGeneration
# Initializing a Pix2StructConfig with google/pix2struct-base style configuration
configuration = Pix2StructConfig()
# Initializing a Pix2StructForConditionalGeneration (with random weights) from the google/pix2struct-base style configuration
model = Pix2StructForConditionalGeneration(configuration)
# Accessing the model configuration
configuration = model.config
# We can also initialize a Pix2StructConfig from a Pix2StructTextConfig and a Pix2StructVisionConfig
# Initializing a Pix2Struct text and Pix2Struct vision configuration
config_text = Pix2StructTextConfig()
config_vision = Pix2StructVisionConfig()
config = Pix2StructConfig.from_text_vision_configs(config_text, config_vision)
from_text_vision_configs
(
text_config: Pix2StructTextConfig
vision_config: Pix2StructVisionConfig
**kwargs
)
→
Pix2StructConfig
Returns
Pix2StructConfig
An instance of a configuration object
Instantiate a Pix2StructConfig (or a derived class) from pix2struct text model configuration and pix2struct
vision model configuration.
Pix2StructTextConfig
class transformers.Pix2StructTextConfig
(
vocab_size = 50244
hidden_size = 768
d_kv = 64
d_ff = 2048
num_layers = 12
num_heads = 12
relative_attention_num_buckets = 32
relative_attention_max_distance = 128
dropout_rate = 0.1
layer_norm_epsilon = 1e-06
initializer_factor = 1.0
dense_act_fn = 'gelu_new'
decoder_start_token_id = 0
use_cache = False
pad_token_id = 0
eos_token_id = 1
tie_word_embeddings = False
is_decoder = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 50244) —
Vocabulary size of the Pix2Struct text model. Defines the number of different tokens that can be
represented by the inputs_ids passed when calling Pix2StructTextModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
d_kv (int, optional, defaults to 64) —
Dimensionality of the key, query, value projections in each attention head.
d_ff (int, optional, defaults to 2048) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
num_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
relative_attention_num_buckets (int, optional, defaults to 32) —
The number of buckets to use for each attention layer.
relative_attention_max_distance (int, optional, defaults to 128) —
The maximum distance of the longer sequences for the bucket separation.
dropout_rate (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
layer_norm_epsilon (float, optional, defaults to 1e-6) —
The epsilon used by the layer normalization layers.
initializer_factor (float, optional, defaults to 1.0) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
dense_act_fn (Union[Callable, str], optional, defaults to "gelu_new") —
The non-linear activation function (function or string).
decoder_start_token_id (int, optional, defaults to 0) —
The id of the token used to start decoder generation.
use_cache (bool, optional, defaults to False) —
Whether or not the model should return the last key/values attentions (not used by all models).
pad_token_id (int, optional, defaults to 0) —
The id of the padding token.
eos_token_id (int, optional, defaults to 1) —
The id of the end-of-sequence token.
This is the configuration class to store the configuration of a Pix2StructTextModel. It is used to instantiate
a Pix2Struct text model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Pix2Struct text decoder used by
the google/pix2struct-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import Pix2StructTextConfig, Pix2StructTextModel
# Initializing a Pix2StructTextConfig with google/pix2struct-base style configuration
configuration = Pix2StructTextConfig()
# Initializing a Pix2StructTextModel (with random weights) from the google/pix2struct-base style configuration
model = Pix2StructTextModel(configuration)
# Accessing the model configuration
configuration = model.config
Pix2StructVisionConfig
class transformers.Pix2StructVisionConfig
(
hidden_size = 768
patch_embed_hidden_size = 768
d_ff = 2048
d_kv = 64
num_hidden_layers = 12
num_attention_heads = 12
dense_act_fn = 'gelu_new'
layer_norm_eps = 1e-06
dropout_rate = 0.0
attention_dropout = 0.0
initializer_range = 1e-10
initializer_factor = 1.0
seq_len = 4096
relative_attention_num_buckets = 32
relative_attention_max_distance = 128
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
patch_embed_hidden_size (int, optional, defaults to 768) —
Dimensionality of the input patch_embedding layer in the Transformer encoder.
d_ff (int, optional, defaults to 2048) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
d_kv (int, optional, defaults to 64) —
Dimensionality of the key, query, value projections per attention head.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
dense_act_fn (str or function, optional, defaults to "gelu_new") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" `"gelu" are supported.
layer_norm_eps (float, optional, defaults to 1e-6) —
The epsilon used by the layer normalization layers.
dropout_rate (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 1e-10) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
initializer_factor (float, optional, defaults to 1.0) —
A factor for initializing all weight matrices (should be kept to 1, used internally for initialization
testing).
seq_len (int, optional, defaults to 4096) —
Maximum sequence length (here number of patches) supported by the model.
relative_attention_num_buckets (int, optional, defaults to 32) —
The number of buckets to use for each attention layer.
relative_attention_max_distance (int, optional, defaults to 128) —
The maximum distance (in tokens) to use for each attention layer.
This is the configuration class to store the configuration of a Pix2StructVisionModel. It is used to
instantiate a Pix2Struct vision model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the Pix2Struct-base
google/pix2struct-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import Pix2StructVisionConfig, Pix2StructVisionModel
# Initializing a Pix2StructVisionConfig with google/pix2struct-base style configuration
configuration = Pix2StructVisionConfig()
# Initializing a Pix2StructVisionModel (with random weights) from the google/pix2struct-base style configuration
model = Pix2StructVisionModel(configuration)
# Accessing the model configuration
configuration = model.config
Pix2StructProcessor
class transformers.Pix2StructProcessor
(
image_processor
tokenizer
)
Parameters
image_processor (Pix2StructImageProcessor) —
An instance of Pix2StructImageProcessor. The image processor is a required input.
tokenizer (Union[T5TokenizerFast, T5Tokenizer]) —
An instance of T5TokenizerFast or T5Tokenizer. The tokenizer is a required input.
Constructs a Pix2Struct processor which wraps a T5 tokenizer and a Pix2Struct image processor into a single
processor.
Pix2StructProcessor offers all the functionalities of Pix2StructImageProcessor and T5TokenizerFast. See
the docstring of __call__() and decode() for more information.
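For a quick sense of how the processor is typically used, here is a minimal sketch, reusing the google/pix2struct-textcaps-base checkpoint and the stop-sign image from the examples further down this page; the printed keys are indicative.
import requests
from PIL import Image
from transformers import Pix2StructProcessor
processor = Pix2StructProcessor.from_pretrained("google/pix2struct-textcaps-base")
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# The image processor turns the image into flattened patches; optional text goes through the T5 tokenizer
inputs = processor(images=image, return_tensors="pt")
print(inputs.keys())  # expected: flattened_patches, attention_mask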
batch_decode
(
*args
**kwargs
)
This method forwards all its arguments to T5TokenizerFast’s batch_decode().
Please refer to the docstring of this method for more information.
decode
(
*args
**kwargs
)
This method forwards all its arguments to T5TokenizerFast’s decode(). Please
refer to the docstring of this method for more information.
Pix2StructImageProcessor
class transformers.Pix2StructImageProcessor
(
do_convert_rgb: bool = True
do_normalize: bool = True
patch_size: typing.Dict[str, int] = None
max_patches: int = 2048
is_vqa: bool = False
**kwargs
)
Parameters
do_convert_rgb (bool, optional, defaults to True) —
Whether to convert the image to RGB.
do_normalize (bool, optional, defaults to True) —
Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess
method. According to Pix2Struct paper and code, the image is normalized with its own mean and standard
deviation.
patch_size (Dict[str, int], optional, defaults to {"height": 16, "width": 16}) —
The patch size to use for the image. According to Pix2Struct paper and code, the patch size is 16x16.
max_patches (int, optional, defaults to 2048) —
The maximum number of patches to extract from the image as per the Pix2Struct
paper.
is_vqa (bool, optional, defaults to False) —
Whether or not the image processor is for the VQA task. If True and header_text is passed in, text is
rendered onto the input images.
Constructs a Pix2Struct image processor.
preprocess
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
header_text: typing.Optional[str] = None
do_convert_rgb: bool = None
do_normalize: typing.Optional[bool] = None
max_patches: typing.Optional[int] = None
patch_size: typing.Union[typing.Dict[str, int], NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image to preprocess.
header_text (Union[List[str], str], optional) —
Text to render as a header. Only has an effect if image_processor.is_vqa is True.
do_convert_rgb (bool, optional, defaults to self.do_convert_rgb) —
Whether to convert the image to RGB.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
max_patches (int, optional, defaults to self.max_patches) —
Maximum number of patches to extract.
patch_size (dict, optional, defaults to self.patch_size) —
Dictionary containing the patch height and width.
return_tensors (str or TensorType, optional) —
The type of tensors to return. Can be one of:
Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
Preprocess an image or batch of images. The processor first computes the maximum possible number of
aspect-ratio preserving patches of size patch_size that can be extracted from the image. It then pads the
image with zeros to make the image respect the constraint of max_patches. Before extracting the patches the
images are standardized following the tensorflow implementation of per_image_standardization
(https://www.tensorflow.org/api_docs/python/tf/image/per_image_standardization).
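A brief sketch of this preprocessing step follows; the max_patches value is arbitrary, and the shape comment assumes the default 16x16 patches, i.e. 2 position values plus 3*16*16 = 768 pixel values per patch row.
import requests
from PIL import Image
from transformers import Pix2StructImageProcessor
image_processor = Pix2StructImageProcessor(max_patches=1024)
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# Extract aspect-ratio preserving 16x16 patches and zero-pad up to max_patches
encoding = image_processor.preprocess(images=image, return_tensors="pt")
print(encoding.flattened_patches.shape)  # expected: torch.Size([1, 1024, 770])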
Pix2StructTextModel
class transformers.Pix2StructTextModel
(
config
)
Parameters
config (Union[Pix2StructConfig, Pix2StructTextConfig]) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The standalone text decoder of Pix2Struct.
The Pix2Struct model was proposed in Pix2Struct: Screenshot Parsing as Pretraining for Visual Language
Understanding by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu,
Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova. It’s an encoder-decoder
transformer pre-trained in an image-to-text setting.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids = None
attention_mask = None
encoder_hidden_states = None
encoder_attention_mask = None
inputs_embeds = None
head_mask = None
cross_attn_head_mask = None
past_key_values = None
use_cache = None
output_attentions = None
output_hidden_states = None
labels = None
return_dict = None
**kwargs
)
→
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary. Pix2StructText is a model with relative position
embeddings so you should be able to pad the inputs on both the right and the left.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
To know more on how to prepare input_ids for pretraining take a look at Pix2StructText
Training.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
Pix2StructText uses the pad_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
To know more on how to prepare decoder_input_ids for pretraining take a look at Pix2StructText
Training.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
[0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention layers. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Pix2StructConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Cross attentions weights after the attention softmax, used to compute the weighted average in the
cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
The Pix2StructTextModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, Pix2StructTextModel
processor = AutoProcessor.from_pretrained("google/pix2struct-textcaps-base")
model = Pix2StructTextModel.from_pretrained("google/pix2struct-textcaps-base")
inputs = processor(text="Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs.input_ids)
loss = outputs.loss
Pix2StructVisionModel
class transformers.Pix2StructVisionModel
(
config: Pix2StructConfig
)
Parameters
config (Pix2StructConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Pix2StructVision Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
flattened_patches: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
flattened_patches (torch.FloatTensor of shape (batch_size, sequence_length, num_channels x patch_height x patch_width)) —
Flattened and padded pixel values. These values can be obtained using AutoImageProcessor. See
Pix2StructImageProcessor.__call__ for details. Check the original
paper (figure 5) for more details.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]:
1 for pixel values that are not masked,
0 for pixel values that are masked.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Pix2StructConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The Pix2StructVisionModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, Pix2StructVisionModel
image_processor = AutoProcessor.from_pretrained("google/pix2struct-textcaps-base")
model = Pix2StructVisionModel.from_pretrained("google/pix2struct-textcaps-base")
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 2048, 768]
Pix2StructForConditionalGeneration
class transformers.Pix2StructForConditionalGeneration
(
config: Pix2StructConfig
)
Parameters
config (Union[Pix2StructConfig, Pix2StructTextConfig]) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
A conditional generation model with a language modeling head. Can be used for sequence generation tasks.
The Pix2Struct model was proposed in Pix2Struct: Screenshot Parsing as Pretraining for Visual Language
Understanding by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu,
Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova. It’s an encoder-decoder
transformer pre-trained in an image-to-text setting.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
flattened_patches: typing.Optional[torch.FloatTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
decoder_input_ids: typing.Optional[torch.LongTensor] = None
decoder_attention_mask: typing.Optional[torch.BoolTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
decoder_head_mask: typing.Optional[torch.FloatTensor] = None
cross_attn_head_mask: typing.Optional[torch.Tensor] = None
encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
labels: typing.Optional[torch.LongTensor] = None
decoder_inputs_embeds: typing.Optional[torch.Tensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
flattened_patches (torch.FloatTensor of shape (batch_size, seq_length, hidden_size)) —
Flattened pixel patches. The hidden_size is obtained by the following formula: hidden_size =
num_channels * patch_size * patch_size.
The process of flattening the pixel patches is done by Pix2StructProcessor.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are decoder input IDs?
Pix2StructText uses the pad_token_id as the starting token for decoder_input_ids generation. If
past_key_values is used, optionally only the last decoder_input_ids have to be input (see
past_key_values).
To know more on how to prepare decoder_input_ids for pretraining take a look at Pix2StructText
Training.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) —
Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also
be used by default.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
decoder_head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
cross_attn_head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in
[0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
encoder_outputs (tuple(tuple(torch.FloatTensor)), optional) —
Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions)
last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at
the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) —
Contains precomputed key and value hidden states of the attention layers. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that
don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all
decoder_input_ids of shape (batch_size, sequence_length).
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) —
Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded
representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be
input (see past_key_values). This is useful if you want more control over how to convert
decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value
of inputs_embeds.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss for the decoder.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (Pix2StructConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the
self-attention heads.
The Pix2StructForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
Inference:
from PIL import Image
import requests
from transformers import AutoProcessor, Pix2StructForConditionalGeneration
processor = AutoProcessor.from_pretrained("google/pix2struct-textcaps-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base")
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
# autoregressive generation
generated_ids = model.generate(**inputs, max_new_tokens=50)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
A stop sign is on a street corner.
# conditional generation
text = "A picture of"
inputs = processor(text=text, images=image, return_tensors="pt", add_special_tokens=False)
generated_ids = model.generate(**inputs, max_new_tokens=50)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
A picture of a stop sign with a red stop sign
Training:
from PIL import Image
import requests
from transformers import AutoProcessor, Pix2StructForConditionalGeneration
processor = AutoProcessor.from_pretrained("google/pix2struct-base")
model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-base")
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "A stop sign is on the street corner."
inputs = processor(images=image, return_tensors="pt")
labels = processor(text=text, return_tensors="pt").input_ids
# forward pass
outputs = model(**inputs, labels=labels)
loss = outputs.loss
print(f"{loss.item():.5f}")
5.94282
MarkupLM
Overview
The MarkupLM model was proposed in MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document
Understanding by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei. MarkupLM is BERT, but
applied to HTML pages instead of raw text documents. The model incorporates additional embedding layers to improve
performance, similar to LayoutLM.
The model can be used for tasks like question answering on web pages or information extraction from web pages. It obtains
state-of-the-art results on 2 important benchmarks:
WebSRC, a dataset for Web-Based Structural Reading Comprehension (a bit like SQuAD but for web pages)
SWDE, a dataset
for information extraction from web pages (basically named-entity recognition on web pages)
The abstract from the paper is the following:
Multimodal pre-training with text, layout, and image has made significant progress for Visually-rich Document
Understanding (VrDU), especially the fixed-layout documents such as scanned document images. While, there are still a
large number of digital documents where the layout information is not fixed and needs to be interactively and
dynamically rendered for visualization, making existing layout-based pre-training approaches not easy to apply. In this
paper, we propose MarkupLM for document understanding tasks with markup languages as the backbone such as
HTML/XML-based documents, where text and markup information is jointly pre-trained. Experiment results show that the
pre-trained MarkupLM significantly outperforms the existing strong baseline models on several document understanding
tasks. The pre-trained model and code will be publicly available.
Tips:
In addition to input_ids, forward() expects 2 additional inputs, namely xpath_tags_seq and xpath_subs_seq.
These are the XPATH tags and subscripts respectively for each token in the input sequence.
One can use MarkupLMProcessor to prepare all data for the model. Refer to the usage guide for more info.
Demo notebooks can be found here.
MarkupLM architecture. Taken from the original paper.
This model was contributed by nielsr. The original code can be found here.
Usage: MarkupLMProcessor
The easiest way to prepare data for the model is to use MarkupLMProcessor, which internally combines a feature extractor
(MarkupLMFeatureExtractor) and a tokenizer (MarkupLMTokenizer or MarkupLMTokenizerFast). The feature extractor is
used to extract all nodes and xpaths from the HTML strings, which are then provided to the tokenizer, which turns them into the
token-level inputs of the model (input_ids etc.). Note that you can still use the feature extractor and tokenizer separately,
if you only want to handle one of the two tasks.
from transformers import MarkupLMFeatureExtractor, MarkupLMTokenizerFast, MarkupLMProcessor
feature_extractor = MarkupLMFeatureExtractor()
tokenizer = MarkupLMTokenizerFast.from_pretrained("microsoft/markuplm-base")
processor = MarkupLMProcessor(feature_extractor, tokenizer)
In short, one can provide HTML strings (and possibly additional data) to MarkupLMProcessor,
and it will create the inputs expected by the model. Internally, the processor first uses
MarkupLMFeatureExtractor to get a list of nodes and corresponding xpaths. The nodes and
xpaths are then provided to MarkupLMTokenizer or MarkupLMTokenizerFast, which converts them
to token-level input_ids, attention_mask, token_type_ids, xpath_subs_seq, xpath_tags_seq.
Optionally, one can provide node labels to the processor, which are turned into token-level labels.
MarkupLMFeatureExtractor uses Beautiful Soup, a Python library for
pulling data out of HTML and XML files, under the hood. Note that you can still use your own parsing solution of
choice, and provide the nodes and xpaths yourself to MarkupLMTokenizer or MarkupLMTokenizerFast.
In total, there are 5 use cases that are supported by the processor. Below, we list them all. Note that each of these
use cases works for both batched and non-batched inputs (we illustrate them for non-batched inputs).
Use case 1: web page classification (training, inference) + token classification (inference), parse_html = True
This is the simplest case, in which the processor will use the feature extractor to get all nodes and xpaths from the HTML.
from transformers import MarkupLMProcessor
processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
html_string = """
... <!DOCTYPE html>
... <html>
... <head>
... <title>Hello world</title>
... </head>
... <body>
... <h1>Welcome</h1>
... <p>Here is my website.</p>
... </body>
... </html>"""
# note that you can also provide all tokenizer parameters here, such as padding and truncation
encoding = processor(html_string, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
Use case 2: web page classification (training, inference) + token classification (inference), parse_html=False
In case one already has obtained all nodes and xpaths, one doesn’t need the feature extractor. In that case, one should
provide the nodes and corresponding xpaths themselves to the processor, and make sure to set parse_html to False.
from transformers import MarkupLMProcessor
processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
processor.parse_html = False
nodes = ["hello", "world", "how", "are"]
xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span", "html/body", "html/body/div"]
encoding = processor(nodes=nodes, xpaths=xpaths, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
Use case 3: token classification (training), parse_html=False
For token classification tasks (such as SWDE), one can also provide the
corresponding node labels in order to train a model. The processor will then convert these into token-level labels.
By default, it will only label the first wordpiece of a word, and label the remaining wordpieces with -100, which is the
ignore_index of PyTorch’s CrossEntropyLoss. In case you want all wordpieces of a word to be labeled, you can
initialize the tokenizer with only_label_first_subword set to False.
from transformers import MarkupLMProcessor
processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
processor.parse_html = False
nodes = ["hello", "world", "how", "are"]
xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span", "html/body", "html/body/div"]
node_labels = [1, 2, 2, 1]
encoding = processor(nodes=nodes, xpaths=xpaths, node_labels=node_labels, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq', 'labels'])
Use case 4: web page question answering (inference), parse_html=True
For question answering tasks on web pages, you can provide a question to the processor. By default, the
processor will use the feature extractor to get all nodes and xpaths, and create [CLS] question tokens [SEP] word tokens [SEP].
from transformers import MarkupLMProcessor
processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
html_string = """
... <!DOCTYPE html>
... <html>
... <head>
... <title>Hello world</title>
... </head>
... <body>
... <h1>Welcome</h1>
... <p>My name is Niels.</p>
... </body>
... </html>"""
question = "What's his name?"
encoding = processor(html_string, questions=question, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
Use case 5: web page question answering (inference), parse_html=False
For question answering tasks (such as WebSRC), you can provide a question to the processor. If you have extracted
all nodes and xpaths yourself, you can provide them directly to the processor. Make sure to set parse_html to False.
from transformers import MarkupLMProcessor
processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
processor.parse_html = False
nodes = ["hello", "world", "how", "are"]
xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span", "html/body", "html/body/div"]
question = "What's his name?"
encoding = processor(nodes=nodes, xpaths=xpaths, questions=question, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
MarkupLMConfig
class transformers.MarkupLMConfig
(
vocab_size = 30522
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 512
type_vocab_size = 2
initializer_range = 0.02
layer_norm_eps = 1e-12
pad_token_id = 0
bos_token_id = 0
eos_token_id = 2
max_xpath_tag_unit_embeddings = 256
max_xpath_subs_unit_embeddings = 1024
tag_pad_id = 216
subs_pad_id = 1001
xpath_unit_hidden_size = 32
max_depth = 50
position_embedding_type = 'absolute'
use_cache = True
classifier_dropout = None
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30522) —
Vocabulary size of the MarkupLM model. Defines the number of different tokens that can be represented by the
inputs_ids passed to the forward method of MarkupLMModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "silu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed into MarkupLMModel.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
max_tree_id_unit_embeddings (int, optional, defaults to 1024) —
The maximum value that the tree id unit embedding might ever use. Typically set this to something large
just in case (e.g., 1024).
max_xpath_tag_unit_embeddings (int, optional, defaults to 256) —
The maximum value that the xpath tag unit embedding might ever use. Typically set this to something large
just in case (e.g., 256).
max_xpath_subs_unit_embeddings (int, optional, defaults to 1024) —
The maximum value that the xpath subscript unit embedding might ever use. Typically set this to something
large just in case (e.g., 1024).
tag_pad_id (int, optional, defaults to 216) —
The id of the padding token in the xpath tags.
subs_pad_id (int, optional, defaults to 1001) —
The id of the padding token in the xpath subscripts.
xpath_unit_hidden_size (int, optional, defaults to 32) —
The hidden size of each tree id unit. One complete tree index will have
(50*xpath_unit_hidden_size)-dim.
max_depth (int, optional, defaults to 50) —
The maximum depth in xpath.
This is the configuration class to store the configuration of a MarkupLMModel. It is used to instantiate a
MarkupLM model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the MarkupLM
microsoft/markuplm-base architecture.
Configuration objects inherit from BertConfig and can be used to control the model outputs. Read the
documentation from BertConfig for more information.
Examples:
from transformers import MarkupLMModel, MarkupLMConfig
# Initializing a MarkupLM microsoft/markuplm-base style configuration
configuration = MarkupLMConfig()
# Initializing a model from the microsoft/markuplm-base style configuration
model = MarkupLMModel(configuration)
# Accessing the model configuration
configuration = model.config
MarkupLMFeatureExtractor
class transformers.MarkupLMFeatureExtractor
(
**kwargs
)
Constructs a MarkupLM feature extractor. This can be used to get a list of nodes and corresponding xpaths from HTML
strings.
This feature extractor inherits from PreTrainedFeatureExtractor() which contains most
of the main methods. Users should refer to this superclass for more information regarding those methods.
__call__
(
html_strings
)
→
BatchFeature
Parameters
html_strings (str, List[str]) —
The HTML string or batch of HTML strings from which to extract nodes and corresponding xpaths.
Returns
BatchFeature
A BatchFeature with the following fields:
nodes — Nodes.
xpaths — Corresponding xpaths.
Main method to prepare one or several HTML strings for the model.
Examples:
from transformers import MarkupLMFeatureExtractor
page_name_1 = "page1.html"
page_name_2 = "page2.html"
page_name_3 = "page3.html"
with open(page_name_1) as f:
... single_html_string = f.read()
feature_extractor = MarkupLMFeatureExtractor()
# single example
encoding = feature_extractor(single_html_string)
print(encoding.keys())
# dict_keys(['nodes', 'xpaths'])
# batched example
multi_html_strings = []
with open(page_name_2) as f:
... multi_html_strings.append(f.read())
with open(page_name_3) as f:
... multi_html_strings.append(f.read())
encoding = feature_extractor(multi_html_strings)
print(encoding.keys())
# dict_keys(['nodes', 'xpaths'])
MarkupLMTokenizer
class transformers.MarkupLMTokenizer
(
vocab_file
merges_file
tags_dict
errors = 'replace'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
add_prefix_space = False
max_depth = 50
max_width = 1000
pad_width = 1001
pad_token_label = -100
only_label_first_subword = True
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The RoBERTa tokenizer detects the beginning of words by the preceding space.)
Construct a MarkupLM tokenizer. Based on byte-level Byte-Pair-Encoding (BPE). MarkupLMTokenizer can be used to
turn HTML strings into token-level input_ids, attention_mask, token_type_ids, xpath_tags_seq and
xpath_subs_seq. This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods.
Users should refer to this superclass for more information regarding those methods.
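As a minimal sketch of direct tokenizer usage (assuming the microsoft/markuplm-base checkpoint ships the required vocabulary, merges and tags_dict files), one can pass pre-extracted nodes together with their xpaths:
from transformers import MarkupLMTokenizer

tokenizer = MarkupLMTokenizer.from_pretrained("microsoft/markuplm-base")
nodes = ["hello", "world"]
xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span"]
# nodes are treated as the words of a single example; xpaths provide one xpath per node
encoding = tokenizer(nodes, xpaths=xpaths, return_tensors="pt")
print(sorted(encoding.keys()))
# expected: ['attention_mask', 'input_ids', 'token_type_ids', 'xpath_subs_seq', 'xpath_tags_seq']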
build_inputs_with_special_tokens
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A RoBERTa sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
get_special_tokens_mask
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
already_has_special_tokens (bool, optional, defaults to False) —
Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer prepare_for_model method.
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. RoBERTa does not
make use of token type ids, therefore a list of zeros is returned.
save_vocabulary
<
source
>
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
MarkupLMTokenizerFast
class transformers.MarkupLMTokenizerFast
<
source
>
(
vocab_file
merges_file
tags_dict
tokenizer_file = None
errors = 'replace'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
add_prefix_space = False
max_depth = 50
max_width = 1000
pad_width = 1001
pad_token_label = -100
only_label_first_subword = True
trim_offsets = False
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The RoBERTa tokenizer detects the beginning of words by the preceding space.)
Construct a MarkupLM tokenizer. Based on byte-level Byte-Pair-Encoding (BPE).
MarkupLMTokenizerFast can be used to turn HTML strings into token-level input_ids, attention_mask,
token_type_ids, xpath_tags_seq and xpath_subs_seq. This tokenizer inherits from PreTrainedTokenizerFast which
contains most of the main methods.
Users should refer to this superclass for more information regarding those methods.
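A minimal sketch of batched usage with the fast tokenizer (under the same assumption about the microsoft/markuplm-base checkpoint as above); the xpath tensors gain a third dimension of size max_depth:
from transformers import MarkupLMTokenizerFast

tokenizer = MarkupLMTokenizerFast.from_pretrained("microsoft/markuplm-base")
nodes = [["hello", "world"], ["welcome", "to", "MarkupLM"]]
xpaths = [
    ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span"],
    ["/html/body/div", "/html/body/div", "/html/body/div"],
]
# each inner list is one example: its nodes and the matching xpaths
encoding = tokenizer(nodes, xpaths=xpaths, padding="max_length", truncation=True, max_length=32, return_tensors="pt")
print(encoding["input_ids"].shape)
# expected: torch.Size([2, 32])
print(encoding["xpath_tags_seq"].shape)
# expected: torch.Size([2, 32, 50]), i.e. padded up to max_depth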
batch_encode_plus
<
source
>
(
batch_text_or_text_pairs: typing.Union[typing.List[str], typing.List[typing.Tuple[str, str]], typing.List[typing.List[str]]]
is_pair: bool = None
xpaths: typing.Optional[typing.List[typing.List[typing.List[int]]]] = None
node_labels: typing.Union[typing.List[int], typing.List[typing.List[int]], NoneType] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
stride: int = 0
pad_to_multiple_of: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
**kwargs
)
add_special_tokens (bool, optional, defaults to True):
Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False):
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False):
Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or
to the maximum acceptable input length for the model if that argument is not provided. This will
truncate token by token, removing a token from the longest sequence in the pair if a pair of
sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional):
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length
is required by one of the truncation/padding parameters. If the model has no specific maximum input
length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0):
If set to a number along with max_length, the overflowing tokens returned when
return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence
returned to provide some overlap between truncated and overflowing sequences. The value of this
argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False):
Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the
tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace)
which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional):
If set will pad the sequence to a multiple of the provided value. Requires padding to be activated.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
>= 7.5 (Volta).
return_tensors (str or TensorType, optional):
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
build_inputs_with_special_tokens
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
adding special tokens. A RoBERTa sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
create_token_type_ids_from_sequences
<
source
>
(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
)
→
List[int]
Parameters
token_ids_0 (List[int]) —
List of IDs.
token_ids_1 (List[int], optional) —
Optional second list of IDs for sequence pairs.
Returns
List[int]
List of zeros.
Create a mask from the two sequences passed to be used in a sequence-pair classification task. RoBERTa does not
make use of token type ids, therefore a list of zeros is returned.
encode_plus
<
source
>
(
text: typing.Union[str, typing.List[str]]
text_pair: typing.Optional[typing.List[str]] = None
xpaths: typing.Optional[typing.List[typing.List[int]]] = None
node_labels: typing.Optional[typing.List[int]] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
stride: int = 0
pad_to_multiple_of: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
**kwargs
)
Parameters
text (str, List[str], List[List[str]]) —
The first sequence to be encoded. This can be a string, a list of strings or a list of list of strings.
text_pair (List[str] or List[int], optional) —
Optional second sequence to be encoded. This can be a list of strings (words of a single example) or a
list of list of strings (words of a batch of examples).
add_special_tokens (bool, optional, defaults to True) —
Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) —
Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or
to the maximum acceptable input length for the model if that argument is not provided. This will
truncate token by token, removing a token from the longest sequence in the pair if a pair of
sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional) —
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length
is required by one of the truncation/padding parameters. If the model has no specific maximum input
length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) —
If set to a number along with max_length, the overflowing tokens returned when
return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence
returned to provide some overlap between truncated and overflowing sequences. The value of this
argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) —
Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the
tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace)
which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value. Requires padding to be activated.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
>= 7.5 (Volta).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
Tokenize and prepare for the model a sequence or a pair of sequences.
Warning: this method is deprecated; __call__ should be used instead.
get_xpath_seq
<
source
>
(
xpath
)
Given the xpath expression of one particular node (like “/html/body/div/li[1]/div/span[2]”), return a list of
tag IDs and corresponding subscripts, taking into account max depth.
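A minimal sketch, assuming get_xpath_seq returns the tag-id list and the subscript list as a pair, each padded to max_depth:
from transformers import MarkupLMTokenizerFast

tokenizer = MarkupLMTokenizerFast.from_pretrained("microsoft/markuplm-base")
# decompose a single xpath into per-depth tag ids and subscripts
xpath_tags, xpath_subs = tokenizer.get_xpath_seq("/html/body/div/li[1]/div/span[2]")
# both lists are padded (or truncated) to max_depth, which defaults to 50
print(len(xpath_tags), len(xpath_subs))
# expected: 50 50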
MarkupLMProcessor
class transformers.MarkupLMProcessor
<
source
>
(
*args
**kwargs
)
Parameters
feature_extractor (MarkupLMFeatureExtractor) —
An instance of MarkupLMFeatureExtractor. The feature extractor is a required input.
tokenizer (MarkupLMTokenizer or MarkupLMTokenizerFast) —
An instance of MarkupLMTokenizer or MarkupLMTokenizerFast. The tokenizer is a required input.
parse_html (bool, optional, defaults to True) —
Whether or not to use MarkupLMFeatureExtractor to parse HTML strings into nodes and corresponding xpaths.
Constructs a MarkupLM processor which combines a MarkupLM feature extractor and a MarkupLM tokenizer into a single
processor.
MarkupLMProcessor offers all the functionalities you need to prepare data for the model.
It first uses MarkupLMFeatureExtractor to extract nodes and corresponding xpaths from one or more HTML strings.
Next, these are provided to MarkupLMTokenizer or MarkupLMTokenizerFast, which turns them into token-level
input_ids, attention_mask, token_type_ids, xpath_tags_seq and xpath_subs_seq.
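A minimal end-to-end sketch with the default parse_html=True, mirroring the model examples further below:
from transformers import MarkupLMProcessor

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
html_string = "<html> <head> <title>Page Title</title> </head> </html>"
# the feature extractor parses the HTML into nodes/xpaths, the tokenizer then encodes them
encoding = processor(html_string, return_tensors="pt")
print(encoding.keys())
# expected: dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])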
__call__
<
source
>
(
html_strings = None
nodes = None
xpaths = None
node_labels = None
questions = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
stride: int = 0
pad_to_multiple_of: typing.Optional[int] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
**kwargs
)
This method first forwards the html_strings argument to MarkupLMFeatureExtractor.__call__(). Next, it
passes the nodes and xpaths along with the additional arguments to the tokenizer's __call__() and
returns the output.
Optionally, one can also provide a text argument which is passed along as the first sequence.
Please refer to the docstrings of the above two methods for more information.
MarkupLMModel
class transformers.MarkupLMModel
<
source
>
(
config
add_pooling_layer = True
)
Parameters
config (MarkupLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare MarkupLM Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.LongTensor] = None
xpath_tags_seq: typing.Optional[torch.LongTensor] = None
xpath_subs_seq: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
xpath_tags_seq (torch.LongTensor of shape (batch_size, sequence_length, config.max_depth), optional) —
Tag IDs for each token in the input sequence, padded up to config.max_depth.
xpath_subs_seq (torch.LongTensor of shape (batch_size, sequence_length, config.max_depth), optional) —
Subscript IDs for each token in the input sequence, padded up to config.max_depth.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for
tokens that are NOT MASKED, 0 for MASKED tokens.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1
indicates the head is not masked, 0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
If set to True, the attentions tensors of all attention layers are returned. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
If set to True, the hidden states of all layers are returned. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
If set to True, the model will return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MarkupLMConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
The MarkupLMModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from transformers import AutoProcessor, MarkupLMModel
processor = AutoProcessor.from_pretrained("microsoft/markuplm-base")
model = MarkupLMModel.from_pretrained("microsoft/markuplm-base")
html_string = "<html> <head> <title>Page Title</title> </head> </html>"
encoding = processor(html_string, return_tensors="pt")
outputs = model(**encoding)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 4, 768]
MarkupLMForSequenceClassification
class transformers.MarkupLMForSequenceClassification
<
source
>
(
config
)
Parameters
config (MarkupLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MarkupLM Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
xpath_tags_seq: typing.Optional[torch.Tensor] = None
xpath_subs_seq: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
xpath_tags_seq (torch.LongTensor of shape (batch_size, sequence_length, config.max_depth), optional) —
Tag IDs for each token in the input sequence, padded up to config.max_depth.
xpath_subs_seq (torch.LongTensor of shape (batch_size, sequence_length, config.max_depth), optional) —
Subscript IDs for each token in the input sequence, padded up to config.max_depth.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for
tokens that are NOT MASKED, 0 for MASKED tokens.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1
indicates the head is not masked, 0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
If set to True, the attentions tensors of all attention layers are returned. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
If set to True, the hidden states of all layers are returned. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
If set to True, the model will return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MarkupLMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MarkupLMForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from transformers import AutoProcessor, AutoModelForSequenceClassification
import torch
processor = AutoProcessor.from_pretrained("microsoft/markuplm-base")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/markuplm-base", num_labels=7)
html_string = "<html> <head> <title>Page Title</title> </head> </html>"
encoding = processor(html_string, return_tensors="pt")
with torch.no_grad():
... outputs = model(**encoding)
loss = outputs.loss
logits = outputs.logits
MarkupLMForTokenClassification
class transformers.MarkupLMForTokenClassification
<
source
>
(
config
)
Parameters
config (MarkupLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MarkupLM Model with a token classification head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
xpath_tags_seq: typing.Optional[torch.Tensor] = None
xpath_subs_seq: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
labels: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
xpath_tags_seq (torch.LongTensor of shape (batch_size, sequence_length, config.max_depth), optional) —
Tag IDs for each token in the input sequence, padded up to config.max_depth.
xpath_subs_seq (torch.LongTensor of shape (batch_size, sequence_length, config.max_depth), optional) —
Subscript IDs for each token in the input sequence, padded up to config.max_depth.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for
tokens that are NOT MASKED, 0 for MASKED tokens.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1
indicates the head is not masked, 0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
If set to True, the attentions tensors of all attention layers are returned. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
If set to True, the hidden states of all layers are returned. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
If set to True, the model will return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MarkupLMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MarkupLMForTokenClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from transformers import AutoProcessor, AutoModelForTokenClassification
import torch
processor = AutoProcessor.from_pretrained("microsoft/markuplm-base")
processor.parse_html = False
model = AutoModelForTokenClassification.from_pretrained("microsoft/markuplm-base", num_labels=7)
nodes = ["hello", "world"]
xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span"]
node_labels = [1, 2]
encoding = processor(nodes=nodes, xpaths=xpaths, node_labels=node_labels, return_tensors="pt")
with torch.no_grad():
... outputs = model(**encoding)
loss = outputs.loss
logits = outputs.logits
MarkupLMForQuestionAnswering
class transformers.MarkupLMForQuestionAnswering
<
source
>
(
config
)
Parameters
config (MarkupLMConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
MarkupLM Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
input_ids: typing.Optional[torch.Tensor] = None
xpath_tags_seq: typing.Optional[torch.Tensor] = None
xpath_subs_seq: typing.Optional[torch.Tensor] = None
attention_mask: typing.Optional[torch.Tensor] = None
token_type_ids: typing.Optional[torch.Tensor] = None
position_ids: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
inputs_embeds: typing.Optional[torch.Tensor] = None
start_positions: typing.Optional[torch.Tensor] = None
end_positions: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
xpath_tags_seq (torch.LongTensor of shape (batch_size, sequence_length, config.max_depth), optional) —
Tag IDs for each token in the input sequence, padded up to config.max_depth.
xpath_subs_seq (torch.LongTensor of shape (batch_size, sequence_length, config.max_depth), optional) —
Subscript IDs for each token in the input sequence, padded up to config.max_depth.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for
tokens that are NOT MASKED, 0 for MASKED tokens.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1
indicates the head is not masked, 0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
If set to True, the attentions tensors of all attention layers are returned. See attentions under
returned tensors for more detail.
output_hidden_states (bool, optional) —
If set to True, the hidden states of all layers are returned. See hidden_states under returned tensors
for more detail.
return_dict (bool, optional) —
If set to True, the model will return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (MarkupLMConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The MarkupLMForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Copied
from transformers import AutoProcessor, MarkupLMForQuestionAnswering
import torch
processor = AutoProcessor.from_pretrained("microsoft/markuplm-base-finetuned-websrc")
model = MarkupLMForQuestionAnswering.from_pretrained("microsoft/markuplm-base-finetuned-websrc")
html_string = "<html> <head> <title>My name is Niels</title> </head> </html>"
question = "What's his name?"
encoding = processor(html_string, questions=question, return_tensors="pt")
with torch.no_grad():
... outputs = model(**encoding)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = encoding.input_ids[0, answer_start_index : answer_end_index + 1]
processor.decode(predict_answer_tokens).strip()
'Niels'
TAPEX
This model is in maintenance mode only, so we won’t accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0.
Overview
The TAPEX model was proposed in TAPEX: Table Pre-training via Learning a Neural SQL Executor by Qian Liu,
Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. TAPEX pre-trains a BART model to solve synthetic SQL queries, after
which it can be fine-tuned to answer natural language questions related to tabular data, as well as perform table fact checking.
TAPEX has been fine-tuned on several datasets:
SQA (Sequential Question Answering by Microsoft)
WTQ (Wiki Table Questions by Stanford University)
WikiSQL (by Salesforce)
TabFact (by UCSB NLP Lab).
The abstract from the paper is the following:
Recent progress in language model pre-training has achieved a great success via leveraging large-scale unstructured textual data. However, it is
still a challenge to apply pre-training on structured tabular data due to the absence of large-scale high-quality tabular data. In this paper, we
propose TAPEX to show that table pre-training can be achieved by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically
synthesizing executable SQL queries and their execution outputs. TAPEX addresses the data scarcity challenge via guiding the language model to mimic a SQL
executor on the diverse, large-scale and high-quality synthetic corpus. We evaluate TAPEX on four benchmark datasets. Experimental results demonstrate that
TAPEX outperforms previous table pre-training approaches by a large margin and achieves new state-of-the-art results on all of them. This includes improvements
on the weakly-supervised WikiSQL denotation accuracy to 89.5% (+2.3%), the WikiTableQuestions denotation accuracy to 57.5% (+4.8%), the SQA denotation accuracy
to 74.5% (+3.5%), and the TabFact accuracy to 84.2% (+3.2%). To our knowledge, this is the first work to exploit table pre-training via synthetic executable programs
and to achieve new state-of-the-art results on various downstream tasks.
Tips:
TAPEX is a generative (seq2seq) model. One can directly plug in the weights of TAPEX into a BART model.
TAPEX has checkpoints on the hub that are either pre-trained only, or fine-tuned on WTQ, SQA, WikiSQL and TabFact.
Sentences + tables are presented to the model as sentence + " " + linearized table. The linearized table has the following format:
col: col1 | col2 | col3 row 1 : val1 | val2 | val3 row 2 : ...
(a hand-written sketch of this linearization follows these tips)
TAPEX has its own tokenizer, which makes it easy to prepare all data for the model. One can pass Pandas DataFrames and strings to the tokenizer,
and it will automatically create the input_ids and attention_mask (as shown in the usage examples below).
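As a hand-written sketch of the linearization format described above (for illustration only; TapexTokenizer performs this step internally and its exact output may differ):
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio"], "Number of movies": ["87", "53"]}

# build "col: ... row i : ..." by hand, following the format shown in the tips
header = "col: " + " | ".join(data.keys())
num_rows = len(next(iter(data.values())))
rows = ["row " + str(i + 1) + " : " + " | ".join(str(data[col][i]) for col in data) for i in range(num_rows)]
question = "how many movies does Leonardo Di Caprio have?"
model_input = question + " " + header + " " + " ".join(rows)
print(model_input)
# how many movies does Leonardo Di Caprio have? col: Actors | Number of movies row 1 : Brad Pitt | 87 row 2 : Leonardo Di Caprio | 53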
Usage: inference
Below, we illustrate how to use TAPEX for table question answering. As mentioned above, one can directly plug the weights of TAPEX into a BART model.
We use the Auto API, which will automatically instantiate the appropriate tokenizer (TapexTokenizer) and model (BartForConditionalGeneration) for us,
based on the configuration file of the checkpoint on the hub.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("microsoft/tapex-large-finetuned-wtq")
model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/tapex-large-finetuned-wtq")
# prepare table + question
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
question = "how many movies does Leonardo Di Caprio have?"
encoding = tokenizer(table, question, return_tensors="pt")
# let the model generate an answer autoregressively
outputs = model.generate(**encoding)
# decode back to text
predicted_answer = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(predicted_answer)
53
Note that TapexTokenizer also supports batched inference. Hence, one can provide a batch of different tables/questions, or a batch of a single table
and multiple questions, or a batch of a single query and multiple tables. Let’s illustrate this:
# prepare table + question
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
questions = [
... "how many movies does Leonardo Di Caprio have?",
... "which actor has 69 movies?",
... "what's the first name of the actor who has 87 movies?",
... ]
encoding = tokenizer(table, questions, padding=True, return_tensors="pt")
# let the model generate an answer autoregressively
outputs = model.generate(**encoding)
# decode back to text
tokenizer.batch_decode(outputs, skip_special_tokens=True)
[' 53', ' george clooney', ' brad pitt']
In case one wants to do table verification (i.e. the task of determining whether a given sentence is supported or refuted by the contents
of a table), one can instantiate a BartForSequenceClassification model. TAPEX has checkpoints on the hub fine-tuned on TabFact, an important
benchmark for table fact checking (it achieves 84% accuracy). The code example below again leverages the Auto API.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
# prepare table + sentence
data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
sentence = "George Clooney has 30 movies"
encoding = tokenizer(table, sentence, return_tensors="pt")
# forward pass
outputs = model(**encoding)
# print prediction
predicted_class_idx = outputs.logits[0].argmax(dim=0).item()
print(model.config.id2label[predicted_class_idx])
Refused
TapexTokenizer
class transformers.TapexTokenizer
(
vocab_file
merges_file
do_lower_case = True
errors = 'replace'
bos_token = '<s>'
eos_token = '</s>'
sep_token = '</s>'
cls_token = '<s>'
unk_token = '<unk>'
pad_token = '<pad>'
mask_token = '<mask>'
add_prefix_space = False
max_cell_length = 15
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
do_lower_case (bool, optional, defaults to True) —
Whether or not to lowercase the input when tokenizing.
errors (str, optional, defaults to "replace") —
Paradigm to follow when decoding bytes to UTF-8. See
bytes.decode for more information.
bos_token (str, optional, defaults to "<s>") —
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the cls_token.
eos_token (str, optional, defaults to "</s>") —
The end of sequence token.
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the sep_token.
sep_token (str, optional, defaults to "</s>") —
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") —
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (str, optional, defaults to "<pad>") —
The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") —
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (bool, optional, defaults to False) —
Whether or not to add an initial space to the input. This allows the leading word to be treated just like any
other word. (The BART tokenizer detects the beginning of a word by the preceding space.)
max_cell_length (int, optional, defaults to 15) —
Maximum number of characters per cell when linearizing a table. If this number is exceeded, truncation
takes place.
Construct a TAPEX tokenizer. Based on byte-level Byte-Pair-Encoding (BPE).
This tokenizer can be used to flatten one or more table(s) and concatenate them with one or more related sentences
to be used by TAPEX models. The format that the TAPEX tokenizer creates is the following:
sentence col: col1 | col2 | col 3 row 1 : val1 | val2 | val3 row 2 : …
The tokenizer supports a single table + single query, a single table and multiple queries (in which case the table
will be duplicated for every query), a single query and multiple tables (in which case the query will be duplicated
for every table), and multiple tables and queries. In other words, you can provide a batch of tables + questions to
the tokenizer for instance to prepare them for the model.
Tokenization itself is based on the BPE algorithm. It is identical to the one used by BART, RoBERTa and GPT-2.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
__call__
(
table: typing.Union[ForwardRef('pd.DataFrame'), typing.List[ForwardRef('pd.DataFrame')]] = None
query: typing.Union[str, typing.List[str], NoneType] = None
answer: typing.Union[str, typing.List[str]] = None
add_special_tokens: bool = True
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False
truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None
max_length: typing.Optional[int] = None
stride: int = 0
pad_to_multiple_of: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
return_token_type_ids: typing.Optional[bool] = None
return_attention_mask: typing.Optional[bool] = None
return_overflowing_tokens: bool = False
return_special_tokens_mask: bool = False
return_offsets_mapping: bool = False
return_length: bool = False
verbose: bool = True
**kwargs
)
Parameters
table (pd.DataFrame, List[pd.DataFrame]) —
Table(s) containing tabular data.
query (str or List[str], optional) —
Sentence or batch of sentences related to one or more table(s) to be encoded. Note that the number of
sentences must match the number of tables.
answer (str or List[str], optional) —
Optionally, the corresponding answer to the questions as supervision.
add_special_tokens (bool, optional, defaults to True) —
Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str or TruncationStrategy, optional, defaults to False) —
Activates and controls truncation. Accepts the following values:
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or
to the maximum acceptable input length for the model if that argument is not provided. This will
truncate token by token, removing a token from the longest sequence in the pair if a pair of
sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional) —
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to None, this will use the predefined model maximum length if a maximum length
is required by one of the truncation/padding parameters. If the model has no specific maximum input
length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) —
If set to a number along with max_length, the overflowing tokens returned when
return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence
returned to provide some overlap between truncated and overflowing sequences. The value of this
argument defines the number of overlapping tokens.
is_split_into_words (bool, optional, defaults to False) —
Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the
tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace)
which it will tokenize. This is useful for NER or token classification.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value. Requires padding to be activated.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
>= 7.5 (Volta).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
add_special_tokens (bool, optional, defaults to True) —
Whether or not to encode the sequences with the special tokens relative to their model.
padding (bool, str or PaddingStrategy, optional, defaults to False) —
Activates and controls padding. Accepts the following values:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, str, TapexTruncationStrategy or TruncationStrategy, optional, defaults to False) —
Activates and controls truncation. Accepts the following values:
'drop_rows_to_fit': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will truncate
row by row, removing rows from the table.
True or 'longest_first': Truncate to a maximum length specified with the argument max_length or
to the maximum acceptable input length for the model if that argument is not provided. This will
truncate token by token, removing a token from the longest sequence in the pair if a pair of
sequences (or a batch of pairs) is provided.
'only_first': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
'only_second': Truncate to a maximum length specified with the argument max_length or to the
maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths
greater than the model maximum admissible input size).
max_length (int, optional) —
Controls the maximum length to use by one of the truncation/padding parameters. If left unset or set to
None, this will use the predefined model maximum length if a maximum length is required by one of the
truncation/padding parameters. If the model has no specific maximum input length (like XLNet)
truncation/padding to a maximum length will be deactivated.
stride (int, optional, defaults to 0) —
If set to a number along with max_length, the overflowing tokens returned when
return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence
returned to provide some overlap between truncated and overflowing sequences. The value of this
argument defines the number of overlapping tokens.
pad_to_multiple_of (int, optional) —
If set will pad the sequence to a multiple of the provided value. This is especially useful to enable
the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
Main method to tokenize and prepare for the model one or several table-sequence pair(s).
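As a quick, hedged illustration of the options above, the sketch below (again using the WTQ checkpoint from the usage section) passes a single query together with two tables, so the query is duplicated for every table, and requests the TAPEX-specific 'drop_rows_to_fit' truncation; the printed shape assumes the chosen max_length of 128:
from transformers import TapexTokenizer
import pandas as pd
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-large-finetuned-wtq")
table_a = pd.DataFrame.from_dict({"City": ["Paris", "Lyon"], "Population": ["2.1M", "0.5M"]})
table_b = pd.DataFrame.from_dict({"City": ["Berlin", "Munich"], "Population": ["3.6M", "1.5M"]})
query = "which city has the largest population?"
# one query, two tables: the query is duplicated for every table;
# 'drop_rows_to_fit' removes table rows instead of cutting tokens mid-cell
encoding = tokenizer([table_a, table_b], query, padding="max_length", truncation="drop_rows_to_fit", max_length=128, return_tensors="pt")
print(encoding.input_ids.shape)  # torch.Size([2, 128])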
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
Nyströmformer
Overview
The Nyströmformer model was proposed in Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn
Fung, Yin Li, and Vikas Singh.
The abstract from the paper is the following:
Transformers have emerged as a powerful tool for a broad range of natural language processing tasks. A key component
that drives the impressive performance of Transformers is the self-attention mechanism that encodes the influence or
dependence of other tokens on each specific token. While beneficial, the quadratic complexity of self-attention on the
input sequence length has limited its application to longer sequences — a topic being actively studied in the
community. To address this limitation, we propose Nyströmformer — a model that exhibits favorable scalability as a
function of sequence length. Our idea is based on adapting the Nyström method to approximate standard self-attention
with O(n) complexity. The scalability of Nyströmformer enables application to longer sequences with thousands of
tokens. We perform evaluations on multiple downstream tasks on the GLUE benchmark and IMDB reviews with standard
sequence length, and find that our Nyströmformer performs comparably, or in a few cases, even slightly better, than
standard self-attention. On longer sequence tasks in the Long Range Arena (LRA) benchmark, Nyströmformer performs
favorably relative to other efficient self-attention methods. Our code is available at this https URL.
This model was contributed by novice03. The original code can be found here.
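To make the core idea concrete, here is a minimal, self-contained sketch of landmark-based (Nyström) attention. It is not the library's implementation (which computes landmarks via segment-means, approximates the Moore-Penrose pseudo-inverse iteratively, and adds an optional depthwise convolution, as reflected in the configuration options below); it only shows how three small softmax kernels built from landmark vectors replace the full sequence-length-squared attention matrix:
import torch
import torch.nn.functional as F
def nystrom_attention(q, k, v, num_landmarks=64):
    # q, k, v: (batch, heads, seq_len, head_dim); seq_len must divide evenly by num_landmarks here
    b, h, n, d = q.shape
    scale = d ** -0.5
    # landmarks as segment means over the sequence dimension
    q_land = q.reshape(b, h, num_landmarks, n // num_landmarks, d).mean(dim=-2)
    k_land = k.reshape(b, h, num_landmarks, n // num_landmarks, d).mean(dim=-2)
    # three small softmax kernels replace the full n x n attention matrix
    kernel_1 = F.softmax(q @ k_land.transpose(-1, -2) * scale, dim=-1)       # (n, m)
    kernel_2 = F.softmax(q_land @ k_land.transpose(-1, -2) * scale, dim=-1)  # (m, m)
    kernel_3 = F.softmax(q_land @ k.transpose(-1, -2) * scale, dim=-1)       # (m, n)
    # softmax(QK^T / sqrt(d)) V is approximated by kernel_1 @ pinv(kernel_2) @ (kernel_3 @ V);
    # the library approximates the pseudo-inverse iteratively instead of calling pinv directly
    return kernel_1 @ torch.linalg.pinv(kernel_2) @ (kernel_3 @ v)
q = k = v = torch.randn(1, 8, 512, 64)
out = nystrom_attention(q, k, v)  # shape (1, 8, 512, 64)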
Documentation resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
NystromformerConfig
class transformers.NystromformerConfig
(
vocab_size = 30000
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu_new'
hidden_dropout_prob = 0.1
attention_probs_dropout_prob = 0.1
max_position_embeddings = 510
type_vocab_size = 2
segment_means_seq_len = 64
num_landmarks = 64
conv_kernel_size = 65
inv_coeff_init_option = False
initializer_range = 0.02
layer_norm_eps = 1e-05
pad_token_id = 1
bos_token_id = 0
eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 30000) —
Vocabulary size of the Nystromformer model. Defines the number of different tokens that can be represented
by the input_ids passed when calling NystromformerModel.
hidden_size (int, optional, defaults to 768) —
Dimension of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu_new") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
max_position_embeddings (int, optional, defaults to 510) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (int, optional, defaults to 2) —
The vocabulary size of the token_type_ids passed when calling NystromformerModel.
segment_means_seq_len (int, optional, defaults to 64) —
Sequence length used in segment-means.
num_landmarks (int, optional, defaults to 64) —
The number of landmark (or Nystrom) points to use in Nystrom approximation of the softmax self-attention
matrix.
conv_kernel_size (int, optional, defaults to 65) —
The kernel size of depthwise convolution used in Nystrom approximation.
inv_coeff_init_option (bool, optional, defaults to False) —
Whether or not to use exact coefficient computation for the initial values for the iterative method of
calculating the Moore-Penrose inverse of a matrix.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization layers.
This is the configuration class to store the configuration of a NystromformerModel. It is used to instantiate
a Nystromformer model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the Nystromformer
uw-madison/nystromformer-512 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import NystromformerModel, NystromformerConfig
# Initializing a Nystromformer uw-madison/nystromformer-512 style configuration
configuration = NystromformerConfig()
# Initializing a model from the uw-madison/nystromformer-512 style configuration
model = NystromformerModel(configuration)
# Accessing the model configuration
configuration = model.config
NystromformerModel
class transformers.NystromformerModel
(
config
)
Parameters
config (NystromformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare Nyströmformer Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (NystromformerConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the
weighted average in the cross-attention heads.
The NystromformerModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, NystromformerModel
import torch
tokenizer = AutoTokenizer.from_pretrained("uw-madison/nystromformer-512")
model = NystromformerModel.from_pretrained("uw-madison/nystromformer-512")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
NystromformerForMaskedLM
class transformers.NystromformerForMaskedLM
(
config
)
Parameters
config (NystromformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Nyströmformer Model with a language modeling head on top.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the
loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (NystromformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The NystromformerForMaskedLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, NystromformerForMaskedLM
import torch
tokenizer = AutoTokenizer.from_pretrained("uw-madison/nystromformer-512")
model = NystromformerForMaskedLM.from_pretrained("uw-madison/nystromformer-512")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# mask labels of non-[MASK] tokens
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
outputs = model(**inputs, labels=labels)
NystromformerForSequenceClassification
class transformers.NystromformerForSequenceClassification
(
config
)
Parameters
config (NystromformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Nyströmformer Model transformer with a sequence classification/regression head on top (a linear layer on top of the
pooled output) e.g. for GLUE tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (NystromformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The NystromformerForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, NystromformerForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("uw-madison/nystromformer-512")
model = NystromformerForSequenceClassification.from_pretrained("uw-madison/nystromformer-512")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = NystromformerForSequenceClassification.from_pretrained("uw-madison/nystromformer-512", num_labels=num_labels)
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, NystromformerForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("uw-madison/nystromformer-512")
model = NystromformerForSequenceClassification.from_pretrained("uw-madison/nystromformer-512", problem_type="multi_label_classification")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = NystromformerForSequenceClassification.from_pretrained(
... "uw-madison/nystromformer-512", num_labels=num_labels, problem_type="multi_label_classification"
... )
labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
loss = model(**inputs, labels=labels).loss
NystromformerForMultipleChoice
class transformers.NystromformerForMultipleChoice
(
config
)
Parameters
config (NystromformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Nyströmformer Model with a multiple choice classification head on top (a linear layer on top of the pooled output
and a softmax) e.g. for RocStories/SWAG tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See
input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (NystromformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The NystromformerForMultipleChoice forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, NystromformerForMultipleChoice
import torch
tokenizer = AutoTokenizer.from_pretrained("uw-madison/nystromformer-512")
model = NystromformerForMultipleChoice.from_pretrained("uw-madison/nystromformer-512")
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
# the linear classifier still needs to be trained
loss = outputs.loss
logits = outputs.logits
NystromformerForTokenClassification
class transformers.NystromformerForTokenClassification
(
config
)
Parameters
config (NystromformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Nyströmformer Model with a token classification head on top (a linear layer on top of the hidden-states output)
e.g. for Named-Entity-Recognition (NER) tasks.
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (NystromformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The NystromformerForTokenClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, NystromformerForTokenClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("uw-madison/nystromformer-512")
model = NystromformerForTokenClassification.from_pretrained("uw-madison/nystromformer-512")
inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
with torch.no_grad():
... logits = model(**inputs).logits
predicted_token_class_ids = logits.argmax(-1)
# Note that tokens are classified rather than input words, which means that
# there might be more predicted token classes than words.
# Multiple token classes might account for the same word
predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
labels = predicted_token_class_ids
loss = model(**inputs, labels=labels).loss
NystromformerForQuestionAnswering
class transformers.NystromformerForQuestionAnswering
(
config
)
Parameters
config (NystromformerConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
Nyströmformer Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear
layers on top of the hidden-states output to compute span start logits and span end logits).
This model is a PyTorch torch.nn.Module sub-class. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
start_positions: typing.Optional[torch.LongTensor] = None
end_positions: typing.Optional[torch.LongTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and
PreTrainedTokenizer.__call__() for details.
What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) —
Labels for position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (NystromformerConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The NystromformerForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoTokenizer, NystromformerForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("uw-madison/nystromformer-512")
model = NystromformerForQuestionAnswering.from_pretrained("uw-madison/nystromformer-512")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
# target is "nice puppet"
target_start_index = torch.tensor([14])
target_end_index = torch.tensor([15])
outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
loss = outputs.loss
EnCodec
Overview
The EnCodec neural codec model was proposed in High Fidelity Neural Audio Compression by Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi.
The abstract from the paper is the following:
We introduce a state-of-the-art real-time, high-fidelity, audio codec leveraging neural networks. It consists in a streaming encoder-decoder architecture with quantized latent space trained in an end-to-end fashion. We simplify and speed-up the training by using a single multiscale spectrogram adversary that efficiently reduces artifacts and produce high-quality samples. We introduce a novel loss balancer mechanism to stabilize training: the weight of a loss now defines the fraction of the overall gradient it should represent, thus decoupling the choice of this hyper-parameter from the typical scale of the loss. Finally, we study how lightweight Transformer models can be used to further compress the obtained representation by up to 40%, while staying faster than real time. We provide a detailed description of the key design choices of the proposed model including: training objective, architectural changes and a study of various perceptual loss functions. We present an extensive subjective evaluation (MUSHRA tests) together with an ablation study for a range of bandwidths and audio domains, including speech, noisy-reverberant speech, and music. Our approach is superior to the baselines methods across all evaluated settings, considering both 24 kHz monophonic and 48 kHz stereophonic audio.
This model was contributed by Matthijs, Patrick Von Platen and Arthur Zucker.
The original code can be found here.
Here is a quick example of how to encode and decode an audio sample using this model:
from datasets import load_dataset, Audio
from transformers import EncodecModel, AutoProcessor
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
model = EncodecModel.from_pretrained("facebook/encodec_24khz")
processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")
librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate))
audio_sample = librispeech_dummy[-1]["audio"]["array"]
inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt")
encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"])
audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0]
# or the equivalent with a forward pass
audio_values = model(inputs["input_values"], inputs["padding_mask"]).audio_values
EncodecConfig
class transformers.EncodecConfig
<
source
>
(
target_bandwidths = [1.5, 3.0, 6.0, 12.0, 24.0]
sampling_rate = 24000
audio_channels = 1
normalize = False
chunk_length_s = None
overlap = None
hidden_size = 128
num_filters = 32
num_residual_layers = 1
upsampling_ratios = [8, 5, 4, 2]
norm_type = 'weight_norm'
kernel_size = 7
last_kernel_size = 7
residual_kernel_size = 3
dilation_growth_rate = 2
use_causal_conv = True
pad_mode = 'reflect'
compress = 2
num_lstm_layers = 2
trim_right_ratio = 1.0
codebook_size = 1024
codebook_dim = None
use_conv_shortcut = True
**kwargs
)
Parameters
target_bandwidths (List[float], optional, defaults to [1.5, 3.0, 6.0, 12.0, 24.0]) —
The range of different bandwidths the model can encode audio with.
sampling_rate (int, optional, defaults to 24000) —
The sampling rate at which the audio waveform should be digitized, expressed in hertz (Hz).
audio_channels (int, optional, defaults to 1) —
Number of channels in the audio data. Either 1 for mono or 2 for stereo.
normalize (bool, optional, defaults to False) —
Whether the audio should be normalized when passed.
chunk_length_s (float, optional) —
If defined, the audio is pre-processed into chunks of length chunk_length_s and then encoded.
overlap (float, optional) —
Defines the overlap between each chunk. It is used to compute the chunk_stride using the following
formula: int((1.0 - self.overlap) * self.chunk_length).
hidden_size (int, optional, defaults to 128) —
Intermediate representation dimension.
num_filters (int, optional, defaults to 32) —
Number of convolution kernels in the first EncodecConv1d downsampling layer.
num_residual_layers (int, optional, defaults to 1) —
Number of residual layers.
upsampling_ratios (Sequence[int] , optional, defaults to [8, 5, 4, 2]) —
Kernel size and stride ratios. The encoder uses downsampling ratios instead of upsampling ratios, hence it
will use the ratios in the reverse order to the ones specified here, which must match the decoder order.
norm_type (str, optional, defaults to "weight_norm") —
Normalization method. Should be one of "weight_norm" or "time_group_norm".
kernel_size (int, optional, defaults to 7) —
Kernel size for the initial convolution.
last_kernel_size (int, optional, defaults to 7) —
Kernel size for the last convolution layer.
residual_kernel_size (int, optional, defaults to 3) —
Kernel size for the residual layers.
dilation_growth_rate (int, optional, defaults to 2) —
How much to increase the dilation with each layer.
use_causal_conv (bool, optional, defaults to True) —
Whether to use fully causal convolution.
pad_mode (str, optional, defaults to "reflect") —
Padding mode for the convolutions.
compress (int, optional, defaults to 2) —
Reduced dimensionality in residual branches (from Demucs v3).
num_lstm_layers (int, optional, defaults to 2) —
Number of LSTM layers at the end of the encoder.
trim_right_ratio (float, optional, defaults to 1.0) —
Ratio for trimming at the right of the transposed convolution under the use_causal_conv = True setup. If
equal to 1.0, it means that all the trimming is done at the right.
codebook_size (int, optional, defaults to 1024) —
Number of discrete codes that make up the VQVAE.
codebook_dim (int, optional) —
Dimension of the codebook vectors. If not defined, uses hidden_size.
use_conv_shortcut (bool, optional, defaults to True) —
Whether to use a convolutional layer as the ‘skip’ connection in the EncodecResnetBlock block. If False,
an identity function will be used, giving a generic residual connection.
This is the configuration class to store the configuration of an EncodecModel. It is used to instantiate an
Encodec model according to the specified arguments, defining the model architecture. Instantiating a configuration
with the defaults will yield a similar configuration to that of the
facebook/encodec_24khz architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import EncodecModel, EncodecConfig
# Initializing a "facebook/encodec_24khz" style configuration
configuration = EncodecConfig()
# Initializing a model (with random weights) from the "facebook/encodec_24khz" style configuration
model = EncodecModel(configuration)
# Accessing the model configuration
configuration = model.config
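When chunk_length_s and overlap are both set, the chunk stride follows the formula given in the parameter descriptions above. Below is a minimal sketch of that arithmetic; the manual recomputation of chunk_length and chunk_stride is only for illustration under the stated formula, so check the EncodecConfig source for the exact derived properties.
from transformers import EncodecConfig

# Hypothetical chunked-encoding setup: 1-second chunks with 25% overlap at 24 kHz.
configuration = EncodecConfig(chunk_length_s=1.0, overlap=0.25)

# Assumed derivation, mirroring the formula from the overlap parameter description:
chunk_length = int(configuration.sampling_rate * configuration.chunk_length_s)  # 24000 samples
chunk_stride = int((1.0 - configuration.overlap) * chunk_length)  # 18000 samples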
EncodecFeatureExtractor
class transformers.EncodecFeatureExtractor
<
source
>
(
feature_size: int = 1
sampling_rate: int = 24000
padding_value: float = 0.0
chunk_length_s: float = None
overlap: float = None
**kwargs
)
Parameters
feature_size (int, optional, defaults to 1) —
The feature dimension of the extracted features. Use 1 for mono, 2 for stereo.
sampling_rate (int, optional, defaults to 24000) —
The sampling rate at which the audio waveform should be digitized, expressed in hertz (Hz).
padding_value (float, optional, defaults to 0.0) —
The value that is used to fill the padding values.
chunk_length_s (float, optional) —
If defined, the audio is pre-processed into chunks of length chunk_length_s and then encoded.
overlap (float, optional) —
Defines the overlap between each chunk. It is used to compute the chunk_stride using the following
formula: int((1.0 - self.overlap) * self.chunk_length).
Constructs an EnCodec feature extractor.
This feature extractor inherits from SequenceFeatureExtractor which contains
most of the main methods. Users should refer to this superclass for more information regarding those methods.
Instantiating a feature extractor with the defaults will yield a similar configuration to that of the
facebook/encodec_24khz architecture.
__call__
<
source
>
(
raw_audio: typing.Union[numpy.ndarray, typing.List[float], typing.List[numpy.ndarray], typing.List[typing.List[float]]]
padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy, NoneType] = None
truncation: typing.Optional[bool] = False
max_length: typing.Optional[int] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
sampling_rate: typing.Optional[int] = None
)
Parameters
raw_audio (np.ndarray, List[float], List[np.ndarray], List[List[float]]) —
The sequence or batch of sequences to be processed. Each sequence can be a numpy array, a list of float
values, a list of numpy arrays or a list of list of float values. The numpy array must be of shape
(num_samples,) for mono audio (feature_size = 1), or (2, num_samples) for stereo audio
(feature_size = 2).
padding (bool, str or PaddingStrategy, optional, defaults to True) —
Select a strategy to pad the returned sequences (according to the model’s padding side and padding
index) among:
True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
'max_length': Pad to a maximum length specified with the argument max_length or to the maximum
acceptable input length for the model if that argument is not provided.
False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different
lengths).
truncation (bool, optional, defaults to False) —
Activates truncation to cut input sequences longer than max_length to max_length.
max_length (int, optional) —
Maximum length of the returned list and optionally padding length (see above).
return_tensors (str or TensorType, optional) —
If set, will return tensors instead of list of python integers. Acceptable values are:
'tf': Return TensorFlow tf.constant objects.
'pt': Return PyTorch torch.Tensor objects.
'np': Return Numpy np.ndarray objects.
sampling_rate (int, optional) —
The sampling rate at which the audio input was sampled. It is strongly recommended to pass
sampling_rate at the forward call to prevent silent errors.
Main method to featurize and prepare one or several sequence(s) for the model.
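For instance, a minimal sketch of featurizing a mono waveform (the zero-filled array is just a stand-in for real audio):
import numpy as np
from transformers import EncodecFeatureExtractor

feature_extractor = EncodecFeatureExtractor.from_pretrained("facebook/encodec_24khz")
# One second of placeholder mono audio at the extractor's sampling rate.
raw_audio = np.zeros(feature_extractor.sampling_rate, dtype=np.float32)
inputs = feature_extractor(raw_audio=raw_audio, sampling_rate=feature_extractor.sampling_rate, return_tensors="pt")
inputs["input_values"].shape  # batch x channels x samples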
EncodecModel
class transformers.EncodecModel
<
source
>
(
config: EncodecConfig
)
Parameters
config (EncodecConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
from_pretrained() method to load the model weights.
The EnCodec neural audio codec model.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
decode
<
source
>
(
audio_codes: Tensor
audio_scales: Tensor
padding_mask: typing.Optional[torch.Tensor] = None
return_dict: typing.Optional[bool] = None
)
Parameters
audio_codes (torch.FloatTensor of shape (batch_size, nb_chunks, chunk_length), optional) —
Discrete code embeddings computed using model.encode.
audio_scales (torch.Tensor of shape (batch_size, nb_chunks), optional) —
Scaling factor for each audio_codes input.
padding_mask (torch.Tensor of shape (batch_size, channels, sequence_length)) —
Padding mask used to pad the input_values.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Decodes the given frames into an output audio waveform.
Note that the output might be a bit bigger than the input. In that case, any extra steps at the end can be
trimmed.
encode
<
source
>
(
input_values: Tensor
padding_mask: Tensor = None
bandwidth: typing.Optional[float] = None
return_dict: typing.Optional[bool] = None
)
Parameters
input_values (torch.Tensor of shape (batch_size, channels, sequence_length)) —
Float values of the input audio waveform.
padding_mask (torch.Tensor of shape (batch_size, channels, sequence_length)) —
Padding mask used to pad the input_values.
bandwidth (float, optional) —
The target bandwidth. Must be one of config.target_bandwidths. If None, uses the smallest possible
bandwidth. The bandwidth is expressed in thousands of bits per second (kbps), e.g. a 6 kbps bandwidth is
passed as bandwidth=6.0.
Encodes the input audio waveform into discrete codes.
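Putting encode and decode together, here is a minimal round-trip sketch mirroring the overview example above; the explicit bandwidth value is an assumption and must be one of config.target_bandwidths:
from datasets import load_dataset, Audio
from transformers import AutoProcessor, EncodecModel

model = EncodecModel.from_pretrained("facebook/encodec_24khz")
processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate))
audio_sample = librispeech_dummy[-1]["audio"]["array"]
inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt")
# Encode at an explicit target bandwidth (6 kbps here, assumed), then decode back to a waveform.
encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"], bandwidth=6.0)
audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0]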
forward
<
source
>
(
input_values: Tensor
padding_mask: typing.Optional[torch.Tensor] = None
bandwidth: typing.Optional[float] = None
audio_codes: typing.Optional[torch.Tensor] = None
audio_scales: typing.Optional[torch.Tensor] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.encodec.modeling_encodec.EncodecOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, channels, sequence_length), optional) —
Raw audio input converted to float and padded to the appropriate length in order to be encoded using chunks
of length self.chunk_length and a stride of config.chunk_stride.
padding_mask (torch.BoolTensor of shape (batch_size, channels, sequence_length), optional) —
Mask to avoid computing scaling factors on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
padding_mask should always be passed, unless the input was truncated or not padded. This is because in
order to process tensors effectively, the input audio should be padded so that input_length % stride == step,
with step = chunk_length - stride, which ensures that all chunks are of the same shape.
bandwidth (float, optional) —
The target bandwidth. Must be one of config.target_bandwidths. If None, uses the smallest possible
bandwidth. The bandwidth is expressed in thousands of bits per second (kbps), e.g. a 6 kbps bandwidth is
passed as bandwidth=6.0.
audio_codes (torch.FloatTensor of shape (batch_size, nb_chunks, chunk_length), optional) —
Discrete code embeddings computed using model.encode.
audio_scales (torch.Tensor of shape (batch_size, nb_chunks), optional) —
Scaling factor for each audio_codes input.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.models.encodec.modeling_encodec.EncodecOutput or tuple(torch.FloatTensor)
A transformers.models.encodec.modeling_encodec.EncodecOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (EncodecConfig) and inputs.
audio_codes (torch.FloatTensor of shape (batch_size, nb_chunks, chunk_length), optional) — Discrete code embeddings computed using model.encode.
audio_values (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Decoded audio values, obtained using the decoder part of Encodec.
The EncodecModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Examples:
from datasets import load_dataset
from transformers import AutoProcessor, EncodecModel
dataset = load_dataset("ashraq/esc50")
audio_sample = dataset["train"]["audio"][0]["array"]
model_id = "facebook/encodec_24khz"
model = EncodecModel.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
inputs = processor(raw_audio=audio_sample, return_tensors="pt")
outputs = model(**inputs)
audio_codes = outputs.audio_codes
audio_values = outputs.audio_values
YOLOS
Overview
The YOLOS model was proposed in You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
YOLOS proposes to just leverage the plain Vision Transformer (ViT) for object detection, inspired by DETR. It turns out that a base-sized encoder-only Transformer can also achieve 42 AP on COCO, similar to DETR and much more complex frameworks such as Faster R-CNN.
The abstract from the paper is the following:
Can Transformer perform 2D object- and region-level recognition from a pure sequence-to-sequence perspective with minimal knowledge about the 2D spatial structure? To answer this question, we present You Only Look at One Sequence (YOLOS), a series of object detection models based on the vanilla Vision Transformer with the fewest possible modifications, region priors, as well as inductive biases of the target task. We find that YOLOS pre-trained on the mid-sized ImageNet-1k dataset only can already achieve quite competitive performance on the challenging COCO object detection benchmark, e.g., YOLOS-Base directly adopted from BERT-Base architecture can obtain 42.0 box AP on COCO val. We also discuss the impacts as well as limitations of current pre-train schemes and model scaling strategies for Transformer in vision through YOLOS.
Tips:
One can use YolosImageProcessor for preparing images (and optional targets) for the model. Contrary to DETR, YOLOS doesn’t require a pixel_mask to be created.
YOLOS architecture. Taken from the original paper.
This model was contributed by nielsr. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with YOLOS.
Object Detection
All example notebooks illustrating inference + fine-tuning YolosForObjectDetection on a custom dataset can be found here.
See also: Object detection task guide
If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
YolosConfig
class transformers.YolosConfig
<
source
>
(
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
hidden_act = 'gelu'
hidden_dropout_prob = 0.0
attention_probs_dropout_prob = 0.0
initializer_range = 0.02
layer_norm_eps = 1e-12
image_size = [512, 864]
patch_size = 16
num_channels = 3
qkv_bias = True
num_detection_tokens = 100
use_mid_position_embeddings = True
auxiliary_loss = False
class_cost = 1
bbox_cost = 5
giou_cost = 2
bbox_loss_coefficient = 5
giou_loss_coefficient = 2
eos_coefficient = 0.1
**kwargs
)
Parameters
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu" and "gelu_new" are supported.
hidden_dropout_prob (float, optional, defaults to 0.0) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (float, optional, defaults to 0.0) —
The dropout ratio for the attention probabilities.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-12) —
The epsilon used by the layer normalization layers.
image_size (List[int], optional, defaults to [512, 864]) —
The size (resolution) of each image.
patch_size (int, optional, defaults to 16) —
The size (resolution) of each patch.
num_channels (int, optional, defaults to 3) —
The number of input channels.
qkv_bias (bool, optional, defaults to True) —
Whether to add a bias to the queries, keys and values.
num_detection_tokens (int, optional, defaults to 100) —
The number of detection tokens.
use_mid_position_embeddings (bool, optional, defaults to True) —
Whether to use the mid-layer position encodings.
auxiliary_loss (bool, optional, defaults to False) —
Whether auxiliary decoding losses (loss at each decoder layer) are to be used.
class_cost (float, optional, defaults to 1) —
Relative weight of the classification error in the Hungarian matching cost.
bbox_cost (float, optional, defaults to 5) —
Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost.
giou_cost (float, optional, defaults to 2) —
Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost.
bbox_loss_coefficient (float, optional, defaults to 5) —
Relative weight of the L1 bounding box loss in the object detection loss.
giou_loss_coefficient (float, optional, defaults to 2) —
Relative weight of the generalized IoU loss in the object detection loss.
eos_coefficient (float, optional, defaults to 0.1) —
Relative classification weight of the ‘no-object’ class in the object detection loss.
This is the configuration class to store the configuration of a YolosModel. It is used to instantiate a YOLOS
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the YOLOS
hustvl/yolos-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import YolosConfig, YolosModel
# Initializing a YOLOS hustvl/yolos-base style configuration
configuration = YolosConfig()
# Initializing a model (with random weights) from the hustvl/yolos-base style configuration
model = YolosModel(configuration)
# Accessing the model configuration
configuration = model.config
YolosImageProcessor
class transformers.YolosImageProcessor
<
source
>
(
format: typing.Union[str, transformers.models.yolos.image_processing_yolos.AnnotionFormat] = <AnnotionFormat.COCO_DETECTION: 'coco_detection'>
do_resize: bool = True
size: typing.Dict[str, int] = None
resample: Resampling = <Resampling.BILINEAR: 2>
do_rescale: bool = True
rescale_factor: typing.Union[int, float] = 0.00392156862745098
do_normalize: bool = True
image_mean: typing.Union[float, typing.List[float]] = None
image_std: typing.Union[float, typing.List[float]] = None
do_pad: bool = True
**kwargs
)
Parameters
format (str, optional, defaults to "coco_detection") —
Data format of the annotations. One of “coco_detection” or “coco_panoptic”.
do_resize (bool, optional, defaults to True) —
Controls whether to resize the image’s (height, width) dimensions to the specified size. Can be
overridden by the do_resize parameter in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 800, "longest_edge": 1333}) —
Size of the image’s (height, width) dimensions after resizing. Can be overridden by the size parameter in
the preprocess method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) —
Resampling filter to use if resizing the image.
do_rescale (bool, optional, defaults to True) —
Controls whether to rescale the image by the specified scale rescale_factor. Can be overridden by the
do_rescale parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) —
Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the
preprocess method.
do_normalize (bool, optional, defaults to True) —
Controls whether to normalize the image. Can be overridden by the do_normalize parameter in the
preprocess method.
image_mean (float or List[float], optional, defaults to IMAGENET_DEFAULT_MEAN) —
Mean values to use when normalizing the image. Can be a single value or a list of values, one for each
channel. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_DEFAULT_STD) —
Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one
for each channel. Can be overridden by the image_std parameter in the preprocess method.
do_pad (bool, optional, defaults to True) —
Controls whether to pad the image to the largest image in a batch and create a pixel mask. Can be
overridden by the do_pad parameter in the preprocess method.
Constructs a YOLOS image processor.
preprocess
<
source
>
(
images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]]
annotations: typing.Union[typing.Dict[str, typing.Union[int, str, typing.List[typing.Dict]]], typing.List[typing.Dict[str, typing.Union[int, str, typing.List[typing.Dict]]]], NoneType] = None
return_segmentation_masks: bool = None
masks_path: typing.Union[str, pathlib.Path, NoneType] = None
do_resize: typing.Optional[bool] = None
size: typing.Union[typing.Dict[str, int], NoneType] = None
resample = None
do_rescale: typing.Optional[bool] = None
rescale_factor: typing.Union[int, float, NoneType] = None
do_normalize: typing.Optional[bool] = None
image_mean: typing.Union[float, typing.List[float], NoneType] = None
image_std: typing.Union[float, typing.List[float], NoneType] = None
do_pad: typing.Optional[bool] = None
format: typing.Union[str, transformers.models.yolos.image_processing_yolos.AnnotionFormat, NoneType] = None
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'>
**kwargs
)
Parameters
images (ImageInput) —
Image or batch of images to preprocess.
annotations (AnnotationType or List[AnnotationType], optional) —
List of annotations associated with the image or batch of images. If the annotation is for object
detection, the annotations should be a dictionary with the following keys:
“image_id” (int): The image id.
“annotations” (List[Dict]): List of annotations for an image. Each annotation should be a
dictionary. An image can have no annotations, in which case the list should be empty.
If the annotation is for segmentation, the annotations should be a dictionary with the following keys:
“image_id” (int): The image id.
“segments_info” (List[Dict]): List of segments for an image. Each segment should be a dictionary.
An image can have no segments, in which case the list should be empty.
“file_name” (str): The file name of the image.
return_segmentation_masks (bool, optional, defaults to self.return_segmentation_masks) —
Whether to return segmentation masks.
masks_path (str or pathlib.Path, optional) —
Path to the directory containing the segmentation masks.
do_resize (bool, optional, defaults to self.do_resize) —
Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size) —
Size of the image after resizing.
resample (PILImageResampling, optional, defaults to self.resample) —
Resampling filter to use when resizing the image.
do_rescale (bool, optional, defaults to self.do_rescale) —
Whether to rescale the image.
rescale_factor (float, optional, defaults to self.rescale_factor) —
Rescale factor to use when rescaling the image.
do_normalize (bool, optional, defaults to self.do_normalize) —
Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean) —
Mean to use when normalizing the image.
image_std (float or List[float], optional, defaults to self.image_std) —
Standard deviation to use when normalizing the image.
do_pad (bool, optional, defaults to self.do_pad) —
Whether to pad the image.
format (str or AnnotionFormat, optional, defaults to self.format) —
Format of the annotations.
return_tensors (str or TensorType, optional, defaults to self.return_tensors) —
Type of tensors to return. If None, will return the list of images.
data_format (str or ChannelDimension, optional, defaults to self.data_format) —
The channel dimension format of the image. If not provided, it will be the same as the input image.
Preprocess an image or a batch of images so that they can be used by the model.
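For example, a minimal sketch of preprocessing a single image together with a COCO-detection-style annotation; the placeholder image, the annotation values, and the checkpoint choice here are purely illustrative:
import numpy as np
from transformers import YolosImageProcessor

image_processor = YolosImageProcessor.from_pretrained("hustvl/yolos-tiny")
# Placeholder RGB image; any PIL image or NumPy array works.
image = np.zeros((480, 640, 3), dtype=np.uint8)
# COCO detection format: an "image_id" plus a list of per-object "annotations".
annotation = {
    "image_id": 0,
    "annotations": [{"bbox": [100, 100, 200, 150], "category_id": 1, "area": 30000, "iscrowd": 0}],
}
encoding = image_processor(images=image, annotations=annotation, return_tensors="pt")
encoding["pixel_values"].shape  # resized, normalized and padded pixel values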
pad
<
source
>
(
images: typing.List[numpy.ndarray]
return_pixel_mask: bool = False
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Optional[transformers.image_utils.ChannelDimension] = None
)
Parameters
images (List[np.ndarray]) —
Images to pad.
return_pixel_mask (bool, optional, defaults to False) —
Whether to return a pixel mask.
input_channel_dimension (ChannelDimension, optional) —
The channel dimension format of the image. If not provided, it will be inferred from the input image.
data_format (str or ChannelDimension, optional) —
The channel dimension format of the image. If not provided, it will be the same as the input image.
Pads a batch of images to the bottom and right of the image with zeros, up to the largest height and width
in the batch, and optionally returns their corresponding pixel mask.
post_process_object_detection
<
source
>
(
outputs
threshold: float = 0.5
target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None
)
→
List[Dict]
Parameters
outputs (YolosObjectDetectionOutput) —
Raw outputs of the model.
threshold (float, optional) —
Score threshold to keep object detection predictions.
target_sizes (torch.Tensor or List[Tuple[int, int]], optional) —
Tensor of shape (batch_size, 2) or list of tuples (Tuple[int, int]) containing the target size
(height, width) of each image in the batch. If unset, predictions will not be resized.
Returns
List[Dict]
A list of dictionaries, each dictionary containing the scores, labels and boxes for an image
in the batch as predicted by the model.
Converts the raw output of YolosForObjectDetection into final bounding boxes in (top_left_x, top_left_y,
bottom_right_x, bottom_right_y) format. Only supports PyTorch.
YolosFeatureExtractor
class transformers.YolosFeatureExtractor
<
source
>
(
*args
**kwargs
)
__call__
<
source
>
(
images
**kwargs
)
Preprocess an image or a batch of images.
pad
<
source
>
(
images: typing.List[numpy.ndarray]
return_pixel_mask: bool = False
return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None
data_format: typing.Optional[transformers.image_utils.ChannelDimension] = None
)
Parameters
images (List[np.ndarray]) —
Images to pad.
return_pixel_mask (bool, optional, defaults to False) —
Whether to return a pixel mask.
input_channel_dimension (ChannelDimension, optional) —
The channel dimension format of the image. If not provided, it will be inferred from the input image.
data_format (str or ChannelDimension, optional) —
The channel dimension format of the image. If not provided, it will be the same as the input image.
Pads a batch of images to the bottom and right of the image with zeros, up to the largest height and width
in the batch, and optionally returns their corresponding pixel mask.
post_process_object_detection
<
source
>
(
outputs
threshold: float = 0.5
target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None
)
→
List[Dict]
Parameters
outputs (YolosObjectDetectionOutput) —
Raw outputs of the model.
threshold (float, optional) —
Score threshold to keep object detection predictions.
target_sizes (torch.Tensor or List[Tuple[int, int]], optional) —
Tensor of shape (batch_size, 2) or list of tuples (Tuple[int, int]) containing the target size
(height, width) of each image in the batch. If unset, predictions will not be resized.
Returns
List[Dict]
A list of dictionaries, each dictionary containing the scores, labels and boxes for an image
in the batch as predicted by the model.
Converts the raw output of YolosForObjectDetection into final bounding boxes in (top_left_x, top_left_y,
bottom_right_x, bottom_right_y) format. Only supports PyTorch.
YolosModel
class transformers.YolosModel
<
source
>
(
config: YolosConfig
add_pooling_layer: bool = True
)
Parameters
config (YolosConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare YOLOS Model transformer outputting raw hidden-states without any specific head on top.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
pixel_values: typing.Optional[torch.Tensor] = None
head_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
YolosImageProcessor.__call__() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (YolosConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing
through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns
the classification token after processing through a linear layer and a tanh activation function. The linear
layer weights are trained from the next sentence prediction (classification) objective during pretraining.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The YolosModel forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Example:
from transformers import AutoImageProcessor, YolosModel
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
image_processor = AutoImageProcessor.from_pretrained("hustvl/yolos-small")
model = YolosModel.from_pretrained("hustvl/yolos-small")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 3401, 384]
YolosForObjectDetection
class transformers.YolosForObjectDetection
<
source
>
(
config: YolosConfig
)
Parameters
config (YolosConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
YOLOS Model (consisting of a ViT encoder) with object detection heads on top, for tasks such as COCO detection.
This model is a PyTorch torch.nn.Module subclass. Use it
as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and
behavior.
forward
<
source
>
(
pixel_values: FloatTensor
labels: typing.Optional[typing.List[typing.Dict]] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.models.yolos.modeling_yolos.YolosObjectDetectionOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) —
Pixel values. Pixel values can be obtained using AutoImageProcessor. See
YolosImageProcessor.__call__() for details.
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (List[Dict] of len (batch_size,), optional) —
Labels for computing the bipartite matching loss. List of dicts, each dictionary containing at least the
following 2 keys: 'class_labels' and 'boxes' (the class labels and bounding boxes of an image in the
batch respectively). The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,) and the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4).
Returns
transformers.models.yolos.modeling_yolos.YolosObjectDetectionOutput or tuple(torch.FloatTensor)
A transformers.models.yolos.modeling_yolos.YolosObjectDetectionOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (YolosConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided) — Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a
bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized
scale-invariant IoU loss.
loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging.
logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries.
pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These
values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding
possible padding). You can use post_process() to retrieve the unnormalized bounding
boxes.
auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss is set to True)
and labels are provided. It is a list of dictionaries containing the two above keys (logits and
pred_boxes) for each decoder layer.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of
the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in
the self-attention heads.
The YolosForObjectDetection forward method, overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps
while the latter silently ignores them.
Examples:
from transformers import AutoImageProcessor, AutoModelForObjectDetection
import torch
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("hustvl/yolos-tiny")
model = AutoModelForObjectDetection.from_pretrained("hustvl/yolos-tiny")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# convert outputs (bounding boxes and class logits) to COCO API
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
Detected remote with confidence 0.994 at location [46.96, 72.61, 181.02, 119.73]
Detected remote with confidence 0.975 at location [340.66, 79.19, 372.59, 192.65]
Detected cat with confidence 0.984 at location [12.27, 54.25, 319.42, 470.99]
Detected remote with confidence 0.922 at location [41.66, 71.96, 178.7, 120.33]
Detected cat with confidence 0.914 at location [342.34, 21.48, 638.64, 372.46]
SEW-D
Overview
SEW-D (Squeezed and Efficient Wav2Vec with Disentangled attention) was proposed in Performance-Efficiency Trade-offs
in Unsupervised Pre-training for Speech Recognition by Felix Wu, Kwangyoun Kim,
Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
The abstract from the paper is the following:
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition
(ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance
and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a
pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a
variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x
inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference
time, SEW reduces word error rate by 25-50% across different model sizes.
Tips:
SEW-D is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
SEWDForCTC is fine-tuned using connectionist temporal classification (CTC), so the model output has to be decoded
using Wav2Vec2CTCTokenizer, as sketched below.
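A minimal sketch of running such a fine-tuned checkpoint and decoding its CTC output; the asapp/sew-d-tiny-100k-ft-ls100h checkpoint and the dummy LibriSpeech split are assumptions for illustration:
import torch
from datasets import load_dataset
from transformers import AutoProcessor, SEWDForCTC

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Greedy CTC decoding: most likely token per frame, then collapse repeats/blanks with the tokenizer.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)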
This model was contributed by anton-l.
Documentation resources
Audio classification task guide
Automatic speech recognition task guide
SEWDConfig
class transformers.SEWDConfig
<
source
>
(
vocab_size = 32
hidden_size = 768
num_hidden_layers = 12
num_attention_heads = 12
intermediate_size = 3072
squeeze_factor = 2
max_position_embeddings = 512
position_buckets = 256
share_att_key = True
relative_attention = True
pos_att_type = ('p2c', 'c2p')
norm_rel_ebd = 'layer_norm'
hidden_act = 'gelu_python'
hidden_dropout = 0.1
activation_dropout = 0.1
attention_dropout = 0.1
feat_proj_dropout = 0.0
final_dropout = 0.1
initializer_range = 0.02
layer_norm_eps = 1e-07
feature_layer_norm_eps = 1e-05
feat_extract_norm = 'group'
feat_extract_activation = 'gelu'
conv_dim = (64, 128, 128, 128, 128, 256, 256, 256, 256, 512, 512, 512, 512)
conv_stride = (5, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1)
conv_kernel = (10, 3, 1, 3, 1, 3, 1, 3, 1, 2, 1, 2, 1)
conv_bias = False
num_conv_pos_embeddings = 128
num_conv_pos_embedding_groups = 16
apply_spec_augment = True
mask_time_prob = 0.05
mask_time_length = 10
mask_time_min_masks = 2
mask_feature_prob = 0.0
mask_feature_length = 10
mask_feature_min_masks = 0
ctc_loss_reduction = 'mean'
ctc_zero_infinity = False
use_weighted_layer_sum = False
classifier_proj_size = 256
pad_token_id = 0
bos_token_id = 1
eos_token_id = 2
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 32) —
Vocabulary size of the SEW-D model. Defines the number of different tokens that can be represented by the
input_ids passed when calling SEWDModel.
hidden_size (int, optional, defaults to 768) —
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (int, optional, defaults to 12) —
Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) —
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) —
Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
squeeze_factor (int, optional, defaults to 2) —
Sequence length downsampling factor after the encoder and upsampling factor after the transformer.
max_position_embeddings (int, optional, defaults to 512) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
position_buckets (int, optional, defaults to 256) —
The maximum size of relative position embeddings.
share_att_key (bool, optional, defaults to True) —
Whether to share attention key with c2p and p2c.
relative_attention (bool, optional, defaults to True) —
Whether to use relative position encoding.
pos_att_type (Tuple[str], optional, defaults to ("p2c", "c2p")) —
The type of relative position attention. It can be a combination of ("p2c", "c2p"), e.g. ("p2c"),
("c2p") or ("p2c", "c2p").
norm_rel_ebd (str, optional, defaults to "layer_norm") —
Whether to use layer norm in the relative embedding ("layer_norm" if yes).
hidden_act (str or function, optional, defaults to "gelu_python") —
The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu",
"relu", "selu", "gelu_python" and "gelu_new" are supported.
hidden_dropout (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) —
The dropout ratio for the attention probabilities.
final_dropout (float, optional, defaults to 0.1) —
The dropout probability for the final projection layer of SEWDForCTC.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-7) —
The epsilon used by the layer normalization layers in the transformer encoder.
feature_layer_norm_eps (float, optional, defaults to 1e-5) —
The epsilon used by the layer normalization after the feature encoder.
feat_extract_norm (str, optional, defaults to "group") —
The norm to be applied to 1D convolutional layers in feature encoder. One of "group" for group
normalization of only the first 1D convolutional layer or "layer" for layer normalization of all 1D
convolutional layers.
feat_proj_dropout (float, optional, defaults to 0.0) —
The dropout probability for output of the feature encoder.
feat_extract_activation (str, optional, defaults to "gelu") —
The non-linear activation function (function or string) in the 1D convolutional layers of the feature
extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
conv_dim (Tuple[int] or List[int], optional, defaults to (64, 128, 128, 128, 128, 256, 256, 256, 256, 512, 512, 512, 512)) —
A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
feature encoder. The length of conv_dim defines the number of 1D convolutional layers.
conv_stride (Tuple[int] or List[int], optional, defaults to (5, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1)) —
A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length
of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.
conv_kernel (Tuple[int] or List[int], optional, defaults to (10, 3, 1, 3, 1, 3, 1, 3, 1, 2, 1, 2, 1)) —
A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
length of conv_kernel defines the number of convolutional layers and has to match the length of
conv_dim.
conv_bias (bool, optional, defaults to False) —
Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (int, optional, defaults to 128) —
Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
embeddings layer.
num_conv_pos_embedding_groups (int, optional, defaults to 16) —
Number of groups of 1D convolutional positional embeddings layer.
apply_spec_augment (bool, optional, defaults to True) —
Whether to apply SpecAugment data augmentation to the outputs of the feature encoder. For reference see
SpecAugment: A Simple Data Augmentation Method for Automatic Speech
Recognition.
mask_time_prob (float, optional, defaults to 0.05) —
Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
procedure generates mask_time_prob*len(time_axis)/mask_time_length independent masks over the axis. If
reasoning from the probability of each feature vector to be chosen as the start of the vector span to be
masked, mask_time_prob should be prob_vector_start*mask_time_length. Note that overlap may decrease the
actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_time_length (int, optional, defaults to 10) —
Length of vector span along the time axis.
mask_time_min_masks (int, optional, defaults to 2) —
The minimum number of masks of length mask_time_length generated along the time axis, each time step,
irrespective of mask_time_prob. Only relevant if mask_time_prob*len(time_axis)/mask_time_length <
mask_time_min_masks.
mask_feature_prob (float, optional, defaults to 0.0) —
Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
masking procedure generates mask_feature_prob*len(feature_axis)/mask_feature_length independent masks over
the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector
span to be masked, mask_feature_prob should be prob_vector_start*mask_feature_length. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is
True.
mask_feature_length (int, optional, defaults to 10) —
Length of vector span along the feature axis.
mask_feature_min_masks (int, optional, defaults to 0) —
The minimum number of masks of length mask_feature_length generated along the feature axis, each time
step, irrespectively of mask_feature_prob. Only relevant if
mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks.
diversity_loss_weight (float, optional, defaults to 0.1) —
The weight of the codebook diversity loss component.
ctc_loss_reduction (str, optional, defaults to "sum") —
Specifies the reduction to apply to the output of torch.nn.CTCLoss. Only relevant when training an
instance of SEWDForCTC.
ctc_zero_infinity (bool, optional, defaults to False) —
Whether to zero infinite losses and the associated gradients of torch.nn.CTCLoss. Infinite losses mainly
occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
of SEWDForCTC.
use_weighted_layer_sum (bool, optional, defaults to False) —
Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
instance of Wav2Vec2ForSequenceClassification.
classifier_proj_size (int, optional, defaults to 256) —
Dimensionality of the projection before token mean-pooling for classification.
This is the configuration class to store the configuration of a SEWDModel. It is used to instantiate a SEW-D
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the SEW-D
asapp/sew-d-tiny-100k architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Example:
from transformers import SEWDConfig, SEWDModel
# Initializing a SEW-D asapp/sew-d-tiny-100k style configuration
configuration = SEWDConfig()
# Initializing a model (with random weights) from the asapp/sew-d-tiny-100k style configuration
model = SEWDModel(configuration)
# Accessing the model configuration
configuration = model.config
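As a rough illustration of the constraints described above, the following hedged sketch builds a configuration with a deliberately shortened convolutional stack (the conv_dim, conv_stride and conv_kernel tuples must have matching lengths) and estimates how many SpecAugment time masks the documented formula predicts. The tuple values and the 1000-frame time axis are arbitrary illustration values, not recommended settings.
from transformers import SEWDConfig
# Illustrative, shortened feature encoder: the three conv_* tuples must all have
# the same length (one entry per 1D convolutional layer).
config = SEWDConfig(
    conv_dim=(64, 128, 256),
    conv_stride=(5, 2, 2),
    conv_kernel=(10, 3, 3),
    apply_spec_augment=True,
    mask_time_prob=0.05,
    mask_time_length=10,
)
# Documented estimate of the number of independent time masks,
# mask_time_prob * len(time_axis) / mask_time_length, for a hypothetical 1000-frame axis.
expected_time_masks = config.mask_time_prob * 1000 / config.mask_time_length
print(expected_time_masks)  # 5.0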
SEWDModel
class transformers.SEWDModel
(
config: SEWDConfig
)
Parameters
config (SEWDConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare SEW-D Model transformer outputting raw hidden-states without any specific head on top.
SEW-D was proposed in Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech
Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger,
Yoav Artzi.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
mask_time_indices: typing.Optional[torch.FloatTensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SEWDConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The SEWDModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, SEWDModel
import torch
from datasets import load_dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
model = SEWDModel.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 292, 384]
SEWDForCTC
class transformers.SEWDForCTC
(
config
target_lang: typing.Optional[str] = None
)
Parameters
config (SEWDConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
SEW-D Model with a language modeling head on top for Connectionist Temporal Classification (CTC).
SEW-D was proposed in Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech
Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger,
Yoav Artzi.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, target_length), optional) —
Labels for connectionist temporal classification. Note that target_length has to be smaller or equal to
the sequence length of the output logits. Indices are selected in [-100, 0, ..., config.vocab_size - 1].
All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SEWDConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The SEWDForCTC forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoProcessor, SEWDForCTC
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
model = SEWDForCTC.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")
# audio file is decoded on the fly
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe speech
transcription = processor.batch_decode(predicted_ids)
transcription[0]
'MISTER QUILTER IS THE APOSTIL OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'
inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids
# compute loss
loss = model(**inputs).loss
round(loss.item(), 2)
0.21
SEWDForSequenceClassification
class transformers.SEWDForSequenceClassification
(
config
)
Parameters
config (SEWDConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
SEW-D Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB
Keyword Spotting.
SEW-D was proposed in Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech
Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger,
Yoav Artzi.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use
it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and
behavior.
forward
(
input_values: typing.Optional[torch.Tensor]
attention_mask: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) —
Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file
into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and
conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Squared loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (SEWDConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The SEWDForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoFeatureExtractor, SEWDForSequenceClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
feature_extractor = AutoFeatureExtractor.from_pretrained("anton-l/sew-d-mid-400k-ft-keyword-spotting")
model = SEWDForSequenceClassification.from_pretrained("anton-l/sew-d-mid-400k-ft-keyword-spotting")
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_ids = torch.argmax(logits, dim=-1).item()
predicted_label = model.config.id2label[predicted_class_ids]
predicted_label
'_unknown_'
# compute loss - target_label is e.g. "down"
target_label = model.config.id2label[0]
inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
loss = model(**inputs).loss
round(loss.item(), 2)
3.16
CTRL
Overview
The CTRL model was proposed in CTRL: A Conditional Transformer Language Model for Controllable Generation by Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong and
Richard Socher. It’s a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus
of ~140 GB of text data with the first token reserved as a control code (such as Links, Books, Wikipedia etc.).
The abstract from the paper is the following:
Large-scale language models show promising text generation capabilities, but users cannot easily control particular
aspects of the generated text. We release CTRL, a 1.63 billion-parameter conditional transformer language model,
trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were
derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning while
providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the
training data are most likely given a sequence. This provides a potential method for analyzing large amounts of data
via model-based source attribution.
Tips:
CTRL makes use of control codes to generate text: it requires generations to be started by certain words, sentences
or links to generate coherent text. Refer to the original implementation for
more information.
CTRL is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than
the left.
CTRL was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next
token in a sequence. Leveraging this feature allows CTRL to generate syntactically coherent text as it can be
observed in the run_generation.py example script.
The PyTorch models can take past_key_values as input, which are the previously computed key/value attention pairs.
TensorFlow models accept past as input. Using the past_key_values value prevents the model from re-computing
pre-computed values in the context of text generation, as illustrated in the sketch below. See the forward
method for more information on the usage of this argument.
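The sketch below is a hedged illustration of the two tips above with CTRLLMHeadModel: the prompt starts with a control code, and the cached past_key_values from a first forward pass are reused so that only the newly generated token has to be fed back in. The prompt text and the single greedy step are illustrative only, not a reference decoding procedure.
import torch
from transformers import AutoTokenizer, CTRLLMHeadModel
tokenizer = AutoTokenizer.from_pretrained("ctrl")
model = CTRLLMHeadModel.from_pretrained("ctrl")
# Start the prompt with one of the control codes (e.g. "Wikipedia", "Books", "Links").
inputs = tokenizer("Wikipedia The llama is", return_tensors="pt")
# First pass: compute logits and cache the key/value states.
with torch.no_grad():
    outputs = model(**inputs, use_cache=True)
past_key_values = outputs.past_key_values
next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
# Later passes only need the new token plus the cached past_key_values.
with torch.no_grad():
    outputs = model(input_ids=next_token, past_key_values=past_key_values, use_cache=True)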
This model was contributed by keskarnitishr. The original code can be found
here.
Documentation resources
Text classification task guide
Causal language modeling task guide
CTRLConfig
class transformers.CTRLConfig
(
vocab_size = 246534
n_positions = 256
n_embd = 1280
dff = 8192
n_layer = 48
n_head = 16
resid_pdrop = 0.1
embd_pdrop = 0.1
layer_norm_epsilon = 1e-06
initializer_range = 0.02
use_cache = True
**kwargs
)
Parameters
vocab_size (int, optional, defaults to 246534) —
Vocabulary size of the CTRL model. Defines the number of different tokens that can be represented by the
inputs_ids passed when calling CTRLModel or TFCTRLModel.
n_positions (int, optional, defaults to 256) —
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
n_embd (int, optional, defaults to 1280) —
Dimensionality of the embeddings and hidden states.
dff (int, optional, defaults to 8192) —
Dimensionality of the inner dimension of the feed forward networks (FFN).
n_layer (int, optional, defaults to 48) —
Number of hidden layers in the Transformer encoder.
n_head (int, optional, defaults to 16) —
Number of attention heads for each attention layer in the Transformer encoder.
resid_pdrop (float, optional, defaults to 0.1) —
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
embd_pdrop (float, optional, defaults to 0.1) —
The dropout ratio for the embeddings.
layer_norm_epsilon (float, optional, defaults to 1e-6) —
The epsilon to use in the layer normalization layers.
initializer_range (float, optional, defaults to 0.02) —
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
use_cache (bool, optional, defaults to True) —
Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a CTRLModel or a TFCTRLModel. It is used to
instantiate a CTRL model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the
ctrl architecture from SalesForce.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the
documentation from PretrainedConfig for more information.
Examples:
from transformers import CTRLConfig, CTRLModel
# Initializing a CTRL configuration
configuration = CTRLConfig()
# Initializing a model (with random weights) from the configuration
model = CTRLModel(configuration)
# Accessing the model configuration
configuration = model.config
CTRLTokenizer
class transformers.CTRLTokenizer
(
vocab_file
merges_file
unk_token = '<unk>'
**kwargs
)
Parameters
vocab_file (str) —
Path to the vocabulary file.
merges_file (str) —
Path to the merges file.
unk_token (str, optional, defaults to "<unk>") —
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
Construct a CTRL tokenizer. Based on Byte-Pair-Encoding.
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
save_vocabulary
(
save_directory: str
filename_prefix: typing.Optional[str] = None
)
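A minimal, hedged usage sketch follows, loading the pretrained files from the Hub instead of passing vocab_file and merges_file by hand; the example sentence and the output directory name are arbitrary.
from transformers import CTRLTokenizer
tokenizer = CTRLTokenizer.from_pretrained("ctrl")
# The tokenizer knows the control codes used as first tokens (e.g. "Opinion", "Wikipedia").
print(sorted(tokenizer.control_codes)[:5])
# Encode a prompt that starts with a control code.
input_ids = tokenizer("Opinion My dog is cute")["input_ids"]
# Write the vocabulary and merges files to a local directory.
tokenizer.save_pretrained("./ctrl-tokenizer")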
CTRLModel
class transformers.CTRLModel
(
config
)
Parameters
config (CTRLConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare CTRL Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
input_ids_length = sequence_length if past_key_values is None else past_key_values[0].shape[-2]
(sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary.
If past_key_values is used, only input IDs that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.FloatTensor]] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input ids as they have already been computed.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CTRLConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if
config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CTRLModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, CTRLModel
import torch
tokenizer = AutoTokenizer.from_pretrained("ctrl")
model = CTRLModel.from_pretrained("ctrl")
# CTRL was trained with control codes as the first token
inputs = tokenizer("Opinion My dog is cute", return_tensors="pt")
assert inputs["input_ids"][0, 0].item() in tokenizer.control_codes.values()
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 5, 1280]
CTRLLMHeadModel
class transformers.CTRLLMHeadModel
(
config
)
Parameters
config (CTRLConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The CTRL Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
input_ids_length = sequence_length if past_key_values is None else past_key_values[0].shape[-2]
(sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary.
If past_key_values is used, only input IDs that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.FloatTensor]] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input ids as they have already been computed.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100
are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
Returns
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CTRLConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head))
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CTRLLMHeadModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
import torch
from transformers import AutoTokenizer, CTRLLMHeadModel
tokenizer = AutoTokenizer.from_pretrained("ctrl")
model = CTRLLMHeadModel.from_pretrained("ctrl")
# CTRL was trained with control codes as the first token
inputs = tokenizer("Wikipedia The llama is", return_tensors="pt")
assert inputs["input_ids"][0, 0].item() in tokenizer.control_codes.values()
sequence_ids = model.generate(inputs["input_ids"])
sequences = tokenizer.batch_decode(sequence_ids)
sequences
['Wikipedia The llama is a member of the family Bovidae. It is native to the Andes of Peru,']
outputs = model(**inputs, labels=inputs["input_ids"])
round(outputs.loss.item(), 2)
9.21
list(outputs.logits.shape)
[1, 5, 246534]
CTRLForSequenceClassification
class transformers.CTRLForSequenceClassification
(
config
)
Parameters
config (CTRLConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The CTRL Model transformer with a sequence classification head on top (linear layer).
CTRLForSequenceClassification uses the last token in order to do the classification, as other causal models
(e.g. GPT-2) do. Since it does classification on the last token, it needs to know the position of the last
token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in
each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot
guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last
value in each row of the batch).
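As a hedged sketch of this idea (an illustration, not the model's actual internal code), the position of the last non-padding token can be recovered from input_ids when a pad_token_id is defined; the batch and the pad id 0 below are made up for illustration.
import torch
pad_token_id = 0  # illustrative value
input_ids = torch.tensor([[12, 54, 99, 0, 0],
                          [7, 3, 11, 42, 8]])
# Index of the last non-padding token per row; this is the token whose hidden state
# the classification head scores. Without a pad_token_id, the last column is used.
last_token_positions = (input_ids != pad_token_id).long().sum(dim=-1) - 1
print(last_token_positions)  # tensor([2, 4])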
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a PyTorch torch.nn.Module subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
forward
(
input_ids: typing.Optional[torch.LongTensor] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None
attention_mask: typing.Optional[torch.FloatTensor] = None
token_type_ids: typing.Optional[torch.LongTensor] = None
position_ids: typing.Optional[torch.LongTensor] = None
head_mask: typing.Optional[torch.FloatTensor] = None
inputs_embeds: typing.Optional[torch.FloatTensor] = None
labels: typing.Optional[torch.LongTensor] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
)
→
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) —
input_ids_length = sequence_length if past_key_values is None else past_key_values[0].shape[-2]
(sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary.
If past_key_values is used, only input IDs that do not have their past calculated should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
past_key_values (Tuple[Tuple[torch.FloatTensor]] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see
past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have
their past given to this model should not be passed as input ids as they have already been computed.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past_key_values key value states are returned and can be used to speed up decoding (see
past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple.
labels (torch.LongTensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Squared loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (CTRLConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The CTRLForSequenceClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
import torch
from transformers import AutoTokenizer, CTRLForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("ctrl")
model = CTRLForSequenceClassification.from_pretrained("ctrl")
# CTRL was trained with control codes as the first token
inputs = tokenizer("Opinion My dog is cute", return_tensors="pt")
assert inputs["input_ids"][0, 0].item() in tokenizer.control_codes.values()
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
'LABEL_0'
import torch
torch.manual_seed(42)
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = CTRLForSequenceClassification.from_pretrained("ctrl", num_labels=num_labels)
labels = torch.tensor(1)
loss = model(**inputs, labels=labels).loss
round(loss.item(), 2)
0.35
Example of multi-label classification:
import torch
from transformers import AutoTokenizer, CTRLForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("ctrl")
model = CTRLForSequenceClassification.from_pretrained("ctrl", problem_type="multi_label_classification")
# CTRL was trained with control codes as the first token
inputs = tokenizer("Opinion My dog is cute", return_tensors="pt")
assert inputs["input_ids"][0, 0].item() in tokenizer.control_codes.values()
with torch.no_grad():
... logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
'LABEL_0'
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = CTRLForSequenceClassification.from_pretrained("ctrl", num_labels=num_labels, problem_type="multi_label_classification")
labels = torch.nn.functional.one_hot(torch.tensor([predicted_class_id]), num_classes=num_labels).to(
... torch.float
... )
loss = model(**inputs, labels=labels).loss
loss.backward()
TFCTRLModel
class transformers.TFCTRLModel
(
*args
**kwargs
)
Parameters
config (CTRLConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The bare CTRL Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.).
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing then you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
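For example (a hedged sketch; the prompt is arbitrary), the same encoded input can be passed either as keyword arguments, as a single tensor, or packed into a dict in the first positional argument:
from transformers import AutoTokenizer, TFCTRLModel
tokenizer = AutoTokenizer.from_pretrained("ctrl")
model = TFCTRLModel.from_pretrained("ctrl")
encoded = tokenizer("Links Hello, my dog is cute", return_tensors="tf")
# 1) all inputs as keyword arguments
outputs = model(input_ids=encoded["input_ids"])
# 2) a single tensor in the first positional argument
outputs = model(encoded["input_ids"])
# 3) a dict in the first positional argument
outputs = model({"input_ids": encoded["input_ids"]})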
call
(
input_ids: TFModelInputType | None = None
past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None
attention_mask: np.ndarray | tf.Tensor | None = None
token_type_ids: np.ndarray | tf.Tensor | None = None
position_ids: np.ndarray | tf.Tensor | None = None
head_mask: np.ndarray | tf.Tensor | None = None
inputs_embeds: np.ndarray | tf.Tensor | None = None
use_cache: Optional[bool] = None
output_attentions: Optional[bool] = None
output_hidden_states: Optional[bool] = None
return_dict: Optional[bool] = None
training: Optional[bool] = False
)
→
transformers.modeling_tf_outputs.TFBaseModelOutputWithPast or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past is None else past[0].shape[-2] (sequence_length of
input past key value states).
Indices of input sequence tokens in the vocabulary.
If past is used, only input IDs that do not have their past calculated should be passed as input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.__call__() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
past (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see
past output below). Can be used to speed up sequential decoding. The token ids which have their past
given to this model should not be passed as input ids as they have already been computed.
attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past key value states are returned and can be used to speed up decoding (see past).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode, in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
Returns
transformers.modeling_tf_outputs.TFBaseModelOutputWithPast or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPast or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (CTRLConfig) and inputs.
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFCTRLModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFCTRLModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("ctrl")
model = TFCTRLModel.from_pretrained("ctrl")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
last_hidden_states = outputs.last_hidden_state
TFCTRLLMHeadModel
class transformers.TFCTRLLMHeadModel
( *args, **kwargs )
Parameters
config (CTRLConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The CTRL Model transformer with a language modeling head on top (linear layer with weights tied to the input
embeddings).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
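The three formats can be seen side by side in a minimal sketch. It reuses the "ctrl" checkpoint and example sentence from the examples on this page; the variable names are illustrative only:
from transformers import AutoTokenizer, TFCTRLLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("ctrl")
model = TFCTRLLMHeadModel.from_pretrained("ctrl")

enc = tokenizer("Hello, my dog is cute", return_tensors="tf")
input_ids, attention_mask = enc["input_ids"], enc["attention_mask"]

# 1. a single tensor with input_ids only
out_single = model(input_ids)

# 2. a list with input tensors in the order given in the docstring
out_list = model([input_ids, attention_mask])

# 3. a dictionary keyed by the input names from the docstring
out_dict = model({"input_ids": input_ids, "attention_mask": attention_mask})
All three calls are equivalent here; the dict form is the one Keras itself uses when passing inputs between layers.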
call(
    input_ids: TFModelInputType | None = None,
    past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    position_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    labels: np.ndarray | tf.Tensor | None = None,
    training: Optional[bool] = False,
) → transformers.modeling_tf_outputs.TFCausalLMOutputWithPast or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else past_key_values[0].shape[-2]
(sequence_length of input past key value states).
Indices of input sequence tokens in the vocabulary.
If past_key_values is used, only input IDs that do not yet have their past computed should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
past_key_values (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden states (keys and values in the attention blocks) as computed by the model (see
the past_key_values output below). Can be used to speed up sequential decoding. Token ids whose past is
given to this model should not be passed as input_ids, as they have already been computed.
attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can only be used in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size, sequence_length), optional) —
Labels for computing the cross entropy language modeling loss. Indices should be in [0, ..., config.vocab_size - 1].
Returns
transformers.modeling_tf_outputs.TFCausalLMOutputWithPast or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFCausalLMOutputWithPast or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (CTRLConfig) and inputs.
loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFCTRLLMHeadModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFCTRLLMHeadModel
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("ctrl")
model = TFCTRLLMHeadModel.from_pretrained("ctrl")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
logits = outputs.logits
TFCTRLForSequenceClassification
class transformers.TFCTRLForSequenceClassification
( *args, **kwargs )
Parameters
config (CTRLConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the
configuration. Check out the from_pretrained() method to load the model weights.
The CTRL Model transformer with a sequence classification head on top (linear layer).
TFCTRLForSequenceClassification uses the last token in order to do the classification, as other causal models
(e.g. GPT-1, GPT-2) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If
no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when inputs_embeds are passed instead of input_ids, it does the same (takes the last value in
each row of the batch).
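As an illustration of this last-token selection, here is a minimal TensorFlow sketch (not the model's internal code) that assumes a pad_token_id of 0 and toy token ids:
import tensorflow as tf

# Toy token ids; 0 plays the role of pad_token_id in this sketch.
pad_token_id = 0
input_ids = tf.constant([[5, 7, 9, 0, 0],
                         [3, 4, 6, 8, 2]])

# Index of the last non-padding token in each row: count non-pad tokens and subtract one.
sequence_lengths = tf.reduce_sum(
    tf.cast(tf.not_equal(input_ids, pad_token_id), tf.int32), axis=-1
) - 1  # -> [2, 4]

# The classification head reads the hidden state at these positions; here we just
# gather the token ids themselves to show which positions get picked.
last_tokens = tf.gather(input_ids, sequence_lengths, batch_dims=1)  # -> [9, 2]
If no pad_token_id were defined, the index would simply be sequence_length - 1 for every row.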
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
etc.)
This model is also a tf.keras.Model subclass. Use it
as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and
behavior.
TensorFlow models and layers in transformers accept two formats as input:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models
and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just
pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second
format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with
the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first
positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with
subclassing, you don’t need to worry
about any of this, as you can just pass inputs like you would to any other Python function!
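Because the dict format is also what Keras passes around internally, the model can be trained directly with model.fit(). The following is only a hedged sketch: a single un-padded example with a made-up label, an arbitrary optimizer choice, and it relies on the fact that, on recent Transformers versions, compiling without an explicit loss falls back to the model's internal loss computation:
import tensorflow as tf
from transformers import AutoTokenizer, TFCTRLForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ctrl")
model = TFCTRLForSequenceClassification.from_pretrained("ctrl", num_labels=2)

# One un-padded example and a hypothetical label, purely for illustration.
features = dict(tokenizer("Hello, my dog is cute", return_tensors="np"))
labels = [1]
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(1)

# No loss passed to compile(): the model's internal loss computation is used.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5))
model.fit(dataset, epochs=1)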
call(
    input_ids: TFModelInputType | None = None,
    past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None,
    attention_mask: np.ndarray | tf.Tensor | None = None,
    token_type_ids: np.ndarray | tf.Tensor | None = None,
    position_ids: np.ndarray | tf.Tensor | None = None,
    head_mask: np.ndarray | tf.Tensor | None = None,
    inputs_embeds: np.ndarray | tf.Tensor | None = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    labels: np.ndarray | tf.Tensor | None = None,
    training: Optional[bool] = False,
) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
Parameters
input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) —
input_ids_length = sequence_length if past_key_values is None else past_key_values[0].shape[-2]
(sequence_length of input past key value states).
Indices of input sequence tokens in the vocabulary.
If past_key_values is used, only input IDs that do not yet have their past computed should be passed as
input_ids.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and
PreTrainedTokenizer.encode() for details.
What are input IDs?
past_key_values (List[tf.Tensor] of length config.n_layers) —
Contains pre-computed hidden states (keys and values in the attention blocks) as computed by the model (see
the past_key_values output below). Can be used to speed up sequential decoding. Token ids whose past is
given to this model should not be passed as input_ids, as they have already been computed.
attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
1 for tokens that are not masked,
0 for tokens that are masked.
What are attention masks?
token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
What are token type IDs?
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) —
Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional) —
Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional) —
Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This
is useful if you want more control over how to convert input_ids indices into associated vectors than the
model’s internal embedding lookup matrix.
use_cache (bool, optional) —
If set to True, past key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions under returned
tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the
config will be used instead.
output_hidden_states (bool, optional) —
Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for
more detail. This argument can only be used in eager mode; in graph mode the value in the config will be
used instead.
return_dict (bool, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in
eager mode; in graph mode the value will always be set to True.
training (bool, optional, defaults to False) —
Whether or not to use the model in training mode (some modules like dropout modules have different
behaviors between training and evaluation).
labels (tf.Tensor of shape (batch_size,), optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1].
If config.num_labels == 1 a regression loss is computed (mean-squared loss); if config.num_labels > 1 a
classification loss (cross-entropy) is computed.
Returns
transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor)
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if
return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the
configuration (CTRLConfig) and inputs.
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape
(batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention
heads.
The TFCTRLForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while
the latter silently ignores them.
Example:
from transformers import AutoTokenizer, TFCTRLForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("ctrl")
model = TFCTRLForSequenceClassification.from_pretrained("ctrl")
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
logits = model(**inputs).logits
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
# To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
num_labels = len(model.config.id2label)
model = TFCTRLForSequenceClassification.from_pretrained("ctrl", num_labels=num_labels)
labels = tf.constant(1)
loss = model(**inputs, labels=labels).loss