Transformers documentation
InstructBlipVideo
This model was released on 2023-05-11 and added to Hugging Face Transformers on 2024-06-25.
InstructBlipVideo
Overview
InstructBLIPVideo is an extension of the models proposed in InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning by Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. InstructBLIPVideo uses the same architecture and works with the same checkpoints as InstructBLIP; the only difference is its ability to process videos.
The abstract from the paper is the following:
General-purpose language models that can solve various language-domain tasks have emerged driven by the pre-training and instruction-tuning pipeline. However, building general-purpose vision-language models is challenging due to the increased task discrepancy introduced by the additional visual input. Although vision-language pre-training has been widely studied, vision-language instruction tuning remains relatively less explored. In this paper, we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pre-trained BLIP-2 models. We gather a wide variety of 26 publicly available datasets, transform them into instruction tuning format and categorize them into two clusters for held-in instruction tuning and held-out zero-shot evaluation. Additionally, we introduce instruction-aware visual feature extraction, a crucial method that enables the model to extract informative features tailored to the given instruction. The resulting InstructBLIP models achieve state-of-the-art zero-shot performance across all 13 held-out datasets, substantially outperforming BLIP-2 and the larger Flamingo. Our models also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA IMG). Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over concurrent multimodal models.
InstructBLIPVideo architecture. Taken from the original paper. This model was contributed by RaushanTurganbay. The original code can be found here.
Usage tips
- The model was trained by sampling 4 frames per video, so it’s recommended to sample 4 frames per video at inference time as well.
- BLIP models after release v4.46 will raise warnings about adding processor.num_query_tokens = {{num_query_tokens}} and expanding the model embeddings layer to add the special <image> token. It is strongly recommended to add these attributes to the processor if you own the model checkpoint, or to open a PR if you do not own it. Adding these attributes means that BLIP will add the number of query tokens required per image and expand the text with as many <image> placeholders as there are query tokens. This is usually around 500 tokens per image, so make sure the text is not truncated, as otherwise merging the embeddings will fail. The attributes can be obtained from the model config as model.config.num_query_tokens, and the model embeddings can be expanded by following this link. A minimal sketch of these updates is shown below.
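The snippet below is a hedged sketch of these updates; the checkpoint name is only a placeholder, and the exact special-token string and resizing options may differ from the official conversion steps linked above:

>>> from transformers import InstructBlipVideoProcessor, InstructBlipVideoForConditionalGeneration

>>> checkpoint = "Salesforce/instructblip-vicuna-7b"  # placeholder: use your own checkpoint
>>> processor = InstructBlipVideoProcessor.from_pretrained(checkpoint)
>>> model = InstructBlipVideoForConditionalGeneration.from_pretrained(checkpoint)

>>> # copy the number of query tokens from the model config onto the processor
>>> processor.num_query_tokens = model.config.num_query_tokens

>>> # add the special placeholder token (string as described in the note above) and resize the embeddings
>>> processor.tokenizer.add_special_tokens({"additional_special_tokens": ["<image>"]})
>>> model.resize_token_embeddings(len(processor.tokenizer), pad_to_multiple_of=64)

>>> # if your config tracks the placeholder token id (see video_token_index below), update it as well
>>> model.config.video_token_index = processor.tokenizer.convert_tokens_to_ids("<image>")

>>> # processor.save_pretrained(...) / model.save_pretrained(...) to persist the updated checkpoint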
InstructBlipVideoConfig
class transformers.InstructBlipVideoConfig
< source >( vision_config = None qformer_config = None text_config = None num_query_tokens = 32 video_token_index = None **kwargs )
Parameters
- vision_config (
dict, optional) — Dictionary of configuration options used to initialize InstructBlipVideoVisionConfig. - qformer_config (
dict, optional) — Dictionary of configuration options used to initialize InstructBlipVideoQFormerConfig. - text_config (
dict, optional) — Dictionary of configuration options used to initialize any PreTrainedConfig. - num_query_tokens (
int, optional, defaults to 32) — The number of query tokens passed through the Transformer. - video_token_index (
int, optional) — Token index of special video token. - kwargs (optional) — Dictionary of keyword arguments.
InstructBlipVideoConfig is the configuration class to store the configuration of an InstructBlipVideoForConditionalGeneration. It is used to instantiate an InstructBlipVideo model according to the specified arguments, defining the vision model, Q-Former model and language model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the InstructBlipVideo Salesforce/instruct-blip-flan-t5 architecture.
Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.
Example:
>>> from transformers import (
... InstructBlipVideoVisionConfig,
... InstructBlipVideoQFormerConfig,
... OPTConfig,
... InstructBlipVideoConfig,
... InstructBlipVideoForConditionalGeneration,
... )
>>> # Initializing a InstructBlipVideoConfig with Salesforce/instruct-blip-flan-t5 style configuration
>>> configuration = InstructBlipVideoConfig()
>>> # Initializing a InstructBlipVideoForConditionalGeneration (with random weights) from the Salesforce/instruct-blip-flan-t5 style configuration
>>> model = InstructBlipVideoForConditionalGeneration(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
>>> # We can also initialize a InstructBlipVideoConfig from a InstructBlipVideoVisionConfig, InstructBlipVideoQFormerConfig and any PreTrainedConfig
>>> # Initializing Instructblipvideo vision, Instructblipvideo Q-Former and language model configurations
>>> vision_config = InstructBlipVideoVisionConfig()
>>> qformer_config = InstructBlipVideoQFormerConfig()
>>> text_config = OPTConfig()
>>> config = InstructBlipVideoConfig(vision_config=vision_config, qformer_config=qformer_config, text_config=text_config)
InstructBlipVideoVisionConfig
class transformers.InstructBlipVideoVisionConfig
< source >( hidden_size = 1408 intermediate_size = 6144 num_hidden_layers = 39 num_attention_heads = 16 image_size = 224 patch_size = 14 hidden_act = 'gelu' layer_norm_eps = 1e-06 attention_dropout = 0.0 initializer_range = 1e-10 qkv_bias = True **kwargs )
Parameters
- hidden_size (
int, optional, defaults to 1408) — Dimensionality of the encoder layers and the pooler layer. - intermediate_size (
int, optional, defaults to 6144) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. - num_hidden_layers (
int, optional, defaults to 39) — Number of hidden layers in the Transformer encoder. - num_attention_heads (
int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder. - image_size (
int, optional, defaults to 224) — The size (resolution) of each image. - patch_size (
int, optional, defaults to 14) — The size (resolution) of each patch. - hidden_act (
str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. - layer_norm_eps (
float, optional, defaults to 1e-06) — The epsilon used by the layer normalization layers. - attention_dropout (
float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. - initializer_range (
float, optional, defaults to 1e-10) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - qkv_bias (
bool, optional, defaults toTrue) — Whether to add a bias to the queries and values in the self-attention layers.
This is the configuration class to store the configuration of an InstructBlipVideoVisionModel. It is used to instantiate an InstructBlipVideo vision encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the InstructBlipVideo Salesforce/instruct-blip-flan-t5 architecture.
Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.
Example:
>>> from transformers import InstructBlipVideoVisionConfig, InstructBlipVideoVisionModel
>>> # Initializing a InstructBlipVideoVisionConfig with Salesforce/instruct-blip-flan-t5 style configuration
>>> configuration = InstructBlipVideoVisionConfig()
>>> # Initializing a InstructBlipVideoVisionModel (with random weights) from the Salesforce/instruct-blip-flan-t5 style configuration
>>> model = InstructBlipVideoVisionModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
InstructBlipVideoQFormerConfig
class transformers.InstructBlipVideoQFormerConfig
< source >( vocab_size = 30522 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 initializer_range = 0.02 layer_norm_eps = 1e-12 pad_token_id = 0 cross_attention_frequency = 2 encoder_hidden_size = 1408 **kwargs )
Parameters
- vocab_size (
int, optional, defaults to 30522) — Vocabulary size of the Q-Former model. Defines the number of different tokens that can be represented by theinputs_idspassed when calling the model. - hidden_size (
int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. - num_hidden_layers (
int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. - num_attention_heads (
int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. - intermediate_size (
int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder. - hidden_act (
strorCallable, optional, defaults to"gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string,"gelu","relu","silu"and"gelu_new"are supported. - hidden_dropout_prob (
float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - attention_probs_dropout_prob (
float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. - max_position_embeddings (
int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). - initializer_range (
float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - layer_norm_eps (
float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. - pad_token_id (
int, optional, defaults to 0) — Token id used for padding sequences. - cross_attention_frequency (
int, optional, defaults to 2) — The frequency of adding cross-attention to the Transformer layers. - encoder_hidden_size (
int, optional, defaults to 1408) — The hidden size of the hidden states for cross-attention.
This is the configuration class to store the configuration of an InstructBlipVideoQFormerModel. It is used to instantiate an InstructBlipVideo Querying Transformer (Q-Former) model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the InstructBlipVideo Salesforce/instruct-blip-flan-t5 architecture. Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.
Note that InstructBlipVideoQFormerModel is very similar to BertLMHeadModel with interleaved cross-attention.
Examples:
>>> from transformers import InstructBlipVideoQFormerConfig, InstructBlipVideoQFormerModel
>>> # Initializing a InstructBlipVideo Salesforce/instruct-blip-flan-t5 style configuration
>>> configuration = InstructBlipVideoQFormerConfig()
>>> # Initializing a model (with random weights) from the Salesforce/instruct-blip-flan-t5 style configuration
>>> model = InstructBlipVideoQFormerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
InstructBlipVideoProcessor
class transformers.InstructBlipVideoProcessor
< source >( video_processor tokenizer qformer_tokenizer num_query_tokens = None **kwargs )
Parameters
- video_processor (
InstructBlipVideoVideoProcessor) — An instance of InstructBlipVideoVideoProcessor. The video processor is a required input. - tokenizer (
AutoTokenizer) — An instance of PreTrainedTokenizer. The tokenizer is a required input. - qformer_tokenizer (
AutoTokenizer) — An instance of PreTrainedTokenizer. The Q-Former tokenizer is a required input. - num_query_tokens (
int, optional) — Number of tokens used by the Q-Former as queries; should be the same as in the model’s config.
Constructs an InstructBLIPVideo processor which wraps an InstructBlipVideo video processor and a LLaMa/T5 tokenizer into a single processor.
InstructBlipVideoProcessor offers all the functionalities of InstructBlipVideoVideoProcessor and AutoTokenizer. See the
docstring of __call__() and decode() for more information.
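A minimal usage sketch (assuming the Salesforce/instructblip-vicuna-7b checkpoint and a dummy 4-frame clip; it follows the same calling convention as the generation example further below):

>>> import numpy as np
>>> from transformers import InstructBlipVideoProcessor

>>> processor = InstructBlipVideoProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")

>>> # dummy clip of 4 RGB frames with shape (num_frames, height, width, channels)
>>> clip = np.zeros((4, 224, 224, 3), dtype=np.uint8)

>>> inputs = processor(text="What is happening in the video?", images=clip, return_tensors="pt")
>>> sorted(inputs.keys())  # pixel values plus tokenized text for the language model and the Q-Former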
InstructBlipVideoVideoProcessor
class transformers.InstructBlipVideoVideoProcessor
< source >( **kwargs: typing_extensions.Unpack[transformers.processing_utils.VideosKwargs] )
preprocess
< source >( videos: typing.Union[list['PIL.Image.Image'], numpy.ndarray, ForwardRef('torch.Tensor'), list[numpy.ndarray], list['torch.Tensor'], list[list['PIL.Image.Image']], list[list[numpy.ndarray]], list[list['torch.Tensor']], transformers.video_utils.URL, list[transformers.video_utils.URL], list[list[transformers.video_utils.URL]], transformers.video_utils.Path, list[transformers.video_utils.Path], list[list[transformers.video_utils.Path]]] **kwargs: typing_extensions.Unpack[transformers.processing_utils.VideosKwargs] )
Parameters
- do_resize (
bool, optional, defaults toself.do_resize) — Whether to resize the video’s (height, width) dimensions to the specifiedsize. Can be overridden by thedo_resizeparameter in thepreprocessmethod. - size (
dict, optional, defaults toself.size) — Size of the output video after resizing. Can be overridden by thesizeparameter in thepreprocessmethod. - size_divisor (
int, optional, defaults toself.size_divisor) — The size by which to make sure both the height and width can be divided. - default_to_square (
bool, optional, defaults toself.default_to_square) — Whether to default to a square video when resizing, if size is an int. - resample (
PILImageResampling, optional, defaults toself.resample) — Resampling filter to use if resizing the video. Only has an effect ifdo_resizeis set toTrue. Can be overridden by theresampleparameter in thepreprocessmethod. - do_center_crop (
bool, optional, defaults toself.do_center_crop) — Whether to center crop the video to the specifiedcrop_size. Can be overridden bydo_center_cropin thepreprocessmethod. - crop_size (
dict[str, int]optional, defaults toself.crop_size) — Size of the output video after applyingcenter_crop. Can be overridden bycrop_sizein thepreprocessmethod. - do_rescale (
bool, optional, defaults toself.do_rescale) — Whether to rescale the video by the specified scalerescale_factor. Can be overridden by thedo_rescaleparameter in thepreprocessmethod. - rescale_factor (
intorfloat, optional, defaults toself.rescale_factor) — Scale factor to use if rescaling the video. Only has an effect ifdo_rescaleis set toTrue. Can be overridden by therescale_factorparameter in thepreprocessmethod. - do_normalize (
bool, optional, defaults to self.do_normalize) — Whether to normalize the video. Can be overridden by the do_normalize parameter in the preprocess method. - image_mean (
float or list[float], optional, defaults to self.image_mean) — Mean to use if normalizing the video. This is a float or list of floats the length of the number of channels in the video. Can be overridden by the image_mean parameter in the preprocess method. - image_std (
float or list[float], optional, defaults to self.image_std) — Standard deviation to use if normalizing the video. This is a float or list of floats the length of the number of channels in the video. Can be overridden by the image_std parameter in the preprocess method. - do_convert_rgb (
bool, optional, defaults to self.do_convert_rgb) — Whether to convert the video to RGB. - video_metadata (
VideoMetadata, optional) — Metadata of the video containing information about total duration, fps and total number of frames. - do_sample_frames (
bool, optional, defaults to self.do_sample_frames) — Whether to sample frames from the video before processing or to process the whole video. - num_frames (
int, optional, defaults toself.num_frames) — Maximum number of frames to sample whendo_sample_frames=True. - fps (
intorfloat, optional, defaults toself.fps) — Target frames to sample per second whendo_sample_frames=True. - return_tensors (
str or TensorType, optional) — Returns stacked tensors if set to pt, otherwise returns a list of tensors. - data_format (
ChannelDimensionorstr, optional, defaults toChannelDimension.FIRST) — The channel dimension format for the output video. Can be one of:"channels_first"orChannelDimension.FIRST: video in (num_channels, height, width) format."channels_last"orChannelDimension.LAST: video in (height, width, num_channels) format.- Unset: Use the channel dimension format of the input video.
- input_data_format (
ChannelDimensionorstr, optional) — The channel dimension format for the input video. If unset, the channel dimension format is inferred from the input video. Can be one of:"channels_first"orChannelDimension.FIRST: video in (num_channels, height, width) format."channels_last"orChannelDimension.LAST: video in (height, width, num_channels) format."none"orChannelDimension.NONE: video in (height, width) format.
- device (
torch.device, optional) — The device to process the videos on. If unset, the device is inferred from the input videos. - return_metadata (
bool, optional) — Whether to return video metadata or not.
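As a rough sketch of frame sampling with the video processor (instantiated here from its class defaults; the dummy array and sampling arguments are illustrative, and for a real checkpoint you would normally load the processor with from_pretrained):

>>> import numpy as np
>>> from transformers import InstructBlipVideoVideoProcessor

>>> video_processor = InstructBlipVideoVideoProcessor()

>>> # dummy 16-frame video; sample 4 frames before resizing and normalizing
>>> video = np.zeros((16, 360, 640, 3), dtype=np.uint8)
>>> processed = video_processor(videos=video, do_sample_frames=True, num_frames=4, return_tensors="pt")
>>> list(processed.keys())  # inspect the returned pixel values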
InstructBlipVideoVisionModel
class transformers.InstructBlipVideoVisionModel
< source >( config: InstructBlipVideoVisionConfig )
forward
< source >( pixel_values: typing.Optional[torch.FloatTensor] = None interpolate_pos_encoding: bool = False **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
Parameters
- pixel_values (
torch.FloatTensor of shape (batch_size, num_channels, image_size, image_size), optional) — The tensors corresponding to the input images. Pixel values can be obtained using InstructBlipVideoProcessor. See InstructBlipVideoProcessor.__call__() for details. - interpolate_pos_encoding (
bool, defaults toFalse) — Whether to interpolate the pre-trained position encodings.
Returns
transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (InstructBlipVideoConfig) and inputs.
-
last_hidden_state (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. -
pooler_output (
torch.FloatTensorof shape(batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. -
hidden_states (
tuple(torch.FloatTensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length).Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The InstructBlipVideoVisionModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
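A minimal sketch running a randomly initialized vision encoder on dummy pixel values (shapes follow the config defaults documented above):

>>> import torch
>>> from transformers import InstructBlipVideoVisionConfig, InstructBlipVideoVisionModel

>>> config = InstructBlipVideoVisionConfig()
>>> model = InstructBlipVideoVisionModel(config)

>>> # dummy batch of one 224x224 RGB frame (a video is processed as a batch of frames)
>>> pixel_values = torch.randn(1, 3, config.image_size, config.image_size)
>>> with torch.no_grad():
...     outputs = model(pixel_values=pixel_values)
>>> outputs.last_hidden_state.shape  # (batch_size, num_patches + 1, hidden_size)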
InstructBlipVideoQFormerModel
class transformers.InstructBlipVideoQFormerModel
< source >( config: InstructBlipVideoQFormerConfig )
Querying Transformer (Q-Former), used in InstructBlipVideo. Slightly modified from BLIP-2 as it also takes the instruction as input.
forward
< source >( input_ids: LongTensor attention_mask: typing.Optional[torch.FloatTensor] = None position_ids: typing.Optional[torch.LongTensor] = None query_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.FloatTensor] = None encoder_attention_mask: typing.Optional[torch.FloatTensor] = None **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
- input_ids (
torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- attention_mask (
torch.FloatTensorof shape(batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
- position_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range[0, config.n_positions - 1]. - query_embeds (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size)) — Hidden states to be used in the attention computation. If cross-attention, will be used for the query (i.e., key and value will use the encoder_hidden_states). - encoder_hidden_states (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. - encoder_attention_mask (
torch.FloatTensorof shape(batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in[0, 1]:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (InstructBlipVideoConfig) and inputs.
-
last_hidden_state (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. -
pooler_output (
torch.FloatTensorof shape(batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. -
hidden_states (
tuple(torch.FloatTensor), optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
-
attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length).Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
-
cross_attentions (
tuple(torch.FloatTensor), optional, returned whenoutput_attentions=Trueandconfig.add_cross_attention=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length).Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
-
past_key_values (
Cache, optional, returned whenuse_cache=Trueis passed or whenconfig.use_cache=True) — It is a Cache instance. For more details, see our kv cache guide.Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if
config.is_encoder_decoder=Truein the cross-attention blocks) that can be used (seepast_key_valuesinput) to speed up sequential decoding.
The InstructBlipVideoQFormerModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
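A minimal sketch of a standalone forward pass with random weights; the dummy instruction tokens, query placeholders and image features are illustrative (in the full model, the query embeddings come from the learned query tokens and the image features from the vision encoder):

>>> import torch
>>> from transformers import InstructBlipVideoQFormerConfig, InstructBlipVideoQFormerModel

>>> config = InstructBlipVideoQFormerConfig()
>>> qformer = InstructBlipVideoQFormerModel(config)

>>> batch_size, num_query_tokens, seq_len = 1, 32, 8
>>> input_ids = torch.randint(0, config.vocab_size, (batch_size, seq_len))  # dummy instruction tokens
>>> query_embeds = torch.randn(batch_size, num_query_tokens, config.hidden_size)  # stand-in for learned queries
>>> image_embeds = torch.randn(batch_size, 257, config.encoder_hidden_size)  # stand-in for vision features

>>> with torch.no_grad():
...     outputs = qformer(
...         input_ids=input_ids,
...         query_embeds=query_embeds,
...         encoder_hidden_states=image_embeds,
...     )
>>> outputs.last_hidden_state.shape  # (batch_size, num_query_tokens + seq_len, hidden_size)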
InstructBlipVideoModel
class transformers.InstructBlipVideoModel
< source >( config: InstructBlipVideoConfig )
Parameters
- config (InstructBlipVideoConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
InstructBlipVideo base Model consisting of language model, qformer and vision encoder.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( pixel_values: FloatTensor qformer_input_ids: FloatTensor qformer_attention_mask: typing.Optional[torch.LongTensor] = None input_ids: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.LongTensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None interpolate_pos_encoding: bool = False use_cache: typing.Optional[bool] = None **kwargs: typing_extensions.Unpack[transformers.modeling_flash_attention_utils.FlashAttentionKwargs] ) → transformers.models.instructblipvideo.modeling_instructblipvideo.InstructBlipVideoForConditionalGenerationModelOutput or tuple(torch.FloatTensor)
Parameters
- pixel_values (
torch.FloatTensor of shape (batch_size, num_channels, image_size, image_size)) — The tensors corresponding to the input images. Pixel values can be obtained using InstructBlipVideoProcessor. See InstructBlipVideoProcessor.__call__() for details. - qformer_input_ids (
torch.LongTensorof shape(batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary of the Q-Former. Input tokens can optionally be provided to serve as text prompt, which the Q-Former model will encode.Indices can be obtained using InstructBlipVideoProcessor. See
InstructBlipVideoProcessor.__call__()for details. - qformer_attention_mask (
torch.Tensorof shape(batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
- input_ids (
torch.FloatTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- attention_mask (
torch.LongTensorof shape(batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
- decoder_input_ids (
torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- decoder_attention_mask (
torch.BoolTensorof shape(batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens indecoder_input_ids. Causal mask will also be used by default.Only relevant in case an encoder-decoder language model (like T5) is used.
- inputs_embeds (
torch.Tensorof shape(batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passinginput_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_idsindices into associated vectors than the model’s internal embedding lookup matrix. - output_attentions (
bool, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentionsunder returned tensors for more detail. - output_hidden_states (
bool, optional) — Whether or not to return the hidden states of all layers. Seehidden_statesunder returned tensors for more detail. - return_dict (
bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. - interpolate_pos_encoding (
bool, defaults toFalse) — Whether to interpolate the pre-trained position encodings. - use_cache (
bool, optional) — If set toTrue,past_key_valueskey value states are returned and can be used to speed up decoding (seepast_key_values).
Returns
transformers.models.instructblipvideo.modeling_instructblipvideo.InstructBlipVideoForConditionalGenerationModelOutput or tuple(torch.FloatTensor)
A transformers.models.instructblipvideo.modeling_instructblipvideo.InstructBlipVideoForConditionalGenerationModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (InstructBlipVideoConfig) and inputs.
- loss (
torch.FloatTensor, optional, returned whenlabelsis provided,torch.FloatTensorof shape(1,)) — Language modeling loss from the language model. - logits (
torch.FloatTensorof shape(batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head of the language model. - vision_outputs (
torch.FloatTensor, optional, defaults toNone) — Outputs of the vision encoder. - qformer_outputs (
tuple[torch.FloatTensor], optional, defaults toNone) — Outputs of the Q-Former (Querying Transformer). - language_model_outputs (
CausalLMOutputWithPastorSeq2SeqLMOutput) — Outputs of the language model.
The InstructBlipVideoModel forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
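A minimal sketch of the base model forward pass, which returns intermediate outputs rather than generated text (the checkpoint name and dummy clip are illustrative; loading downloads the full pretrained weights):

>>> import numpy as np
>>> import torch
>>> from transformers import InstructBlipVideoProcessor, InstructBlipVideoModel

>>> processor = InstructBlipVideoProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
>>> model = InstructBlipVideoModel.from_pretrained("Salesforce/instructblip-vicuna-7b", device_map="auto")

>>> clip = np.zeros((4, 224, 224, 3), dtype=np.uint8)  # dummy 4-frame video
>>> inputs = processor(text="Describe the video", images=clip, return_tensors="pt").to(model.device)

>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> # outputs bundles the vision encoder, Q-Former and language model outputs (see the return description above)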
InstructBlipVideoForConditionalGeneration
class transformers.InstructBlipVideoForConditionalGeneration
< source >( config: InstructBlipVideoConfig )
Parameters
- config (InstructBlipVideoConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
InstructBlipVideo Model for generating text given an image and an optional text prompt. The model consists of a vision encoder, Querying Transformer (Q-Former) and a language model.
One can optionally pass input_ids to the model, which serve as a text prompt, to make the language model continue
the prompt. Otherwise, the language model starts generating text from the [BOS] (beginning-of-sequence) token.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( pixel_values: FloatTensor qformer_input_ids: FloatTensor qformer_attention_mask: typing.Optional[torch.LongTensor] = None input_ids: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.LongTensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None labels: typing.Optional[torch.LongTensor] = None return_dict: typing.Optional[bool] = None interpolate_pos_encoding: bool = False use_cache: typing.Optional[bool] = None **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] ) → transformers.models.instructblipvideo.modeling_instructblipvideo.InstructBlipVideoForConditionalGenerationModelOutput or tuple(torch.FloatTensor)
Parameters
- pixel_values (
torch.FloatTensor of shape (batch_size, num_channels, image_size, image_size)) — The tensors corresponding to the input images. Pixel values can be obtained using InstructBlipVideoProcessor. See InstructBlipVideoProcessor.__call__() for details. - qformer_input_ids (
torch.LongTensorof shape (batch_size, sequence_length)) — The sequence used as a prompt to be fed to the Q-Former module. - qformer_attention_mask (
torch.LongTensorof shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. - input_ids (
torch.FloatTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- attention_mask (
torch.LongTensorof shape(batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:- 1 for tokens that are not masked,
- 0 for tokens that are masked.
- decoder_input_ids (
torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- decoder_attention_mask (
torch.LongTensorof shape(batch_size, target_sequence_length), optional) — Mask to avoid performing attention on certain token indices. By default, a causal mask will be used, to make sure the model can only look at previous inputs in order to predict the future. - inputs_embeds (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passinginput_idsyou can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_idsindices into associated vectors than the model’s internal embedding lookup matrix. - output_attentions (
bool, optional) — Whether or not to return the attentions tensors of all attention layers. Seeattentionsunder returned tensors for more detail. - output_hidden_states (
bool, optional) — Whether or not to return the hidden states of all layers. Seehidden_statesunder returned tensors for more detail. - labels (
torch.LongTensorof shape(batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in[0, ..., config.vocab_size]or -100 (seeinput_idsdocstring). Tokens with indices set to-100are ignored (masked), the loss is only computed for the tokens with labels in[0, ..., config.vocab_size]. - return_dict (
bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. - interpolate_pos_encoding (
bool, defaults toFalse) — Whether to interpolate the pre-trained position encodings. - use_cache (
bool, optional) — If set toTrue,past_key_valueskey value states are returned and can be used to speed up decoding (seepast_key_values).
Returns
transformers.models.instructblipvideo.modeling_instructblipvideo.InstructBlipVideoForConditionalGenerationModelOutput or tuple(torch.FloatTensor)
A transformers.models.instructblipvideo.modeling_instructblipvideo.InstructBlipVideoForConditionalGenerationModelOutput or a tuple of
torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various
elements depending on the configuration (InstructBlipVideoConfig) and inputs.
- loss (
torch.FloatTensor, optional, returned whenlabelsis provided,torch.FloatTensorof shape(1,)) — Language modeling loss from the language model. - logits (
torch.FloatTensorof shape(batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head of the language model. - vision_outputs (
torch.FloatTensor, optional, defaults toNone) — Outputs of the vision encoder. - qformer_outputs (
tuple[torch.FloatTensor], optional, defaults toNone) — Outputs of the Q-Former (Querying Transformer). - language_model_outputs (
CausalLMOutputWithPastorSeq2SeqLMOutput) — Outputs of the language model.
The InstructBlipVideoForConditionalGeneration forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples:
>>> from transformers import InstructBlipVideoProcessor, InstructBlipVideoForConditionalGeneration
>>> import torch
>>> from huggingface_hub import hf_hub_download
>>> import av
>>> import numpy as np
>>> def read_video_pyav(container, indices):
... '''
... Decode the video with PyAV decoder.
... Args:
... container (`av.container.input.InputContainer`): PyAV container.
... indices (`list[int]`): List of frame indices to decode.
... Returns:
... result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
... '''
... frames = []
... container.seek(0)
... start_index = indices[0]
... end_index = indices[-1]
... for i, frame in enumerate(container.decode(video=0)):
... if i > end_index:
... break
... if i >= start_index and i in indices:
... frames.append(frame)
... return np.stack([x.to_ndarray(format="rgb24") for x in frames])
>>> model = InstructBlipVideoForConditionalGeneration.from_pretrained("Salesforce/instructblip-vicuna-7b", device_map="auto")
>>> processor = InstructBlipVideoProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
>>> file_path = hf_hub_download(
... repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
... )
>>> container = av.open(file_path)
>>> # sample uniformly 4 frames from the video
>>> total_frames = container.streams.video[0].frames
>>> indices = np.arange(0, total_frames, total_frames / 4).astype(int)
>>> clip = read_video_pyav(container, indices)
>>> prompt = "What is happening in the video?"
>>> inputs = processor(text=prompt, images=clip, return_tensors="pt").to(model.device)
>>> outputs = model.generate(
... **inputs,
... do_sample=False,
... num_beams=5,
... max_length=256,
... repetition_penalty=1.5,
... length_penalty=1.0,
... )
>>> generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip()
>>> print(generated_text)
"A person is eating a bowl of pasta, and they are using a fork to eat it. The person is sitting at a table, and the plate of pasta is on the table in front"generate
< source >( pixel_values: FloatTensor qformer_input_ids: typing.Optional[torch.LongTensor] = None qformer_attention_mask: typing.Optional[torch.LongTensor] = None input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.LongTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None interpolate_pos_encoding: bool = False **generate_kwargs ) → captions (list)
Parameters
- pixel_values (
torch.FloatTensor of shape (batch_size, num_channels, height, width) or (batch_size, num_frames, num_channels, height, width)) — Input images or videos to be processed. - qformer_input_ids (
torch.LongTensorof shape (batch_size, sequence_length), optional) — The sequence used as a prompt to be fed to the Q-Former module. - qformer_attention_mask (
torch.LongTensorof shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. - input_ids (
torch.LongTensorof shape (batch_size, sequence_length), optional) — The sequence used as a prompt for the generation. - attention_mask (
torch.LongTensorof shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. - inputs_embeds (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size)) — Embedded representation of the inputs. Should be float, not int tokens. - interpolate_pos_encoding (
bool, optional, defaults toFalse) — Whether to interpolate the positional encoding of the image embeddings.
Returns
captions (list)
A list of strings of length batch_size * num_captions.
Overrides the generate() function to enable using the model as a conditional generator.