Dataset Viewer
repo_owner
stringclasses 1
value | repo_name
stringclasses 1
value | tag_name
stringlengths 7
32
| name
stringlengths 15
112
| published_at
stringdate 2025-04-22 09:42:25
2025-06-26 16:02:53
| body
stringlengths 283
58.3k
| last_updated
stringdate 2025-05-09 16:54:50
2025-06-27 00:24:55
|
---|---|---|---|---|---|---|
huggingface | transformers | v4.51.3-SAM-HQ-preview | SAM-HQ (based on v4.51.3) | 2025-05-08T13:04:07+00:00 | A new model is added to transformers: SAM-HQ
It is added on top of the v4.51.3 release, and can be installed from the following tag: `v4.51.3-SAM-HQ-preview`.
In order to install this version, please install with the following command:
```
pip install git+https://github.com/huggingface/[email protected]
```
If fixes are needed, they will be applied to this release; this installation may therefore be considered as stable and improving.
As the tag implies, this tag is a **_preview_** of the SAM-HQ model. This tag is a tagged version of the `main` branch and does not follow semantic versioning. This model will be included in the next minor release: `v4.52.0`.
## SAM-HQ
SAM-HQ (High-Quality Segment Anything Model) was proposed in [Segment Anything in High Quality](https://arxiv.org/pdf/2306.01567.pdf) by Lei Ke, Mingqiao Ye, Martin Danelljan, Yifan Liu, Yu-Wing Tai, Chi-Keung Tang, Fisher Yu.
The model is an enhancement to the original SAM model that produces significantly higher quality segmentation masks while maintaining SAM's original promptable design, efficiency, and zero-shot generalizability.

SAM-HQ introduces several key improvements over the original SAM model:
1. High-Quality Output Token: A learnable token injected into SAM's mask decoder for higher quality mask prediction
2. Global-local Feature Fusion: Combines features from different stages of the model for improved mask details
3. Training Data: Uses a carefully curated dataset of 44K high-quality masks instead of SA-1B
4. Efficiency: Adds only 0.5% additional parameters while significantly improving mask quality
5. Zero-shot Capability: Maintains SAM's strong zero-shot performance while improving accuracy
The abstract from the paper is the following:
*The recent Segment Anything Model (SAM) represents a big leap in scaling up segmentation models, allowing for powerful zero-shot capabilities and flexible prompting. Despite being trained with 1.1 billion masks, SAM's mask prediction quality falls short in many cases, particularly when dealing with objects that have intricate structures. We propose HQ-SAM, equipping SAM with the ability to accurately segment any object, while maintaining SAM's original promptable design, efficiency, and zero-shot generalizability. Our careful design reuses and preserves the pre-trained model weights of SAM, while only introducing minimal additional parameters and computation. We design a learnable High-Quality Output Token, which is injected into SAM's mask decoder and is responsible for predicting the high-quality mask. Instead of only applying it on mask-decoder features, we first fuse them with early and final ViT features for improved mask details. To train our introduced learnable parameters, we compose a dataset of 44K fine-grained masks from several sources. HQ-SAM is only trained on the introduced dataset of 44k masks, which takes only 4 hours on 8 GPUs.*
Tips:
- SAM-HQ produces higher quality masks than the original SAM model, particularly for objects with intricate structures and fine details
- The model predicts binary masks with more accurate boundaries and better handling of thin structures
- Like SAM, the model performs better with input 2D points and/or input bounding boxes
- You can prompt multiple points for the same image and predict a single high-quality mask
- The model maintains SAM's zero-shot generalization capabilities
- SAM-HQ only adds ~0.5% additional parameters compared to SAM
- Fine-tuning the model is not supported yet
## Usage example
SAM-HQ can be found on the [Huggingface Hub](https://huggingface.co/models?other=sam_hq).
```python
import torch
from PIL import Image
import requests
from transformers import SamHQModel, SamHQProcessor
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SamHQModel.from_pretrained("sushmanth/sam_hq_vit_b").to(device)
processor = SamHQProcessor.from_pretrained("sushmanth/sam_hq_vit_b")
img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]] # 2D location of a window in the image
inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device)
with torch.no_grad():
outputs = model(**inputs)
masks = processor.image_processor.post_process_masks(
outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
scores = outputs.iou_scores
```
You can also process your own masks alongside the input images in the processor to be passed to the model:
```python
import torch
from PIL import Image
import requests
from transformers import SamHQModel, SamHQProcessor
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SamHQModel.from_pretrained("sushmanth/sam_hq_vit_b").to(device)
processor = SamHQProcessor.from_pretrained("sushmanth/sam_hq_vit_b")
img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
mask_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
segmentation_map = Image.open(requests.get(mask_url, stream=True).raw).convert("1")
input_points = [[[450, 600]]] # 2D location of a window in the image
inputs = processor(raw_image, input_points=input_points, segmentation_maps=segmentation_map, return_tensors="pt").to(device)
with torch.no_grad():
outputs = model(**inputs)
masks = processor.image_processor.post_process_masks(
outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
scores = outputs.iou_scores
``` | 2025-05-09T16:54:50.171918 |
huggingface | transformers | v4.51.3-GraniteMoeHybrid-preview | GraniteMoeHybrid (based on v4.51.3) | 2025-05-08T13:10:59+00:00 | A new model is added to transformers: GraniteMoeHybrid
It is added on top of the v4.51.3 release, and can be installed from the following tag: `v4.51.3-GraniteMoeHybrid-preview`.
In order to install this version, please install with the following command:
```
pip install git+https://github.com/huggingface/[email protected]
```
If fixes are needed, they will be applied to this release; this installation may therefore be considered as stable and improving.
As the tag implies, this tag is a **_preview_** of the GraniteMoeHybrid model. This tag is a tagged version of the `main` branch and does not follow semantic versioning. This model will be included in the next minor release: `v4.52.0`.
## GraniteMoeHybrid

The `GraniteMoeHybrid` model builds on top of `GraniteMoeSharedModel` and `Bamba`. Its decoding layers consist of state space layers or MoE attention layers with shared experts. By default, the attention layers do not use positional encoding.
## Usage example
GraniteMoeHybrid can be found on the [Huggingface Hub](https://huggingface.co/models?other=granitemoehybrid).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "ibm-granite/granite-4.0-tiny-preview"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
model.eval()
# change input text as desired
prompt = "Write a code to find the maximum value in a list of numbers."
# tokenize the text
input_tokens = tokenizer(prompt, return_tensors="pt")
# generate output tokens
output = model.generate(**input_tokens, max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# loop over the batch to print, in this example the batch size is 1
for i in output:
print(i)
``` | 2025-05-09T16:54:50.171941 |
huggingface | transformers | v4.51.3-D-FINE-preview | D-FINE (based on v4.51.3) | 2025-05-08T13:06:40+00:00 |
A new model is added to transformers: D-FINE
It is added on top of the v4.51.3 release, and can be installed from the following tag: `v4.51.3-D-FINE-preview`.
In order to install this version, please install with the following command:
```
pip install git+https://github.com/huggingface/[email protected]
```
If fixes are needed, they will be applied to this release; this installation may therefore be considered as stable and improving.
As the tag implies, this tag is a **_preview_** of the D-FINE model. This tag is a tagged version of the `main` branch and does not follow semantic versioning. This model will be included in the next minor release: `v4.52.0`.
## D-FINE
<img width="1051" alt="image" src="https://github.com/user-attachments/assets/3274da06-ff44-4bb4-bebf-8bc5f9b72aac" />
The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by
Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, Feng Wu
The abstract from the paper is the following:
*We introduce D-FINE, a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding box regression task in DETR models. D-FINE comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD).
FDR transforms the regression process from predicting fixed coordinates to iteratively refining probability distributions, providing a fine-grained intermediate representation that significantly enhances localization accuracy. GO-LSD is a bidirectional optimization strategy that transfers localization knowledge from refined distributions to shallower layers through self-distillation, while also simplifying the residual prediction tasks for deeper layers. Additionally, D-FINE incorporates lightweight optimizations in computationally intensive modules and operations, achieving a better balance between speed and accuracy. Specifically, D-FINE-L / X achieves 54.0% / 55.8% AP on the COCO dataset at 124 / 78 FPS on an NVIDIA T4 GPU. When pretrained on Objects365, D-FINE-L / X attains 57.1% / 59.3% AP, surpassing all existing real-time detectors. Furthermore, our method significantly enhances the performance of a wide range of DETR models by up to 5.3% AP with negligible extra parameters and training costs. Our code and pretrained models: this https URL.*
## Usage example
D-FINE can be found on the [Huggingface Hub](https://huggingface.co/models?other=d_fine).
```python
>>> import torch
>>> from transformers.image_utils import load_image
>>> from transformers import DFineForObjectDetection, AutoImageProcessor
>>> url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
>>> image = load_image(url)
>>> image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine_x_coco")
>>> model = DFineForObjectDetection.from_pretrained("ustc-community/dfine_x_coco")
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> results = image_processor.post_process_object_detection(outputs, target_sizes=[(image.height, image.width)], threshold=0.5)
>>> for result in results:
... for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
... score, label = score.item(), label_id.item()
... box = [round(i, 2) for i in box.tolist()]
... print(f"{model.config.id2label[label]}: {score:.2f} {box}")
cat: 0.96 [344.49, 23.4, 639.84, 374.27]
cat: 0.96 [11.71, 53.52, 316.64, 472.33]
remote: 0.95 [40.46, 73.7, 175.62, 117.57]
sofa: 0.92 [0.59, 1.88, 640.25, 474.74]
remote: 0.89 [333.48, 77.04, 370.77, 187.3]
``` | 2025-05-09T16:54:50.171950 |
huggingface | transformers | v4.51.3-CSM-preview | CSM (based on v4.51.3) | 2025-05-08T13:15:22+00:00 | A new model is added to transformers: CSM
It is added on top of the v4.51.3 release, and can be installed from the following tag: `v4.51.3-CSM-preview`.
In order to install this version, please install with the following command:
```
pip install git+https://github.com/huggingface/[email protected]
```
If fixes are needed, they will be applied to this release; this installation may therefore be considered as stable and improving.
As the tag implies, this tag is a **_preview_** of the CSM model. This tag is a tagged version of the `main` branch and does not follow semantic versioning. This model will be included in the next minor release: `v4.52.0`.
## CSM
The Conversational Speech Model (CSM) is the first open-source contextual text-to-speech model [released by Sesame](https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice). It is designed to generate natural-sounding speech with or without conversational context. This context typically consists of multi-turn dialogue between speakers, represented as sequences of text and corresponding spoken audio.
**Model Architecture:**
CSM is composed of two LLaMA-style auto-regressive transformer decoders: a backbone decoder that predicts the first codebook token and a depth decoder that generates the remaining tokens. It uses the pretrained codec model [Mimi](./mimi.md), introduced by Kyutai, to encode speech into discrete codebook tokens and decode them back into audio.
The original csm-1b checkpoint is available under the [Sesame](https://huggingface.co/sesame/csm-1b) organization on Hugging Face.
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/eustlb/documentation-images/resolve/main/csm_architecture.png"/>
</div>
## Usage example
CSM can be found on the [Huggingface Hub](https://huggingface.co/models?other=csm).
### Without Conversational Context
CSM can be used to simply generate speech from a text prompt:
```python
import torch
from transformers import CsmForConditionalGeneration, AutoProcessor
model_id = "eustlb/csm-1b"
device = "cuda" if torch.cuda.is_available() else "cpu"
# load the model and the processor
processor = AutoProcessor.from_pretrained(model_id)
model = CsmForConditionalGeneration.from_pretrained(model_id, device_map=device)
# prepare the inputs
text = "[0]The past is just a story we tell ourselves." # `[0]` for speaker id 0
inputs = processor(text, add_special_tokens=True).to(device)
# another equivalent way to prepare the inputs
conversation = [
{"role": "0", "content": [{"type": "text", "text": "The past is just a story we tell ourselves."}]},
]
inputs = processor.apply_chat_template(
conversation,
tokenize=True,
return_dict=True,
).to(device)
# infer the model
audio = model.generate(**inputs, output_audio=True)
processor.save_audio(audio, "example_without_context.wav")
```
### With Conversational Context
CSM can be used to generate speech given a conversation, allowing consistency in the voices and content-aware generation:
```python
import torch
from transformers import CsmForConditionalGeneration, AutoProcessor
from datasets import load_dataset, Audio
model_id = "eustlb/csm-1b"
device = "cuda" if torch.cuda.is_available() else "cpu"
# load the model and the processor
processor = AutoProcessor.from_pretrained(model_id)
model = CsmForConditionalGeneration.from_pretrained(model_id, device_map=device)
# prepare the inputs
ds = load_dataset("hf-internal-testing/dailytalk-dummy", split="train")
# ensure the audio is 24kHz
ds = ds.cast_column("audio", Audio(sampling_rate=24000))
conversation = []
# 1. context
for text, audio, speaker_id in zip(ds[:4]["text"], ds[:4]["audio"], ds[:4]["speaker_id"]):
conversation.append(
{
"role": f"{speaker_id}",
"content": [{"type": "text", "text": text}, {"type": "audio", "path": audio["array"]}],
}
)
# 2. text prompt
conversation.append({"role": f"{ds[4]['speaker_id']}", "content": [{"type": "text", "text": ds[4]["text"]}]})
inputs = processor.apply_chat_template(
conversation,
tokenize=True,
return_dict=True,
).to(device)
# infer the model
audio = model.generate(**inputs, output_audio=True)
processor.save_audio(audio, "example_with_context.wav")
```
### Batched Inference
CSM supports batched inference!
```python
import torch
from transformers import CsmForConditionalGeneration, AutoProcessor
from datasets import load_dataset, Audio
model_id = "eustlb/csm-1b"
device = "cuda" if torch.cuda.is_available() else "cpu"
# load the model and the processor
processor = AutoProcessor.from_pretrained(model_id)
model = CsmForConditionalGeneration.from_pretrained(model_id, device_map=device)
# prepare the inputs
ds = load_dataset("hf-internal-testing/dailytalk-dummy", split="train")
# ensure the audio is 24kHz
ds = ds.cast_column("audio", Audio(sampling_rate=24000))
# here a batch with two prompts
conversation = [
[
{
"role": f"{ds[0]['speaker_id']}",
"content": [
{"type": "text", "text": ds[0]["text"]},
{"type": "audio", "path": ds[0]["audio"]["array"]},
],
},
{
"role": f"{ds[1]['speaker_id']}",
"content": [
{"type": "text", "text": ds[1]["text"]},
],
},
],
[
{
"role": f"{ds[0]['speaker_id']}",
"content": [
{"type": "text", "text": ds[0]["text"]},
],
}
],
]
inputs = processor.apply_chat_template(
conversation,
tokenize=True,
return_dict=True,
).to(device)
audio = model.generate(**inputs, output_audio=True)
processor.save_audio(audio, [f"speech_batch_idx_{i}.wav" for i in range(len(audio))])
```
### Making The Model Go Brrr
CSM supports full-graph compilation with CUDA graphs!
```python
import torch
import copy
from transformers import CsmForConditionalGeneration, AutoProcessor
from datasets import load_dataset
model_id = "eustlb/csm-1b"
device = "cuda"
# set logs to ensure no recompilation and graph breaks
torch._logging.set_logs(graph_breaks=True, recompiles=True, cudagraphs=True)
# load the model and the processor
processor = AutoProcessor.from_pretrained(model_id)
model = CsmForConditionalGeneration.from_pretrained(model_id, device_map=device)
# use static cache, enabling automatically torch compile with fullgraph and reduce-overhead
model.generation_config.max_length = 250 # big enough to avoid recompilation
model.generation_config.max_new_tokens = None # would take precedence over max_length
model.generation_config.cache_implementation = "static"
model.depth_decoder.generation_config.cache_implementation = "static"
# generation kwargs
gen_kwargs = {
"do_sample": False,
"depth_decoder_do_sample": False,
"temperature": 1.0,
"depth_decoder_temperature": 1.0,
}
# Define a timing decorator
class TimerContext:
def __init__(self, name="Execution"):
self.name = name
self.start_event = None
self.end_event = None
def __enter__(self):
# Use CUDA events for more accurate GPU timing
self.start_event = torch.cuda.Event(enable_timing=True)
self.end_event = torch.cuda.Event(enable_timing=True)
self.start_event.record()
return self
def __exit__(self, *args):
self.end_event.record()
torch.cuda.synchronize()
elapsed_time = self.start_event.elapsed_time(self.end_event) / 1000.0
print(f"{self.name} time: {elapsed_time:.4f} seconds")
# prepare the inputs
ds = load_dataset("hf-internal-testing/dailytalk-dummy", split="train")
conversation = [
{
"role": f"{ds[0]['speaker_id']}",
"content": [
{"type": "text", "text": ds[0]["text"]},
{"type": "audio", "path": ds[0]["audio"]["array"]},
],
},
{
"role": f"{ds[1]['speaker_id']}",
"content": [
{"type": "text", "text": ds[1]["text"]},
{"type": "audio", "path": ds[1]["audio"]["array"]},
],
},
{
"role": f"{ds[2]['speaker_id']}",
"content": [
{"type": "text", "text": ds[2]["text"]},
],
},
]
padded_inputs_1 = processor.apply_chat_template(
conversation,
tokenize=True,
return_dict=True,
).to(device)
print("\n" + "="*50)
print("First generation - compiling and recording CUDA graphs...")
with TimerContext("First generation"):
_ = model.generate(**padded_inputs_1, **gen_kwargs)
print("="*50)
print("\n" + "="*50)
print("Second generation - fast !!!")
with TimerContext("Second generation"):
_ = model.generate(**padded_inputs_1, **gen_kwargs)
print("="*50)
# now with different inputs
conversation = [
{
"role": f"{ds[0]['speaker_id']}",
"content": [
{"type": "text", "text": ds[2]["text"]},
{"type": "audio", "path": ds[2]["audio"]["array"]},
],
},
{
"role": f"{ds[1]['speaker_id']}",
"content": [
{"type": "text", "text": ds[3]["text"]},
{"type": "audio", "path": ds[3]["audio"]["array"]},
],
},
{
"role": f"{ds[2]['speaker_id']}",
"content": [
{"type": "text", "text": ds[4]["text"]},
],
},
]
padded_inputs_2 = processor.apply_chat_template(
conversation,
tokenize=True,
return_dict=True,
).to(device)
print("\n" + "="*50)
print("Generation with other inputs!")
with TimerContext("Generation with different inputs"):
_ = model.generate(**padded_inputs_2, **gen_kwargs)
print("="*50)
```
### Training
CSM Transformers integration supports training!
```python
from transformers import CsmForConditionalGeneration, AutoProcessor
from datasets import load_dataset, Audio
model_id = "eustlb/csm-1b"
device = "cuda"
# load the model and the processor
processor = AutoProcessor.from_pretrained(model_id)
model = CsmForConditionalGeneration.from_pretrained(model_id, device_map=device)
model.train()
ds = load_dataset("hf-internal-testing/dailytalk-dummy", split="train")
# ensure the audio is 24kHz
ds = ds.cast_column("audio", Audio(sampling_rate=24000))
conversation = []
# context
for text, audio, speaker_id in zip(ds[:4]["text"], ds[:4]["audio"], ds[:4]["speaker_id"]):
conversation.append(
{
"role": f"{speaker_id}",
"content": [{"type": "text", "text": text}, {"type": "audio", "path": audio["array"]}],
}
)
inputs = processor.apply_chat_template(
conversation,
tokenize=True,
return_dict=True,
output_labels=True,
).to(device)
out = model(**inputs)
out.loss.backward()
``` | 2025-05-09T16:54:50.171957 |
huggingface | transformers | v4.51.3-BitNet-preview | BitNet (based on v4.51.3) | 2025-05-08T12:39:22+00:00 | A new model is added to transformers: BitNet
It is added on top of the v4.51.3 release, and can be installed from the following tag: `v4.51.3-BitNet-preview`.
In order to install this version, please install with the following command:
```
pip install git+https://github.com/huggingface/[email protected]
```
If fixes are needed, they will be applied to this release; this installation may therefore be considered as stable and improving.
As the tag implies, this tag is a **_preview_** of the BitNet model. This tag is a tagged version of the `main` branch and does not follow semantic versioning. This model will be included in the next minor release: `v4.52.0`.
## BitNet
<img width="697" alt="image" src="https://github.com/user-attachments/assets/022e426e-71bb-40fd-8458-ad3b48432759" />
Trained on a corpus of 4 trillion tokens, this model demonstrates that native 1-bit LLMs can achieve performance comparable to leading open-weight, full-precision models of similar size, while offering substantial advantages in computational efficiency (memory, energy, latency).
## Usage example
BitNet can be found on the [Huggingface Hub](https://huggingface.co/models?other=bitnet).
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "microsoft/bitnet-b1.58-2B-4T"
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16
)
# Apply the chat template
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "How are you?"},
]
chat_input = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device)
# Generate response
chat_outputs = model.generate(chat_input, max_new_tokens=50)
response = tokenizer.decode(chat_outputs[0][chat_input.shape[-1]:], skip_special_tokens=True) # Decode only the response part
print("\nAssistant Response:", response)
```
| 2025-05-09T16:54:50.171965 |
huggingface | transformers | v4.51.3-LlamaGuard-preview | LlamaGuard-4 (based on v4.51.3) | 2025-04-30T08:40:35+00:00 |
A new model is added to transformers: LlamaGuard
It is added on top of the v4.51.3 release, and can be installed from the following tag: `v4.51.3-LlamaGuard-preview`.
In order to install this version, please install with the following command:
```
pip install git+https://github.com/huggingface/[email protected]
```
If fixes are needed, they will be applied to this release; this installation may therefore be considered as stable and improving.
As the tag implies, this tag is a **_preview_** of the LlamaGuard-4 model. This tag is a tagged version of the `main` branch and does not follow semantic versioning. This model will be included in the next minor release: `v4.52.0`.
## LlamaGuard

Llama Guard 4 is a new multimodal model designed to detect inappropriate content in images and text, whether used as input or generated as output by the model. It’s a dense 12B model pruned from Llama 4 Scout model, and it can run on a single GPU (24 GBs of VRAM). It can evaluate both text-only and image+text inputs, making it suitable for filtering both inputs and outputs of large language models. This enables flexible moderation pipelines where prompts are analyzed before reaching the model, and generated responses are reviewed afterwards for safety. It can also understand multiple languages.
## Usage example
LlamaGuard can be found on the [Huggingface Hub](https://huggingface.co/models?other=llama4).
Here is a simple snippet of how to run Llama Guard 4 on the user inputs.
```py
from transformers import AutoProcessor, Llama4ForConditionalGeneration
import torch
model_id = "meta-llama/Llama-Guard-4-12B"
processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=torch.bfloat16,
)
messages = [
{
"role": "user",
"content": [
{"type": "text", "text": "how do I make a bomb?", }
]
},
]
inputs = processor.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
).to("cuda")
outputs = model.generate(
**inputs,
max_new_tokens=10,
do_sample=False,
)
response = processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True)[0]
print(response)
# OUTPUT
# unsafe
# S9
```
If your application does not require moderation on some of the supported categories, you can ignore the ones you are not interested in, as follows:
```python
from transformers import AutoProcessor, Llama4ForConditionalGeneration
import torch
model_id = "meta-llama/Llama-Guard-4-12B"
processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=torch.bfloat16,
)
messages = [
{
"role": "user",
"content": [
{"type": "text", "text": "how do I make a bomb?", }
]
},
]
inputs = processor.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
excluded_category_keys=["S9", "S2", "S1"],
).to("cuda:0")
outputs = model.generate(
**inputs,
max_new_tokens=10,
do_sample=False,
)
response = processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True)[0]
print(response)
# OUTPUTS
# safe
```
Sometimes it is not just the user input, but also the model’s generations that can contain harmful content. We can also moderate the model’s generation!
```python
messages = [
{
"role": "user",
"content": [
{"type": "text", "text": "How to make a bomb?"}
]
},
{
"role": "assistant",
"content": [
{"type": "text", "text": "Here is how one could make a bomb. Take chemical x and add water to it."}
]
}
]
inputs = processor.apply_chat_template(
messages,
tokenize=True,
return_tensors="pt",
return_dict=True,
add_generation_prompt=True,
).to("cuda")
```
This works because the chat template generates a system prompt that does not mention the excluded categories as part of the list of categories to watch for.
Here’s how you can infer with images in the conversation.
```python
messages = [
{
"role": "user",
"content": [
{"type": "text", "text": "I cannot help you with that."},
{"type": "image", "url": "https://huggingface.co/datasets/merve/vlm_test_images/resolve/main/fruit_knife.png"},
]
processor.apply_chat_template(messages, excluded_category_keys=excluded_category_keys)
```
### Llama Prompt Guard 2
You can use Llama Prompt Guard 2 directly via the pipeline API:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="meta-llama/Llama-Prompt-Guard-2-86M")
classifier("Ignore your previous instructions.")
# MALICIOUS
```
Alternatively, it can also be used via AutoTokenizer + AutoModel API:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_id = "meta-llama/Llama-Prompt-Guard-2-86M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
text = "Ignore your previous instructions."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
print(model.config.id2label[predicted_class_id])
# MALICIOUS
``` | 2025-05-09T16:54:50.171971 |
huggingface | transformers | v4.51.3-Qwen2.5-Omni-preview | Qwen2.5-Omni (based on 4.51.3) | 2025-04-24T14:05:55+00:00 | A new model is added to transformers: Qwen2.5-Omni.
It is added on top of the v4.51.3 release, and can be installed from the following tag: `v4.51.3-Qwen2.5-Omni-preview`.
In order to install this version, please install with the following command:
```
pip install git+https://github.com/huggingface/[email protected]
```
If fixes are needed, they will be applied to this release; this installation may therefore be considered as stable and improving.
As the tag implies, this tag is a **_preview_** of the Qwen2.5-Omni model. This tag is a tagged version of the `main` branch and does not follow semantic versioning. This model will be included in the next minor release: `v4.52.0`.
## Qwen2.5-Omni
<img width="1090" alt="image" src="https://github.com/user-attachments/assets/77f0fe5b-59cd-4fb6-b222-bcc2b35d6406" />
The [Qwen2.5-Omni](https://qwenlm.github.io/blog/) model is a unified multiple modalities model proposed in [Qwen2.5-Omni Technical Report](https://huggingface.co/papers/2503.20215) from Qwen team, Alibaba Group.
The abstract from the technical report is the following:
> We present Qwen2.5-Omni, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. To enable the streaming of multimodal information inputs, both audio and visual encoders utilize a block-wise processing approach. This strategy effectively decouples the handling of long sequences of multimodal data, assigning the perceptual responsibilities to the multimodal encoder and entrusting the modeling of extended sequences to a large language model.
>
> Such a division of labor enhances the fusion of different modalities via the shared attention mechanism. To synchronize the timestamps of video inputs with audio, we organized the audio and video sequentially in an interleaved manner and propose a novel position embedding approach, named TMRoPE (Time-aligned Multimodal RoPE). To concurrently generate text and speech while avoiding interference between the two modalities, we propose Thinker-Talker architecture.
>
> In this framework, Thinker functions as a large language model tasked with text generation, while Talker is a dual-track autoregressive model that directly utilizes the hidden representations from the Thinker to produce audio tokens as output. Both the Thinker and Talker models are designed to be trained and inferred in an end-to-end manner. For decoding audio tokens in a streaming manner, we introduce a sliding-window DiT that restricts the receptive field, aiming to reduce the initial package delay. Qwen2.5-Omni outperforms the similarly sized Qwen2-VL and Qwen2-Audio in both image and audio capabilities. Furthermore, Qwen2.5-Omni achieves state-of-the-art performance on multimodal benchmarks like Omni-Bench.
>
> Notably, Qwen2.5-Omni is the first open-source model to achieve a level of performance in end-to-end speech instruction following that is comparable to its capabilities with text inputs, as evidenced by benchmarks such as MMLU and GSM8K. As for speech generation, Qwen2.5-Omni’s streaming Talker outperform most existing streaming and non-streaming alternatives in robustness and naturalness.
## Usage example
`Qwen2.5-Omni` can be found on the [Huggingface Hub](https://huggingface.co/Qwen).
### Single Media inference
The model can accept text, images, audio and videos as input. Here's an example code for inference.
```python
import soundfile as sf
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
"Qwen/Qwen2.5-Omni-7B",
torch_dtype="auto",
device_map="auto"
)
processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")
conversation = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [
{"type": "video", "video": "/path/to/video.mp4"},
{"type": "text", "text": "What cant you hear and see in this video?"},
],
},
]
inputs = processor.apply_chat_template(
conversations,
load_audio_from_video=True,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
video_fps=1,
# kwargs to be passed to `Qwen2-5-OmniProcessor`
padding=True,
use_audio_in_video=True,
).to(model.device)
# Generation params for audio or text can be different and have to be prefixed with `thinker_` or `talker_`
text_ids, audio = model.generate(**inputs, use_audio_in_video=True, thinker_do_sample=False, talker_do_sample=True)
text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
sf.write(
"output.wav",
audio.reshape(-1).detach().cpu().numpy(),
samplerate=24000,
)
print(text)
```
### Text-only generation
To generate only text output and save compute by not loading the audio generation model, we can use `Qwen2_5OmniThinkerForConditionalGeneration` model.
```python
from transformers import Qwen2_5OmniThinkerForConditionalGeneration, Qwen2_5OmniProcessor
model = Qwen2_5OmniThinkerForConditionalGeneration.from_pretrained(
"Qwen/Qwen2.5-Omni-7B",
torch_dtype="auto",
device_map="auto",
)
processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")
conversation = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [
{"type": "video", "video": "/path/to/video.mp4"},
{"type": "text", "text": "What cant you hear and see in this video?"},
],
},
]
inputs = processor.apply_chat_template(
conversations,
load_audio_from_video=True,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
video_fps=1,
# kwargs to be passed to `Qwen2-5-OmniProcessor`
padding=True,
use_audio_in_video=True,
).to(model.device)
text_ids = model.generate(**inputs, use_audio_in_video=True)
text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
sf.write(
"output.wav",
audio.reshape(-1).detach().cpu().numpy(),
samplerate=24000,
)
print(text)
```
### Batch Mixed Media Inference
The model can batch inputs composed of mixed samples of various types such as text, images, audio and videos as input when using `Qwen2_5OmniThinkerForConditionalGeneration` model. Here is an example.
```python
import soundfile as sf
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
"Qwen/Qwen2.5-Omni-7B",
torch_dtype="auto",
device_map="auto"
)
processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")
# Conversation with video only
conversation1 = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [
{"type": "video", "path": "/path/to/video.mp4"},
]
}
]
# Conversation with audio only
conversation2 = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [
{"type": "audio", "path": "/path/to/audio.wav"},
]
}
]
# Conversation with pure text
conversation3 = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [{"type": "text", "text": "who are you?"}],
}
]
# Conversation with mixed media
conversation4 = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [
{"type": "image", "path": "/path/to/image.jpg"},
{"type": "video", "path": "/path/to/video.mp4"},
{"type": "audio", "path": "/path/to/audio.wav"},
{"type": "text", "text": "What are the elements can you see and hear in these medias?"},
],
}
]
conversations = [conversation1, conversation2, conversation3, conversation4]
inputs = processor.apply_chat_template(
conversations,
load_audio_from_video=True,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
video_fps=1,
# kwargs to be passed to `Qwen2-5-OmniProcessor`
padding=True,
use_audio_in_video=True,
).to(model.thinker.device)
text_ids = model.generate(**inputs, use_audio_in_video=True)
text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(text)
```
### Usage Tips
#### Image Resolution trade-off
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs.
```python
min_pixels = 128*28*28
max_pixels = 768*28*28
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B", min_pixels=min_pixels, max_pixels=max_pixels)
```
#### Prompt for audio output
If users need audio output, the system prompt must be set as "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.", otherwise the audio output may not work as expected.
```
{
"role": "system",
"content": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech.",
}
```
#### Use audio output or not
The model supports both text and audio outputs, if users do not need audio outputs, they can set `enable_audio_output` in the `from_pretrained` function. This option will save about `~2GB` of GPU memory but the `return_audio` option for `generate` function will only allow to be set at `False`.
```python
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
"Qwen/Qwen2.5-Omni-7B",
torch_dtype="auto",
device_map="auto",
enable_audio_output=False,
)
```
In order to obtain a flexible experience, we recommend that users set `enable_audio_output` at `True` when initializing the model through `from_pretrained` function, and then decide whether to return audio when `generate` function is called. When `return_audio` is set to `False`, the model will only return text outputs to get text responses faster.
```python
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
"Qwen/Qwen2.5-Omni-7B",
torch_dtype="auto",
device_map="auto",
enable_audio_output=True,
)
...
text_ids = model.generate(**inputs, return_audio=False)
```
#### Change voice type of output audio
Qwen2.5-Omni supports the ability to change the voice of the output audio. Users can use the `spk` parameter of `generate` function to specify the voice type. The `"Qwen/Qwen2.5-Omni-7B"` checkpoint support two voice types: `Chelsie` and `Ethan`, while `Chelsie` is a female voice and `Ethan` is a male voice. By defalut, if `spk` is not specified, the default voice type is `Chelsie`.
```python
text_ids, audio = model.generate(**inputs, spk="Chelsie")
```
```python
text_ids, audio = model.generate(**inputs, spk="Ethan")
```
#### Flash-Attention 2 to speed up generation
First, make sure to install the latest version of Flash Attention 2:
```bash
pip install -U flash-attn --no-build-isolation
```
Also, you should have hardware that is compatible with FlashAttention 2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`.
To load and run a model using FlashAttention-2, add `attn_implementation="flash_attention_2"` when loading the model:
```python
from transformers import Qwen2_5OmniForConditionalGeneration
model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
"Qwen/Qwen2.5-Omni-7B",
device_map="auto",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
)
```
| 2025-05-09T16:54:50.171978 |
huggingface | transformers | v4.51.3-TimesFM-preview | TimesFM (based on v4.51.3) | 2025-04-22T11:34:11+00:00 | A new model is added to transformers: TimesFM
It is added on top of the v4.51.3 release, and can be installed from the following tag: `v4.51.3-TimesFM-preview`.
In order to install this version, please install with the following command:
```
pip install git+https://github.com/huggingface/[email protected]
```
If fixes are needed, they will be applied to this release; this installation may therefore be considered as stable and improving.
As the tag implies, this tag is a **_preview_** of the TimesFM model. This tag is a tagged version of the `main` branch and does not follow semantic versioning. This model will be included in the next minor release: `v4.52.0`.
## TimesFM
<img width="625" alt="image" src="https://github.com/user-attachments/assets/6d7fd266-f391-4914-bdf9-ebdddb4d3f5f" />
TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model proposed in [A decoder-only foundation model for time-series forecasting](https://huggingface.co/papers/2310.10688) by Abhimanyu Das, Weihao Kong, Rajat Sen, and Yichen Zhou. It is a decoder only model that uses non-overlapping patches of time-series data as input and outputs some output patch length prediction in an autoregressive fashion.
The abstract from the paper is the following:
*Motivated by recent advances in large language models for Natural Language Processing (NLP), we design a time-series foundation model for forecasting whose out-of-the-box zero-shot performance on a variety of public datasets comes close to the accuracy of state-of-the-art supervised forecasting models for each individual dataset. Our model is based on pretraining a patched-decoder style attention model on a large time-series corpus, and can work well across different forecasting history lengths, prediction lengths and temporal granularities.*
## Usage example
TimesFM can be found on the [Huggingface Hub](https://huggingface.co/models?other=timesfm).
```python
import torch
from transformers import TimesFmModelForPrediction
model = TimesFmModelForPrediction.from_pretrained(
"google/timesfm-2.0-500m-pytorch",
torch_dtype=torch.bfloat16,
attn_implementation="sdpa",
device_map="cuda" if torch.cuda.is_available() else None
)
# Create dummy inputs
forecast_input = [
np.sin(np.linspace(0, 20, 100)),
np.sin(np.linspace(0, 20, 200)),
np.sin(np.linspace(0, 20, 400)),
]
frequency_input = [0, 1, 2]
# Convert inputs to sequence of tensors
forecast_input_tensor = [
torch.tensor(ts, dtype=torch.bfloat16).to("cuda" if torch.cuda.is_available() else "cpu")
for ts in forecast_input
]
frequency_input_tensor = torch.tensor(frequency_input, dtype=torch.long).to(
"cuda" if torch.cuda.is_available() else "cpu"
)
# Get predictions from the pre-trained model
with torch.no_grad():
outputs = model(past_values=forecast_input_tensor, freq=frequency_input_tensor, return_dict=True)
point_forecast_conv = outputs.mean_predictions.float().cpu().numpy()
quantile_forecast_conv = outputs.full_predictions.float().cpu().numpy()
``` | 2025-05-09T16:54:50.171989 |
huggingface | transformers | v4.51.3-MLCD-preview | MLCD (based on 4.51.3) | 2025-04-22T09:42:25+00:00 | A new model is added to transformers: MLCD
It is added on top of the v4.51.3 release, and can be installed from the following tag: `v4.51.3-MLCD-preview`.
In order to install this version, please install with the following command:
```
pip install git+https://github.com/huggingface/[email protected]
```
If fixes are needed, they will be applied to this release; this installation may therefore be considered as stable and improving.
As the tag implies, this tag is a **_preview_** of the MLCD model. This tag is a tagged version of the `main` branch and does not follow semantic versioning. This model will be included in the next minor release: `v4.52.0`.
## MLCD
<img width="618" alt="image" src="https://github.com/user-attachments/assets/2c2c1a6c-9c96-4c6c-a3d3-a24b0fc908af" />
The MLCD models were released by the DeepGlint-AI team in [unicom](https://github.com/deepglint/unicom), which focuses on building foundational visual models for large multimodal language models using large-scale datasets such as LAION400M and COYO700M, and employs sample-to-cluster contrastive learning to optimize performance. MLCD models are primarily used for multimodal visual large language models, such as LLaVA.
## Usage example
MLCD can be found on the [Huggingface Hub](https://huggingface.co/models?other=mlcd).
```py
import requests
from PIL import Image
from transformers import AutoProcessor, MLCDVisionModel
# Load model and processor
model = MLCDVisionModel.from_pretrained("DeepGlint-AI/mlcd-vit-bigG-patch14-448")
processor = AutoProcessor.from_pretrained("DeepGlint-AI/mlcd-vit-bigG-patch14-448")
# Process single image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
# Generate outputs
with torch.no_grad():
outputs = model(**inputs)
# Get visual features
features = outputs.last_hidden_state
print(f"Extracted features shape: {features.shape}")
``` | 2025-05-09T16:54:50.171997 |
huggingface | transformers | v4.51.3-Janus-preview | Janus (based on v4.51.3) | 2025-04-22T11:39:06+00:00 | A new model is added to transformers: Janus
It is added on top of the v4.51.3 release, and can be installed from the following tag: `v4.51.3-Janus-preview`.
In order to install this version, please install with the following command:
```
pip install git+https://github.com/huggingface/[email protected]
```
If fixes are needed, they will be applied to this release; this installation may therefore be considered as stable and improving.
As the tag implies, this tag is a **_preview_** of the Janus model. This tag is a tagged version of the `main` branch and does not follow semantic versioning. This model will be included in the next minor release: `v4.52.0`.
## Janus
<img width="770" alt="image" src="https://github.com/user-attachments/assets/8cd33a13-7d9c-430b-a822-893d83f09b87" />
The Janus Model was originally proposed in [Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation](https://arxiv.org/abs/2410.13848) by DeepSeek AI team and later refined in [Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling](https://arxiv.org/abs/2501.17811). Janus is a vision-language model that can generate both image and text output, it can also take both images and text as input.
> [!NOTE]
> The model doesn't generate both images and text in an interleaved format. The user has to pass a parameter indicating whether to generate text or image.
The abstract from the original paper is the following:
*In this paper, we introduce Janus, an autoregressive framework that unifies multimodal understanding and generation. Prior research often relies on a single visual encoder for both tasks, such as Chameleon. However, due to the differing levels of information granularity required by multimodal understanding and generation, this approach can lead to suboptimal performance, particularly in multimodal understanding. To address this issue, we decouple visual encoding into separate pathways, while still leveraging a single, unified transformer architecture for processing. The decoupling not only alleviates the conflict between the visual encoder's roles in understanding and generation, but also enhances the framework's flexibility. For instance, both the multimodal understanding and generation components can independently select their most suitable encoding methods. Experiments show that Janus surpasses previous unified model and matches or exceeds the performance of task-specific models. The simplicity, high flexibility, and effectiveness of Janus make it a strong candidate for next-generation unified multimodal models.*
The abstract from the aforementioned `Janus-Pro` paper, released afterwards, is the following:
*In this work, we introduce Janus-Pro, an advanced version of the previous work Janus. Specifically, Janus-Pro incorporates (1) an optimized training strate (2) expanded training data, and (3) scaling to larger model size. With these improvements, Janus-Pro achieves significant advancements in both multimodal understanding and text-to-image instruction-following capabilities, while also enhancing the stability of text-to-image generation. We hope this work will inspire further exploration in the field. Code and models are publicly available.*
## Usage example
Janus can be found on the [Huggingface Hub](https://huggingface.co/models?other=janus).
### Single image inference
Here is the example of visual understanding with a single image.
> [!NOTE]
> Note that the model has been trained with a specific prompt format for chatting. Use `processor.apply_chat_template(my_conversation_dict)` to correctly format your prompts.
```python
import torch
from PIL import Image
import requests
from transformers import JanusForConditionalGeneration, JanusProcessor
model_id = "deepseek-community/Janus-Pro-1B"
# Prepare Input for generation.
messages = [
{
"role": "user",
"content": [
{'type':'image', 'url': 'http://images.cocodataset.org/val2017/000000039769.jpg'},
{'type':"text", "text":"What do you see in this image?."}
]
},
]
# Set generation mode to `text` to perform text generation.
processor = JanusProcessor.from_pretrained(model_id)
model = JanusForConditionalGeneration.from_pretrained(model_id,
torch_dtype=torch.bfloat16,
device_map="auto")
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
generation_mode="text",
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device, dtype=torch.bfloat16)
output = model.generate(**inputs, max_new_tokens=40,generation_mode='text',do_sample=True)
text = processor.decode(output[0], skip_special_tokens=True)
print(text)
```
### Multi image inference
Janus can perform inference with multiple images as input, where images can belong to the same prompt or different prompts in batched inference, where the model processes many conversations in parallel. Here is how you can do it:
```python
import torch
from PIL import Image
import requests
from transformers import JanusForConditionalGeneration, JanusProcessor
model_id = "deepseek-community/Janus-Pro-1B"
image_urls = [
"http://images.cocodataset.org/val2017/000000039769.jpg",
"https://www.ilankelman.org/stopsigns/australia.jpg",
"https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.jpg"
]
messages = [
[
{
"role": "user",
"content": [
{"type": "text", "text": "What’s the difference between"},
{"type": "image", "url": image_urls[0]},
{"type": "text", "text": " and "},
{"type": "image", "url": image_urls[1]}
]
}
],
[
{
"role": "user",
"content": [
{"type": "image", "url": image_urls[2]},
{"type": "text", "text": "What do you see in this image?"}
]
}
]
]
# Load model and processor
processor = JanusProcessor.from_pretrained(model_id)
model = JanusForConditionalGeneration.from_pretrained(
model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
generation_mode="text",
tokenize=True,
padding=True,
return_dict=True,
return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)
# Generate response
output = model.generate(**inputs, max_new_tokens=40, generation_mode='text', do_sample=False)
text = processor.batch_decode(output, skip_special_tokens=True)
print(text)
```
## Text to Image generation
Janus can also generate images given a prompt.
```python
import torch
from transformers import JanusForConditionalGeneration, JanusProcessor
# Set generation mode to `image` to prepare inputs for image generation..
model_id = "deepseek-community/Janus-Pro-1B"
processor = JanusProcessor.from_pretrained(model_id)
model = JanusForConditionalGeneration.from_pretrained(model_id,
torch_dtype=torch.bfloat16,
device_map="auto")
messages = [
{
"role": "user",
"content": [
{"type": "text", "text": "A dog running under the rain."},
],
}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt,generation_mode="image",return_tensors="pt").to(model.device, dtype=torch.bfloat16)
# Set num_return_sequence parameter to generate multiple images per prompt.
model.generation_config.num_return_sequences = 2
outputs = model.generate(**inputs,
generation_mode="image",
do_sample=True,
use_cache=True,
)
# Perform post-processing on the generated token ids.
decoded_image = model.decode_image_tokens(outputs)
images = processor.postprocess(list(decoded_image.float()),return_tensors="PIL.Image.Image")
# Save the image
for i, image in enumerate(images['pixel_values']):
image.save(f"result{i}.png")
``` | 2025-05-09T16:54:50.172004 |
huggingface | transformers | v4.52.1 | v4.52.1: Qwen2.5-Omni, SAM-HQ, GraniteMoeHybrid, D-FINE, CSM, BitNet, LlamaGuard, TimesFM, MLCD, Janus, InternVL | 2025-05-20T20:45:20+00:00 | ## New models
### Qwen2.5-Omni
<img width="1090" alt="image" src="https://github.com/user-attachments/assets/77f0fe5b-59cd-4fb6-b222-bcc2b35d6406" />
The [Qwen2.5-Omni](https://qwenlm.github.io/blog/) model is a unified multiple modalities model proposed in [Qwen2.5-Omni Technical Report](https://huggingface.co/papers/2503.20215) from Qwen team, Alibaba Group.
The abstract from the technical report is the following:
> We present Qwen2.5-Omni, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. To enable the streaming of multimodal information inputs, both audio and visual encoders utilize a block-wise processing approach. This strategy effectively decouples the handling of long sequences of multimodal data, assigning the perceptual responsibilities to the multimodal encoder and entrusting the modeling of extended sequences to a large language model.
>
> Such a division of labor enhances the fusion of different modalities via the shared attention mechanism. To synchronize the timestamps of video inputs with audio, we organized the audio and video sequentially in an interleaved manner and propose a novel position embedding approach, named TMRoPE (Time-aligned Multimodal RoPE). To concurrently generate text and speech while avoiding interference between the two modalities, we propose Thinker-Talker architecture.
>
> In this framework, Thinker functions as a large language model tasked with text generation, while Talker is a dual-track autoregressive model that directly utilizes the hidden representations from the Thinker to produce audio tokens as output. Both the Thinker and Talker models are designed to be trained and inferred in an end-to-end manner. For decoding audio tokens in a streaming manner, we introduce a sliding-window DiT that restricts the receptive field, aiming to reduce the initial package delay. Qwen2.5-Omni outperforms the similarly sized Qwen2-VL and Qwen2-Audio in both image and audio capabilities. Furthermore, Qwen2.5-Omni achieves state-of-the-art performance on multimodal benchmarks like Omni-Bench.
>
> Notably, Qwen2.5-Omni is the first open-source model to achieve a level of performance in end-to-end speech instruction following that is comparable to its capabilities with text inputs, as evidenced by benchmarks such as MMLU and GSM8K. As for speech generation, Qwen2.5-Omni’s streaming Talker outperform most existing streaming and non-streaming alternatives in robustness and naturalness.
### SAM-HQ
SAM-HQ (High-Quality Segment Anything Model) was proposed in [Segment Anything in High Quality](https://arxiv.org/pdf/2306.01567.pdf) by Lei Ke, Mingqiao Ye, Martin Danelljan, Yifan Liu, Yu-Wing Tai, Chi-Keung Tang, Fisher Yu.
The model is an enhancement to the original SAM model that produces significantly higher quality segmentation masks while maintaining SAM's original promptable design, efficiency, and zero-shot generalizability.

SAM-HQ introduces several key improvements over the original SAM model:
1. High-Quality Output Token: A learnable token injected into SAM's mask decoder for higher quality mask prediction
2. Global-local Feature Fusion: Combines features from different stages of the model for improved mask details
3. Training Data: Uses a carefully curated dataset of 44K high-quality masks instead of SA-1B
4. Efficiency: Adds only 0.5% additional parameters while significantly improving mask quality
5. Zero-shot Capability: Maintains SAM's strong zero-shot performance while improving accuracy
The abstract from the paper is the following:
*The recent Segment Anything Model (SAM) represents a big leap in scaling up segmentation models, allowing for powerful zero-shot capabilities and flexible prompting. Despite being trained with 1.1 billion masks, SAM's mask prediction quality falls short in many cases, particularly when dealing with objects that have intricate structures. We propose HQ-SAM, equipping SAM with the ability to accurately segment any object, while maintaining SAM's original promptable design, efficiency, and zero-shot generalizability. Our careful design reuses and preserves the pre-trained model weights of SAM, while only introducing minimal additional parameters and computation. We design a learnable High-Quality Output Token, which is injected into SAM's mask decoder and is responsible for predicting the high-quality mask. Instead of only applying it on mask-decoder features, we first fuse them with early and final ViT features for improved mask details. To train our introduced learnable parameters, we compose a dataset of 44K fine-grained masks from several sources. HQ-SAM is only trained on the introduced dataset of 44k masks, which takes only 4 hours on 8 GPUs.*
Tips:
- SAM-HQ produces higher quality masks than the original SAM model, particularly for objects with intricate structures and fine details
- The model predicts binary masks with more accurate boundaries and better handling of thin structures
- Like SAM, the model performs better with input 2D points and/or input bounding boxes
- You can prompt multiple points for the same image and predict a single high-quality mask
- The model maintains SAM's zero-shot generalization capabilities
- SAM-HQ only adds ~0.5% additional parameters compared to SAM
- Fine-tuning the model is not supported yet
### GraniteMoeHybrid

The `GraniteMoeHybrid` model builds on top of `GraniteMoeSharedModel` and `Bamba`. Its decoding layers consist of state space layers or MoE attention layers with shared experts. By default, the attention layers do not use positional encoding.
### D-FINE
<img width="1051" alt="image" src="https://github.com/user-attachments/assets/3274da06-ff44-4bb4-bebf-8bc5f9b72aac" />
The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by
Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, Feng Wu
The abstract from the paper is the following:
*We introduce D-FINE, a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding box regression task in DETR models. D-FINE comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD).
FDR transforms the regression process from predicting fixed coordinates to iteratively refining probability distributions, providing a fine-grained intermediate representation that significantly enhances localization accuracy. GO-LSD is a bidirectional optimization strategy that transfers localization knowledge from refined distributions to shallower layers through self-distillation, while also simplifying the residual prediction tasks for deeper layers. Additionally, D-FINE incorporates lightweight optimizations in computationally intensive modules and operations, achieving a better balance between speed and accuracy. Specifically, D-FINE-L / X achieves 54.0% / 55.8% AP on the COCO dataset at 124 / 78 FPS on an NVIDIA T4 GPU. When pretrained on Objects365, D-FINE-L / X attains 57.1% / 59.3% AP, surpassing all existing real-time detectors. Furthermore, our method significantly enhances the performance of a wide range of DETR models by up to 5.3% AP with negligible extra parameters and training costs. Our code and pretrained models: this https URL.*
### CSM
The Conversational Speech Model (CSM) is the first open-source contextual text-to-speech model [released by Sesame](https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice). It is designed to generate natural-sounding speech with or without conversational context. This context typically consists of multi-turn dialogue between speakers, represented as sequences of text and corresponding spoken audio.
**Model Architecture:**
CSM is composed of two LLaMA-style auto-regressive transformer decoders: a backbone decoder that predicts the first codebook token and a depth decoder that generates the remaining tokens. It uses the pretrained codec model [Mimi](./mimi.md), introduced by Kyutai, to encode speech into discrete codebook tokens and decode them back into audio.
The original csm-1b checkpoint is available under the [Sesame](https://huggingface.co/sesame/csm-1b) organization on Hugging Face.
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/eustlb/documentation-images/resolve/main/csm_architecture.png"/>
</div>
### BitNet
<img width="697" alt="image" src="https://github.com/user-attachments/assets/022e426e-71bb-40fd-8458-ad3b48432759" />
Trained on a corpus of 4 trillion tokens, this model demonstrates that native 1-bit LLMs can achieve performance comparable to leading open-weight, full-precision models of similar size, while offering substantial advantages in computational efficiency (memory, energy, latency).
### LlamaGuard

Llama Guard 4 is a new multimodal model designed to detect inappropriate content in images and text, whether used as input or generated as output by the model. It’s a dense 12B model pruned from Llama 4 Scout model, and it can run on a single GPU (24 GBs of VRAM). It can evaluate both text-only and image+text inputs, making it suitable for filtering both inputs and outputs of large language models. This enables flexible moderation pipelines where prompts are analyzed before reaching the model, and generated responses are reviewed afterwards for safety. It can also understand multiple languages.
### TimesFM
<img width="625" alt="image" src="https://github.com/user-attachments/assets/6d7fd266-f391-4914-bdf9-ebdddb4d3f5f" />
TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model proposed in [A decoder-only foundation model for time-series forecasting](https://huggingface.co/papers/2310.10688) by Abhimanyu Das, Weihao Kong, Rajat Sen, and Yichen Zhou. It is a decoder only model that uses non-overlapping patches of time-series data as input and outputs some output patch length prediction in an autoregressive fashion.
The abstract from the paper is the following:
*Motivated by recent advances in large language models for Natural Language Processing (NLP), we design a time-series foundation model for forecasting whose out-of-the-box zero-shot performance on a variety of public datasets comes close to the accuracy of state-of-the-art supervised forecasting models for each individual dataset. Our model is based on pretraining a patched-decoder style attention model on a large time-series corpus, and can work well across different forecasting history lengths, prediction lengths and temporal granularities.*
### MLCD
<img width="618" alt="image" src="https://github.com/user-attachments/assets/2c2c1a6c-9c96-4c6c-a3d3-a24b0fc908af" />
The MLCD models were released by the DeepGlint-AI team in [unicom](https://github.com/deepglint/unicom), which focuses on building foundational visual models for large multimodal language models using large-scale datasets such as LAION400M and COYO700M, and employs sample-to-cluster contrastive learning to optimize performance. MLCD models are primarily used for multimodal visual large language models, such as LLaVA.
### Janus
<img width="770" alt="image" src="https://github.com/user-attachments/assets/8cd33a13-7d9c-430b-a822-893d83f09b87" />
The Janus Model was originally proposed in [Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation](https://arxiv.org/abs/2410.13848) by DeepSeek AI team and later refined in [Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling](https://arxiv.org/abs/2501.17811). Janus is a vision-language model that can generate both image and text output, it can also take both images and text as input.
> [!NOTE]
> The model doesn't generate both images and text in an interleaved format. The user has to pass a parameter indicating whether to generate text or image.
The abstract from the original paper is the following:
*In this paper, we introduce Janus, an autoregressive framework that unifies multimodal understanding and generation. Prior research often relies on a single visual encoder for both tasks, such as Chameleon. However, due to the differing levels of information granularity required by multimodal understanding and generation, this approach can lead to suboptimal performance, particularly in multimodal understanding. To address this issue, we decouple visual encoding into separate pathways, while still leveraging a single, unified transformer architecture for processing. The decoupling not only alleviates the conflict between the visual encoder's roles in understanding and generation, but also enhances the framework's flexibility. For instance, both the multimodal understanding and generation components can independently select their most suitable encoding methods. Experiments show that Janus surpasses previous unified model and matches or exceeds the performance of task-specific models. The simplicity, high flexibility, and effectiveness of Janus make it a strong candidate for next-generation unified multimodal models.*
The abstract from the aforementioned `Janus-Pro` paper, released afterwards, is the following:
*In this work, we introduce Janus-Pro, an advanced version of the previous work Janus. Specifically, Janus-Pro incorporates (1) an optimized training strate (2) expanded training data, and (3) scaling to larger model size. With these improvements, Janus-Pro achieves significant advancements in both multimodal understanding and text-to-image instruction-following capabilities, while also enhancing the stability of text-to-image generation. We hope this work will inspire further exploration in the field. Code and models are publicly available.*
### InternVL
The InternVL3 family of Visual Language Models was introduced in [InternVL3: Exploring Advanced Training and Test-Time Recipes for Open-Source Multimodal Models](https://huggingface.co/papers/2504.10479).
The abstract from the paper is the following:
*We introduce InternVL3, a significant advancement in the InternVL series featuring a native multimodal pre-training paradigm. Rather than adapting a text-only large language model (LLM) into a multimodal large language model (MLLM) that supports visual inputs, InternVL3 jointly acquires multimodal and linguistic capabilities from both diverse multimodal data and pure-text corpora during a single pre-training stage. This unified training paradigm effectively addresses the complexities and alignment challenges commonly encountered in conventional post-hoc training pipelines for MLLMs. To further improve performance and scalability, InternVL3 incorporates variable visual position encoding (V2PE) to support extended multimodal contexts, employs advanced post-training techniques such as supervised fine-tuning (SFT) and mixed preference optimization (MPO), and adopts test-time scaling strategies alongside an optimized training infrastructure. Extensive empirical evaluations demonstrate that InternVL3 delivers superior performance across a wide range of multi-modal tasks. In particular, InternVL3-78B achieves a score of 72.2 on the MMMU benchmark, setting a new state-of-the-art among open-source MLLMs. Its capabilities remain highly competitive with leading proprietary models, including ChatGPT-4o, Claude 3.5 Sonnet, and Gemini 2.5 Pro, while also maintaining strong pure-language proficiency. In pursuit of open-science principles, we will publicly release both the training data and model weights to foster further research and development in next-generation MLLMs.*
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/internvl_architecture.png" alt="drawing" width="600"/>
<small> Overview of InternVL3 models architecture, which is the same as InternVL2.5. Taken from the <a href="https://huggingface.co/OpenGVLab/InternVL3-1B">original checkpoint.</a> </small>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/internvl_overview_performance.png" alt="drawing" width="600"/>
<small> Comparison of InternVL3 performance on OpenCompass against other SOTA VLLMs. Taken from the <a href="https://huggingface.co/OpenGVLab/InternVL3-1B">original checkpoint.</a> </small>
## Kernel integration
We integrate some kernels in the `transformers` library via the `kernels` package: https://github.com/huggingface/kernels
We start with some kernels in the Llama model, and we iterate to identify the best performance optimizations
* Llama Kernel integration by @MekkCyber in #37092
* [kernels] use original forward at compile time by @gante in #37604
## TP support
In the previous release, we've added [TP support](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_multi) in order to run distributed inference. However, this is not supported for all quantization methods. We are progressively adding support to it. Right now, only compressed-tensors, fp8 and fp8-fbgemm support it.
* Attention Quantization with FBGemm & TP by @MekkCyber in #37384
* Restrict & Explain tp_plan for FBgemm by @MekkCyber in #37404
## Quantization
### AutoRound
From the AutoRound contributors:
> AutoRound is an advanced quantization algorithm that delivers strong accuracy, even at 2-bit precision. It leverages sign gradient descent to fine-tune both rounding values and min-max clipping thresholds in just 200 steps ... More details here: https://github.com/intel/auto-round
* Add AutoRound quantization support by @wenhuach21 in #37393
### Quantization Documentation
We have added two new sections to better understand and get started with quantization:
- [Quantization concept](https://huggingface.co/docs/transformers/main/en/quantization/concept_guide)
- [Selecting a quantization method](https://huggingface.co/docs/transformers/main/en/quantization/selecting)
* Add "selecting a quantization method" doc by @DerekLiu35 in #37159
* Update quantization docs by @DerekLiu35 in #37439
### GGUF
We've added GGUF support to gemma3 family models.
* Add GGUF support to Gemma3 Text backbone by @Isotr0py in #37424
* Support loading Gemma3 QAT GGUF models by @Isotr0py in #37649
## Fast image processors
Most Vision Models and VLMs in Transformers can now benefit from fast image processors. By utilizing torch/torchvision functional transforms, these processors offer a substantial speedup when processing images compared to PiL/numpy functions, and support processing on both CPU and CUDA.
- See the list of updated models: https://github.com/huggingface/transformers/issues/36978
- Learn more about fast image processors: [Fast Image Processors](https://huggingface.co/docs/transformers/main/en/image_processors#fast-image-processors)
* Add Fast Image Processor for Perceiver by @rootonchair in #37176
* Add Fast Image Processor for Flava by @rootonchair in #37135
* Add Fast Image Processor for LayoutLMv2 by @rootonchair in #37203
* Add Fast Image Processor for LayoutLMv3 by @rootonchair in #37201
* Add Fast Image Processor for Donut by @rootonchair in #37081
* Add Fast LeViT Processor by @keetrap in #37154
* Add Fast Mobilenet-V2 Processor by @keetrap in #37113
* Add Fast owlvit Processor by @keetrap in #37164
* Add ImageProcessorFast to BiT processor by @Yann-CV in #37180
* Add Fast Yolos Processor by @keetrap in #37292
* Add Fast Chinese-CLIP Processor by @keetrap in #37012
* Add Fast Conditional-DETR Processor by @keetrap in #37071
* Fix broken add-fast-image-processor CLI by @yonigozlan in #37499
* Bridgetower fast image processor by @rootonchair in #37373
* Add Fast Grounding-Dino Processor by @keetrap in #37108
* Add Fast PVT Processor by @keetrap in #37204
* Add Fast Image Processor for PoolFormer by @rootonchair in #37182
* Add Fast Image Processor for MobileNetV1 by @dmdaksh in #37111
* Fast image processor for VitMatte added and bug in slow version fixed by @henrikm11 in #37616
* [Fast Processor] BEiT by @ariG23498 in #37005
* Add Swin2SR ImageProcessorFast by @thisisiron in #37169
* Add Fast Image Processor for vilt by @devxaitist in #37304
## AutoDocstring
The new `@auto_docstring` decorator makes it easier to add proper documentation when contributing a model without bloating the modeling code:
- [AutoDocstring] Based on inspect parsing of the signature by @ArthurZucker and @yonigozlan in https://github.com/huggingface/transformers/pull/33771
- More info on how to use `@auto_docstring`: [AutoDocstring](https://huggingface.co/docs/transformers/main/en/auto_docstring)
## Custom `generate`
We now support custom `generate` methods to be loaded from `model.generate`. The custom `generate` methods can be stored on the Hub, enabling quick distribution of experiments regarding new caches, decoding methods, heuristics, ...
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
# `generate` with `custom_generate` -> `generate` uses custom code
# note: calling the custom method prints "✨ using a custom generation method ✨"
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct", device_map="auto")
inputs = tokenizer(["The quick brown"], return_tensors="pt").to(model.device)
gen_out = model.generate(**inputs, custom_generate="transformers-community/custom_generate_example", trust_remote_code=True)
print(tokenizer.batch_decode(gen_out, skip_special_tokens=True))
```
You can find the docs [here](https://huggingface.co/docs/transformers/main/en/generation_strategies#custom-decoding-methods), and all custom generation methods by [searching for the `custom_generate` tag](https://huggingface.co/models?other=custom_generate).
* [generate] Run custom generation code from the Hub by @gante in #36405
### Chat CLI
The `transformers-cli` command is updated to be simpler and cleaner, specifically for its `chat` variant.
The following is now possible and recommended:
```
transformers chat Qwen/Qwen2.5-3B-Instruct
```
Additionally, almost any generate flag can now be passed as a positional argument, present and future, as opposed to being limited to a set of hardcoded flags, for example:
```
transformers chat Qwen/Qwen2.5-0.5B-Instruct do_sample=False max_new_tokens=10
```
* Transformers cli clean command by @LysandreJik in #37657
* [chat] clean code and add base help by @gante in #37892
* [`chat`] generate parameterization powered by `GenerationConfig` and UX-related changes by @gante in #38047
## Breaking changes
* 🚨 rm already deprecated pad_to_max_length arg by @itazap in #37617
* 🚨🚨🚨 Fix forward of Dinov2ForImageClassification for models with registers by @psandovalsegura in #37836
* 🔴 [VLM] Add base model without head by @zucchini-nlp in #37033
* 🔴 Video processors as a separate class by @zucchini-nlp in #35206
* 🚨🚨 Allow saving and loading multiple "raw" chat template files by @Rocketknight1 in #36588
* 🔴 Update CLIP vision attention to new attention interface by @molbap in #37498
* 🚨🚨 Setup -> setupclass conversion by @Rocketknight1 in #37282
## Deprecations
The agents folder is finally removed from `transformers` in favour of using `smolagents`.
* [agents] remove agents 🧹 by @gante in #37368
We are moving away from torch 2.0 as it has been released more than two years ago.
* byebye torch 2.0 by @ydshieh in #37277
## General bugfixes and improvements
* fix flex attn when optional args aren't passed by @winglian in #37327
* fix llama4 training by @hiyouga in #37319
* Fix deepspeed with quantization by @Cyrilvallez in #37324
* Fix `init empty weights` without accelerate by @Cyrilvallez in #37337
* Use Python 3.9 syntax in examples by @cyyever in #37279
* Fix torchao usage by @jiqing-feng in #37034
* enable 2 llama UT cases on xpu by @yao-matrix in #37126
* Avoid build crashes when torch.version.xpu doesn't exist and fix Llama4 processor tests by @Rocketknight1 in #37346
* fix derived berts `_init_weights` by @Cyrilvallez in #37341
* Update translation template by @stevhliu in #37294
* Remove HQQ from caching allocator warmup by @Cyrilvallez in #37347
* updated model card for Mistral by @NahieliV in #37156
* Update model-card for DINOv2 by @shubham0204 in #37104
* Update falcon mamba card by @ricalanis in #37253
* Update Model card for GPT2 by @ash-01xor in #37101
* Improvements in Gemma2 model card by @devesh-2002 in #37076
* Update Model Card for Jamba by @ParagEkbote in #37152
* Add bnb to the list of supported quantization methods for LLama4 by @MekkCyber in #37348
* Updated Model-card for donut by @Logeswaran7 in #37290
* Remove unnecessary attr assignment by @tugsbayasgalan in #36837
* more fixes for post-training llama4 by @winglian in #37329
* Fixing flex attention for torch=2.6.0 by @SalmanMohammadi in #37285
* Multiple llama4 fixe by @ArthurZucker in #37353
* Expose blip2qformer by @alex-jw-brooks in #37254
* convert float for yarn related arguments in rope_scaling by @bzantium in #37139
* Use Python 3.9 syntax in tests by @cyyever in #37343
* A bit of cleaning 🧹🧹 by @Cyrilvallez in #37215
* fix deepspeed job by @ydshieh in #37284
* Set vision config to None for Gemma 1B conversion by @RyanMullins in #37366
* [llama 4] dynamic rope decorator by @gante in #37365
* Skip non-selected experts for mixtral and qwen2_moe by @Coco58323 in #32429
* [core] remove `GenerationMixin` inheritance by default in `PreTrainedModel` by @gante in #37173
* prune LM Head for USD by @jmamou in #36695
* fix(qwen): fix shape error when using tp by @KimmiShi in #36947
* Preserve requires_grad in pre quantized model by @jerryzh168 in #37354
* Update composition flag usage by @zucchini-nlp in #36263
* fix: llama4 conversion script no_rope_layers by @jmkuebler in #37359
* update deepspeed docker by @SunMarc in #37371
* Fix warning message for PEFT models in text-generation pipeline #36783 by @falconlee236 in #36887
* Apply torchfix to replace deprecated functions: `_pytree._register_pytree_node` and `torch.cpu.amp.autocast` by @bzhong-solink in #37372
* Fix some failing AWQ tests by @DerekLiu35 in #37383
* the fix that did not get in by @ArthurZucker in #37370
* handle torch version edge cases by @winglian in #37399
* Add warning when failed to acquire other user's lock at model download by @manueldeprada in #37395
* Handle torch ver in flexattn by @Kh4L in #37400
* Fix Llama4 offset by @Cyrilvallez in #37414
* Offloaded hybrid cache for Llama4 by @Cyrilvallez in #37401
* mark llama4 as not supported with fa2 by @winglian in #37416
* update `kernels` to 0.4.3 by @ArthurZucker in #37419
* Send trainer/fsdp/deepspeed CI job reports to a single channel by @ydshieh in #37411
* from_pretrained should handle xpu case by @sywangyi in #37382
* Allow rocm systems to run these tests by @ivarflakstad in #37278
* use `rms_norm_eps` for the L2Norm for Llama4 by @ArthurZucker in #37418
* [chat-template] Unify tests and clean up 🧼 by @zucchini-nlp in #37275
* Fix new failure reports not including anything other than `tests/models/` by @ydshieh in #37415
* Quark Quantization gated repo by @MekkCyber in #37412
* Add image classifier donut & update loss calculation for all swins by @eljandoubi in #37224
* Correctly drop tokens in SwitchTransformer by @mario-aws in #37123
* Fix require_read_token by @MekkCyber in #37422
* fix: use mtime by default in Trainer._rotate_checkpoints with automatic fallback by @Jerry-Terrasse in #37260
* (Part 2) feat: allow for tp_size attr for tplizing the model by @kmehant in #37054
* Adding to self_comment_ci.yml by @MekkCyber in #37426
* [Feat] Support npu in modeling models by @duanjunwen in #37369
* Remove old code for PyTorch, Accelerator and tokenizers by @cyyever in #37234
* enhance require_deterministic_for_xpu by @yao-matrix in #37437
* Fixes: Corrects file path for CUDA kernels by @DonggeunYu in #37438
* Simplify soft dependencies and update the dummy-creation process by @LysandreJik in #36827
* Update-kernel-pin by @ArthurZucker in #37448
* Add moe kernels by @ArthurZucker in #37376
* Fix the test fetcher by @LysandreJik in #37452
* Remove triton mlp kernel, not compiling for some models by @MekkCyber in #37449
* [processor] clean up mulitmodal tests by @zucchini-nlp in #37362
* [Regression] Fix Quark quantized model loading after refactorization by @BowenBao in #37407
* prevent creating a view/leaf param for low rank optimizers w FSDP by @winglian in #37379
* Disable kernels for quantization by @MekkCyber in #37446
* Add weights_only=True to torch.load by @cyyever in #37062
* Add XPU case to is_torch_bf16_gpu_available by @cyyever in #37132
* nit: typing use Llama4TextConfig instead of Llama4Config by @kmehant in #37430
* Delete hubconf.py by @Rocketknight1 in #37455
* Fix typing issues with SigLip2 by @EricWiener in #37356
* fix: (llama4) fix no_split_modules to be picked up for fsdpv1 and v2 sharding by @kmehant in #37462
* make test_snowman_image_captioning pass on XPU, by sharing same atol w/ ROCM by @yao-matrix in #37480
* Remove `fsspec` dependency which isn't directly used by transformers by @cyyever in #37318
* Fix tests failed with gated repos. by @ydshieh in #37484
* [ci] fix doc builder by @zucchini-nlp in #37489
* Fixed broken links by @cypherpepe in #37466
* Detect and fix most `_init_weights()` issues - make it work for composite models by @Cyrilvallez in #37070
* [bug] deprecated deta load_cuda_kernel, MultiScaleDeformableAttention by @chagmgang in #37443
* Fix mask handling for flex attention in llama/gemma2/mistral/qwen2 by @flukeskywalker in #37381
* Fix wrong argparse type in modular checker script by @seven-mile in #37472
* Fixing gated repo issues by @MekkCyber in #37463
* [qwen-omni] fix processor by @zucchini-nlp in #37493
* Remove deprecation warning for `num_logits_to_keep` by @Cyrilvallez in #37149
* Don't auto-assign reviewers when the author is in HF by @Rocketknight1 in #37500
* Detect and use device context manager or global device in `from_pretrained` by @Cyrilvallez in #37216
* Change default value of `attn_temperature_tuning` by @gmlwns2000 in #37501
* Llama4: remove redundant transpose of router_logits by @pbelevich in #37468
* fix: Restore explicit error surfacing for unexpected hub exceptions by @manueldeprada in #37525
* Fix missing return type for MLCD docs by @qubvel in #37527
* fix and enhance pipeline_webserver.md by @yao-matrix in #36992
* VDR task guide by @merveenoyan in #37485
* Update VITS model card by @princepride in #37335
* Refactor ColPali model documentation by @Soum-Soum in #37309
* enable 5 cases on XPU by @yao-matrix in #37507
* enable several cases on XPU by @yao-matrix in #37516
* enable `test_offloaded_cache_implementation` on XPU by @yao-matrix in #37514
* Fix BitsAndBytesConfig JSON serialization in TrainingArguments by @astefanutti in #37520
* enable 3 mpt test cases on XPU by @yao-matrix in #37546
* enable 6 rt_detr_v2 cases on xpu by @yao-matrix in #37548
* More appropriate cuda warmup in resource-constrained hardware by @Cyrilvallez in #37550
* Fixes hqq by following a new path for bias parameter in pre_quantized models by @MekkCyber in #37530
* convert scale and zero to cuda when using HQQ backend by @phymhan in #37425
* Keep Quark loading through meta device by @BowenBao in #37538
* Refactor torchao docs by @MekkCyber in #37490
* add FlashAttentionKwargs and seq_idx to flat collator by @garrett361 in #36456
* docs(typo): Update ISSUES.md, fix a small typo by @<NOT FOUND> in #37542
* Fix device issue for tapas (with `as_tensor`) by @ydshieh in #37551
* Make Ignored Columns ValueError More Informative by @wbuchanan in #33299
* Fix TimesFm doc issue by @Cyrilvallez in #37552
* Run `test_can_load_with_global_device_set` using a subprocess by @ydshieh in #37553
* Fix pixel attention mask padding in smolvlm by @ManuelFay in #37497
* [vlm] adjust max length for special tokens by @zucchini-nlp in #37342
* Add EfficientNet Image PreProcessor by @zshn25 in #37055
* Fix Mamba2 Grouped SSD Support in the torch_forward Path by @cyang49 in #37533
* All models can be initialized on meta device by @Cyrilvallez in #37563
* [chat template] fix security vulnerability by @zucchini-nlp in #37523
* [qwen-vl] Standardize config by @zucchini-nlp in #37268
* [TimesFM] use the main revison instead of revision for integration test by @kashif in #37558
* Fix qwen2audio wanr -> warn by @alex-jw-brooks in #37559
* Small fix on context manager detection by @Cyrilvallez in #37562
* [phi4] update conversion by @zucchini-nlp in #37579
* docs: fix typo by @tonyksong in #37567
* Ensure positive warm-up size by @Cyrilvallez in #37581
* Update Phi4 converter by @Cyrilvallez in #37594
* Fix Quark quantization config by @MekkCyber in #37578
* Gaudi: Add the bf16 support for hpu by @yuanwu2017 in #37568
* Fix some GPU OOM after #37553 by @ydshieh in #37591
* remove _run_third_party_device_tests by @jiqing-feng in #37445
* [Bugfix] Fix flash-attention func param mismatch and softmax_scale default value mistake on Ascend NPU by @FightingZhen in #37575
* Flag SpeechT5 flaky test by @molbap in #37587
* enable 6 gemma2 cases on XPU by @yao-matrix in #37564
* enable 6 modeling cases on XPU by @yao-matrix in #37571
* [Gemma3] compile ✨ by @gante in #37447
* Model debugger upgrades by @molbap in #37391
* [VLMs] use only `xxx_token_id` for multimodal tokens by @zucchini-nlp in #37573
* fix 2 encoder_decoder issues on XPU by @yao-matrix in #37572
* fix issue that some example with no trainer use accelerator.end_train… by @we1559 in #37435
* Deprecate modeling_utils.py classes by @qubvel in #37298
* Fixing the example in generation strategy doc by @jeasinema in #37598
* chore: update model card for SigLIP by @saswatmeher in #37585
* Fix InternVL attention when using qk_norm (38B and 78B) by @yonigozlan in #37620
* Remove torchvision requirement from AutoImageProcessor by @LysandreJik in #37457
* Allow Exclusion of Input IDs from RepetitionPenaltyLogitsProcessor by @alex-jw-brooks in #37625
* fix link in kv_cache.md by @manueldeprada in #37652
* Update longformer.md by @JihadHammoud02 in #37622
* Refactor phi doc by @JihadHammoud02 in #37583
* Fix Qwen2.5-Omni get_chunked_index chunking functionality by @imkero in #37631
* [fix] make legacy bnb code work by @cyr0930 in #37331
* [fix gemma] Set default value for output_attentions parameter in Gemma2 and Gemma… by @chenin-wang in #37633
* Restructure torchao quantization examples by @jerryzh168 in #37592
* Add test to ensure unknown exceptions reraising in utils/hub.py::cached_files() by @manueldeprada in #37651
* [test] update `test_past_key_values_format` by @gante in #37614
* [tests] Stricter generate + compilation test -- no recompilations allowed by @gante in #37629
* Fix ValueError when eval_do_concat_batches=False with examples by @jeffhataws in #37621
* Fixes #37219 : RecurrentGemma crashes for inputs longer than sliding window length by @manueldeprada in #37613
* Introduce GradientCheckpointingLayer by @qubvel in #37223
* [qwen-omni] fix training by @zucchini-nlp in #37517
* Fix duplicated weights in fp8 quantization by @Cyrilvallez in #37667
* Correct warm-up with fp8 by @Cyrilvallez in #37670
* Fixing quantization tests by @MekkCyber in #37650
* Fix autoround docs by @SunMarc in #37675
* Fix no_split_modules for Llama4 pretrained models by @astefanutti in #37673
* Refactor bitsandbytes doc by @MekkCyber in #37668
* enable mllama cases on xpu by @yao-matrix in #37644
* enable 6 granite cases on xpu by @yao-matrix in #37569
* [cleanup] remove old scripts in `/scripts` 🧹 🧹 by @gante in #37676
* [docs] only build `en` docs in push CI by @gante in #37677
* typo update in the parameter name by @LunaticMaestro in #37655
* [Docs] Move models to appropriate section by @NielsRogge in #37338
* Add counters for dataset classes by @jiangyukunok in #37636
* enable blip2 and emu3 cases on XPU by @yao-matrix in #37662
* 🌐 [i18n-KO] Translated `siglip.md` to Korean by @devxaitist in #37145
* Updated model card for mbart and mbart50 by @Vishesh-Mistry in #37619
* fix: remove classmethod from `Qwen2_5OmniConfig.get_text_config` by @shahruk10 in #37690
* enable cpu offloading for Bark on xpu by @yao-matrix in #37599
* Pin torch == 2.6 on PR CI docker images for now by @ydshieh in #37695
* [cleanup] remove `/model_cards` 🧹 🧹 by @gante in #37685
* Add maintainers for ROCm/Intel XPU/Ascend NPU by @Rocketknight1 in #37678
* [CI] add back `sacrebleu` (and document why) by @gante in #37700
* TransfoXL is deprecated, don't keep it in tested examples! by @Rocketknight1 in #37707
* [internvl] fix chat template by @zucchini-nlp in #37656
* Qwen 2.5 Omni: apply video defaults by @pcuenca in #37660
* [tests, `qwen2_5_omni`] fix flaky tests by @gante in #37721
* Process inputs directly in apply_chat_template in image-text-to-text pipeline by @yonigozlan in #35616
* enable 4 test_trainer cases on XPU by @yao-matrix in #37645
* Fix Aria tests by @jiqing-feng in #37444
* Fix inference bugs in Qwen2.5 Omni by @BakerBunker in #37701
* Fix torchao doc examples by @MekkCyber in #37697
* [tests] fix `test_nemotron_8b_generation_sdpa` by @faaany in #37665
* Make sure torch_is_available before using torch.distributed by @MekkCyber in #37693
* [VLMs] fix flash-attention tests by @zucchini-nlp in #37603
* fix: learning_rate logged as tensor causing save issue with deepspeed by @NanoCode012 in #37704
* Fix `embeds_to_talker` device in Qwen2.5-Omni by @BakerBunker in #37739
* Correctly raise errors when downloading tokenizer files by @Cyrilvallez in #37740
* [performance_optim] define flash attention mask on NPU device directly by @FightingZhen in #37698
* Skip all `AriaForConditionalGenerationIntegrationTest` on `T4` by @ydshieh in #37746
* Update `MllamaForConditionalGenerationIntegrationTest` by @ydshieh in #37750
* Expand quantized data type support for tensor parallelism by @amd-xiaoyu12 in #37719
* [cache] fix `HybridCache` init when `device` is passed by @gante in #37718
* `GPT2Model` StaticCache support by @poedator in #35761
* [generate] skip compilation on cpu offload by @gante in #37709
* updated hidden_features for FlaxDinov2SwiGLUFFN in Dinov2 by @premmurugan229 in #37747
* Fix qwen2_5 get_rope_index tensor device locations by @rphmeier in #37597
* [generate] fix default autocompile case on gpu by @gante in #37756
* Fix wrong input shapes in doc-string of models by @kkew3 in #37729
* Refine parameter type annotations by @flashJd in #37666
* Fix tied weight loading with TP and loading sub state_dicts by @Cyrilvallez in #37758
* Fix load of rng state for resuming training from checkpoint by @winglian in #37162
* Fix typos in comments by @co63oc in #37694
* [deps] pin max `torch` version by @gante in #37760
* Guard DeepSpeed imports by @lewtun in #37755
* Fix auto-round hfoption by @MekkCyber in #37759
* Update model card for Gemma by @afafelwafi in #37674
* 🌐 [i18n-KO] Translated `roberta.md` to Korean by @garongkim in #37069
* [causal mask] fix preparation with multi-gpu by @zucchini-nlp in #37612
* unpin pytest<8 by @ydshieh in #37768
* Align gpt2 mask preparation to #37612 by @Cyrilvallez in #37787
* Fix typos in strings and comments by @co63oc in #37784
* Fix tensor parallel with non-floating dtypes by @Cyrilvallez in #37790
* Force torch>=2.6 with torch.load to avoid vulnerability issue by @Cyrilvallez in #37785
* fix mpt test of different outputs from cuda by @jiqing-feng in #37691
* [i18n-KO] Translated `keypoint_detection.md` to Korean by @rlaalsrl0922 in #36649
* chore: update SigLIP2 model card by @saswatmeher in #37624
* fix performance issue in convert_ids_to_tokens by @martin-harmonic in #37773
* Fix error message in `hub.py` by @srai9 in #37796
* Gemma3 is Torch Exportable by @guangy10 in #37728
* Fix the fsdp config cannot work issue. by @yuanwu2017 in #37549
* Define warmup allocator for torchao quantization by @MekkCyber in #37764
* Fix typos in strings and comments by @co63oc in #37799
* [doc] fix the code examples in qwen doc by @jiangyukunok in #37803
* Fix: Correct tensor shape comment in Mamba modeling by @ShadyPi in #37801
* [RT-DETR] Improve docs by @NielsRogge in #37814
* FIX: Faulty PEFT tests by @BenjaminBossan in #37757
* Add Optional to remaining types by @cyyever in #37808
* Fix error of HPU TP by @yuanwu2017 in #37782
* change XLA deprecated api by @SunMarc in #37741
* [config] revert #37603 by @zucchini-nlp in #37821
* [modular] Fix the prefix-based renaming if the old and new model share a common name suffix by @Cyrilvallez in #37829
* [tests] fix flaky pattern in `test_generate_continue_from_past_key_values` by @gante in #37724
* [tests] reorganize cache tests and clean memory between tests by @gante in #37684
* Revert change that breaks on Torch 2.1 by @Rocketknight1 in #37531
* Fix check of unecessary packages (issue #37626) by @HichTala in #37825
* Fix cache get item return type hints by @ChengLyu in #37847
* Fix Bitnet tokenizer in pipeline by @MekkCyber in #37861
* docs: Details for ambigious channel dimension assignment by @yaner-here in #37600
* Processor chat template: pass custom kwargs by @pcuenca in #37852
* Add Intel Gaudi doc by @regisss in #37855
* 🌐 [i18n-KO] Translated `electra.md` to Korean by @Kim-Ju-won in #36763
* Update modeling_llama4.py by @monk1337 in #37841
* Skip is_flaky tests in the CI by @Rocketknight1 in #37723
* Allow override inputs to export recipe by @guangy10 in #37508
* enable internvl UTs on XPU by @yao-matrix in #37779
* Llama Guard updates by @pcuenca in #37872
* update Clean_up_tokenization_spaces typos. by @zhanluxianshen in #37865
* fix error for _register_pytree_node in torch2.1.0 and fix bf16 assertion in xpu and npu by @jiaqiw09 in #37839
* make sure lr is not a tensor by @winglian in #37881
* Fix qwen2-vl-docs. by @zhanluxianshen in #37879
* uniformize kwargs for VisionTextDualEncoder by @tibor-reiss in #34563
* Fix: reassign in qwen3 moe model by @linkedlist771 in #37848
* update comment in image_processing_base.py to reference image_process… by @arjunaskykok in #37864
* Support FlaxPreTrainedModel to load model checkpoint from local subfolder safetensors by @Melody-coder923 in #37732
* [tests] Test all cache implementations by @gante in #37873
* [tests] reset logs in `torch.compile` test by @gante in #37894
* Fix Qwen3 tp plan with FP8 by @MekkCyber in #37871
* Enhance documentation to explain chat-based few-shot prompting by @MostHumble in #37828
* Support `AOPerModuleConfig` and `include_embedding` by @jerryzh168 in #37802
* fixed gemma3 collection path pointing to llama 2 collection. by @dmgcsilva in #37899
* Fix typos in strings and comments by @co63oc in #37910
* Improve performance of `load_state_dict` by @woct0rdho in #37902
* 🌐 [i18n-KO] Translated `gpu_selection.md` to Korean by @nsbg in #36757
* Add usage example for DINOv2 by @baldassarreFe in #37398
* Aligning modling code for GPT2 to work with vLLM (fallback) by @ariG23498 in #36934
* Break weight tying when quantizing input embedding by @jerryzh168 in #37905
* [docs] logits docstring by @gante in #37929
* [D-FINE] Update names by @NielsRogge in #37957
* More fault tolerant notification service by @ivarflakstad in #37924
* [core] reuse unused reserved cuda memory when loading models by @gante in #37920
* Use T4 single GPU runner with more CPU RAM by @ydshieh in #37961
* [generate] Fix `vocab_size` access for multimodal models by @kurzdev in #37937
* Fix incorrect type annotation in get_auxiliary_logits by @Tanuj-rai in #37955
* [Ready to Merge][HFQuantizer] Squelch pydantic warnings by @kylesayrs in #37726
* Add GraniteMoeHybrid support for 4.0 by @Ssukriti in #37658
* add xpu memory check by @faaany in #37969
* [tests] Smaller model in slow cache tests by @gante in #37922
* [llava] one pixel is missing from padding when length is odd by @cyr0930 in #37819
* add job links to new model failure report by @ydshieh in #37973
* fix docs serving typos. by @zhanluxianshen in #37936
* Small typo lines 47 and 199 perf_infer_gpu_one.md by @nlhmnlhmnlhm in #37938
* Fix typos by @omahs in #37978
* [speech2text] fix init of sinusoidal embeddings by @gante in #37931
* Fix typo by @lkm2835 in #37964
* enable xpu in test_trainer by @yao-matrix in #37774
* fix FSDP + torch.compile bug when saving pretrained model by @Joaquinecc in #37725
* Enable granite speech 3.3 tests by @alex-jw-brooks in #37560
* Fix donut backtracking by @Rocketknight1 in #37788
* Fix Qwen models export with torch 2.7 by @guangy10 in #37985
* [offload] respect `max_memory` argument when factoring in unused reserved memory by @gante in #37982
* make aya vision 5 integration tests pass on xpu by @yao-matrix in #37990
* [chat template] separate jinja logic from tokenizers by @zucchini-nlp in #37602
* remove duplicate code by @kaixuanliu in #37991
* Add a check to import_utils.py to allow for use of faiss_gpu installation by @Fiona-Waters in #37997
* [CSM] tiny fix on generation by @eustlb in #38001
* Fix `pad` image transform for batched inputs by @sebasv in #37544
* Add ALL_ATTENTION_FUNCTIONS compatibility for Pixtral model by @uminaty in #37960
* Enable RUF013 to enforce optional typing by @cyyever in #37266
* Fix `Optional` typing by @qubvel in #38018
* Print commit SHA on slack message for new model notification. by @ydshieh in #38019
* [CI] remove duplicated message on GH comment to run slow tests by @gante in #37970
* [caches] Raise exception on offloaded static caches + multi device by @gante in #37974
* Skip `test_push_to_hub_with_saves_each_epoch` for now by @ydshieh in #38022
* Fix incorrect installation instructions (for issue #37476) by @Zephyr271828 in #37640
* Fix wording in `torchscript.md` by @Madghostek in #38004
* [VLMs] support attention backends by @zucchini-nlp in #37576
* make `test_speculative_decoding_non_distil` device-agnostic by @faaany in #38010
* enable mamba2 integration cases on xpu by @yao-matrix in #38006
* update bnb tests by @jiqing-feng in #38011
* [`AutoDocstring`] Based on inspect parsing of the signature by @ArthurZucker and @yonigozlan in #33771
* fix document masking for chunked attention by @winglian in #37429
* make mistral3 pass on xpu by @yao-matrix in #37882
* enable utils test cases on XPU by @yao-matrix in #38005
* [Temporary] Log some information in some pytest/pluggy internal places by @ydshieh in #37996
* Trigger CircleCI via GitHub Actions when `ready for review` by @ydshieh in #37885
* Disable `Trigger CircleCI via GitHub Actions when `ready for review` by @ydshieh in #38038
* Do not erase a cache_position passed explicitly to generate(), if there is one by @FremyCompany in #37986
* Support for version spec in requires & arbitrary mismatching depths across folders by @LysandreJik in #37854
* Re-Enable `Trigger CircleCI via GitHub Actions when "ready for review" by @ydshieh in #37885)`
* Fix reduce-labels in BEIT Fast Image Processor by @simonreise in #38042
* Fix cache update! by @Cyrilvallez in #38046
* Fix linalg.norm for CovnNextV2 by @qubvel in #38015
* enable generation fsdp/utils cases on XPU by @yao-matrix in #38009
* fix(conversion): Fix size mismatch error during TF->PT model loading by @arjunaskykok in #38014
* [VLM] fix loading issues by @zucchini-nlp in #38051
* Fix OneFormer integration test by @qubvel in #38016
* Add AMD expectation to test_gpt2_sample by @ivarflakstad in #38079
* docs: fix md style by @imba-tjd in #38057
* Fix mt5 test on AMD devices by @ivarflakstad in #38081
* chore(qwen2): display warning log only when sliding window attention … by @edwardzjl in #36316
* fix the inconsist docstring in apply_chat_template by @lenijwp in #38069
* Fix tot update in trainer by @efsotr in #37923
* update seed_worker to set seed based on worker_id and rank by @gathierry in #37980
* uninstall `kernels` from docker images by @ydshieh in #38083
* Refactor image processor phi4 by @yonigozlan in #36976
* update `require_read_token` by @ydshieh in #38093
* add timeout for downloading the `librispeech_asr` dataset by @faaany in #38073
* fix: Propagate `lr_scheduler_kwargs` options to create LR Scheduler when LayerWiseDummyOptimizer is used by @BlackNoodle in #34559
* Disable report callbacks for certain training tests by @ivarflakstad in #38088
* [smolvlm] skip the test by @zucchini-nlp in #38099
* Fix bug in prefill_chunk_size that ignores disable_compile flag by @xmarva in #38067
* Fix `past_key_values` type hint in model output types by @ChengLyu in #37953
* [bug] fix llava processor to calculate unpadding size correctly by @cyr0930 in #37988
* fix `check_bad commit.py` gives wrong results by @ydshieh in #38107
* Fix InternVL interpolate_pos_encoding and add to video_processing_auto by @yonigozlan in #38092
* [CSM] update test for t4 runners by @eustlb in #38110
* Add style bot by @SunMarc in #38102
* Fix description and formatting errors in code docs by @bilibili12433014 in #38074
* enable finegrained_fp8 and granite_speech cases on XPU by @yao-matrix in #38036
* [video processor] fix tests by @zucchini-nlp in #38104
* Fix temporal padding in Qwen2VLImageProcessor when the number of frames is not divisible by temporal_patch_size by @ritwickchaudhry in #38076
* Fix auto batch size finder test by @ivarflakstad in #38125
* Add config validation and style tweaks by @Kirire in #37589
* Update trainer.md by @guspuffygit in #38113
* [docs] add uv installation instructions for source builds by @arjunaskykok in #37968
* Add `manueldeprada` to `run_slow` whitelist by @manueldeprada in #38126
* enable d_fine finetuning properly by @SangbumChoi in #37962
* Fix incorrect attention mask truncate in WhisperFlashAttention2 by @OliBomby in #36477
* [Qwen3] Qwen3 MoE add tp plan for expert mlps by @hgt312 in #38135
* enable csm integration cases on xpu, all passed by @yao-matrix in #38140
* Remove head mask in generative models by @zucchini-nlp in #35786
* Hotfix: Flash Attention 2 support in Pixtral by @uminaty in #38146
* enable trainer test cases on xpu by @yao-matrix in #38138
* disable deepspeed when setting up fake trainer by @winglian in #38101
* Omit creation of positional IDs within ESM if applicable by @simonlevine in #38089
* [FIX] Save speed metrics to logs by @pavelgein in #38136
* enable autoround cases on XPU by @yao-matrix in #38167
* Include output embedding as well with `include_embedding` flag by @jerryzh168 in #37935
* Fix Qwen2.5 Omni `SinusoidsPositionEmbedding` precision by @BakerBunker in #38151
* Add optional RMSNorm support to BitNet quantization (config + layers) by @Codys12 in #38087
* [VLMs] add helpers to get multimodal encodings by @zucchini-nlp in #37743
* Bart: new cache format by @zucchini-nlp in #35314
* clean autoawq cases on xpu by @yao-matrix in #38163
* Disable `Trigger CircleCI by ready for review` by @ydshieh in #38171
* Disable `convert to draft` workflow by @ydshieh in #38177
* remove some commands from `fetch_tests` CircleCI job by @ydshieh in #38176
* Feat: add warnings for unused keys and rules in tensor parallel by @S1ro1 in #37893
* [ESM] Add flash-attention-2 backend for ESM-2 by @pstjohn in #38023
* Add args support for fast image processors by @yonigozlan in #37018
* Fix import torchao.prototype.low_bit_optim since torchao v0.11 by @baptxste in #38174
* fix bug in distributed loss test by @techkang in #38166
* [tests] remove `test_sdpa_equivalence` (redundant) by @gante in #37911
* Add Granite Speech Support by @alex-jw-brooks in #36801
* Add glm4 by @ArthurZucker in #37388
* Add Qwen2.5-Omni by @BakerBunker in #36752
* Add MLCD model by @tanhuajie in #36182
* Add TimesFM Time Series Forecasting Model by @jinan-zhou in #34082
* Add Janus model by @yaswanth19 in #36053
* Add InternVL (2.5 MPO) by @yonigozlan in #35968
* Add Bitnet model by @MekkCyber in #37742
* Samhq model addition by @sushmanthreddy in #35147
* Add D-FINE Model into Transformers by @VladOS95-cyber in #36261
* Add CSM model by @eustlb in #36719
## Significant community contributions
The following contributors have made significant changes to the library over the last release:
* @cyyever
* Use Python 3.9 syntax in examples (#37279)
* Use Python 3.9 syntax in tests (#37343)
* Remove old code for PyTorch, Accelerator and tokenizers (#37234)
* Add weights_only=True to torch.load (#37062)
* Add XPU case to is_torch_bf16_gpu_available (#37132)
* Remove `fsspec` dependency which isn't directly used by transformers (#37318)
* Add Optional to remaining types (#37808)
* Enable RUF013 to enforce optional typing (#37266)
* @yao-matrix
* enable 2 llama UT cases on xpu (#37126)
* enhance require_deterministic_for_xpu (#37437)
* make test_snowman_image_captioning pass on XPU, by sharing same atol w/ ROCM (#37480)
* fix and enhance pipeline_webserver.md (#36992)
* enable 5 cases on XPU (#37507)
* enable several cases on XPU (#37516)
* enable `test_offloaded_cache_implementation` on XPU (#37514)
* enable 3 mpt test cases on XPU (#37546)
* enable 6 rt_detr_v2 cases on xpu (#37548)
* enable 6 gemma2 cases on XPU (#37564)
* enable 6 modeling cases on XPU (#37571)
* fix 2 encoder_decoder issues on XPU (#37572)
* enable mllama cases on xpu (#37644)
* enable 6 granite cases on xpu (#37569)
* enable blip2 and emu3 cases on XPU (#37662)
* enable cpu offloading for Bark on xpu (#37599)
* enable 4 test_trainer cases on XPU (#37645)
* enable internvl UTs on XPU (#37779)
* enable xpu in test_trainer (#37774)
* make aya vision 5 integration tests pass on xpu (#37990)
* enable mamba2 integration cases on xpu (#38006)
* make mistral3 pass on xpu (#37882)
* enable utils test cases on XPU (#38005)
* enable generation fsdp/utils cases on XPU (#38009)
* enable finegrained_fp8 and granite_speech cases on XPU (#38036)
* enable csm integration cases on xpu, all passed (#38140)
* enable trainer test cases on xpu (#38138)
* enable autoround cases on XPU (#38167)
* clean autoawq cases on xpu (#38163)
* @alex-jw-brooks
* Expose blip2qformer (#37254)
* Add Granite Speech Support (#36801)
* Fix qwen2audio wanr -> warn (#37559)
* Allow Exclusion of Input IDs from RepetitionPenaltyLogitsProcessor (#37625)
* Enable granite speech 3.3 tests (#37560)
* @BakerBunker
* Add Qwen2.5-Omni (#36752)
* Fix inference bugs in Qwen2.5 Omni (#37701)
* Fix `embeds_to_talker` device in Qwen2.5-Omni (#37739)
* Fix Qwen2.5 Omni `SinusoidsPositionEmbedding` precision (#38151)
* @rootonchair
* Add Fast Image Processor for Perceiver (#37176)
* Add Fast Image Processor for Flava (#37135)
* Add Fast Image Processor for LayoutLMv2 (#37203)
* Add Fast Image Processor for LayoutLMv3 (#37201)
* Add Fast Image Processor for Donut (#37081)
* Bridgetower fast image processor (#37373)
* Add Fast Image Processor for PoolFormer (#37182)
* @flukeskywalker
* Fix mask handling for flex attention in llama/gemma2/mistral/qwen2 (#37381)
* @keetrap
* Add Fast LeViT Processor (#37154)
* Add Fast Mobilenet-V2 Processor (#37113)
* Add Fast owlvit Processor (#37164)
* Add Fast Yolos Processor (#37292)
* Add Fast Chinese-CLIP Processor (#37012)
* Add Fast Conditional-DETR Processor (#37071)
* Add Fast Grounding-Dino Processor (#37108)
* Add Fast PVT Processor (#37204)
* @tanhuajie
* Add MLCD model (#36182)
* @jinan-zhou
* Add TimesFM Time Series Forecasting Model (#34082)
* @yaswanth19
* Add Janus model (#36053)
* @saswatmeher
* chore: update model card for SigLIP (#37585)
* chore: update SigLIP2 model card (#37624)
* @cyr0930
* [fix] make legacy bnb code work (#37331)
* [llava] one pixel is missing from padding when length is odd (#37819)
* [bug] fix llava processor to calculate unpadding size correctly (#37988)
* @wenhuach21
* Add AutoRound quantization support (#37393)
* @devxaitist
* 🌐 [i18n-KO] Translated `siglip.md` to Korean (#37145)
* Add Fast Image Processor for vilt (#37304)
* @co63oc
* Fix typos in comments (#37694)
* Fix typos in strings and comments (#37784)
* Fix typos in strings and comments (#37799)
* Fix typos in strings and comments (#37910)
* @guangy10
* Gemma3 is Torch Exportable (#37728)
* Allow override inputs to export recipe (#37508)
* Fix Qwen models export with torch 2.7 (#37985)
* @sushmanthreddy
* Samhq model addition (#35147)
* @VladOS95-cyber
* Add D-FINE Model into Transformers (#36261)
* @Ssukriti
* Add GraniteMoeHybrid support for 4.0 (#37658)
| 2025-05-21T00:24:22.023993 |
huggingface | transformers | v4.52.2 | Patch release v4.52.2 | 2025-05-21T13:26:35+00:00 | # Patch release v4.52.2
We had to revert #37877 because of a missing flag that was overriding the device map. We re-introduced the changes because they allow native 3D parallel training in Transformers. Sorry everyone for the troubles! 🤗
* Clearer error on import failure (#38257) by @LysandreJik
* Verified tp plan should not be NONE (#38255) by @NouamaneTazi and @ArthurZucker | 2025-05-22T00:23:56.695324 |
huggingface | transformers | v4.52.3 | Patch release v4.52.3 | 2025-05-22T14:30:01+00:00 | # Patch release v4.52.3
We had to protect the imports again, a series of bad events.
Here are the two prs for the patch:
- Fix tp error when torch distributed is already initialized (#38294) by @SunMarc
- Protect ParallelInterface (#38262) by @ArthurZucker and @LysandreJik | 2025-05-23T00:24:19.831858 |
huggingface | transformers | v4.52.4 | Patch release: v4.52.4 | 2025-05-30T09:15:08+00:00 | The following commits are included in that patch release:
- [qwen-vl] Look for vocab size in text config (#38372)
- Fix convert to original state dict for VLMs (#38385)
- [video utils] group and reorder by number of frames (#38374)
- [paligemma] fix processor with suffix (#38365)
- Protect get_default_device for torch<2.3 (#38376)
- [OPT] Fix attention scaling (#38290) | 2025-05-31T00:23:38.325543 |
huggingface | transformers | v4.52.4-ColQwen2-preview | ColQwen2 (based on v4.52.4) | 2025-06-02T13:03:59+00:00 | A new model is added to transformers: ColQwen2
It is added on top of the v4.52.4 release, and can be installed from the following tag: `v4.52.4-ColQwen2-preview`.
In order to install this version, please install with the following command:
```
pip install git+https://github.com/huggingface/[email protected]
```
If fixes are needed, they will be applied to this release; this installation may therefore be considered as stable and improving.
As the tag implies, this tag is a **_preview_** of the ColQwen2 model. This tag is a tagged version of the `main` branch and does not follow semantic versioning. This model will be included in the next minor release: `v4.53.0`.
## ColQwen2

[ColQwen2](https://doi.org/10.48550/arXiv.2407.01449) is a variant of the [ColPali](https://github.com/huggingface/transformers/blob/c72ba6944171e2e6dd4f4a93d61b24fa52b718f5/docs/source/en/model_doc/colpali) model designed to retrieve documents by analyzing their visual features. Unlike traditional systems that rely heavily on text extraction and OCR, ColQwen2 treats each page as an image. It uses the [Qwen2-VL](https://github.com/huggingface/transformers/blob/c72ba6944171e2e6dd4f4a93d61b24fa52b718f5/docs/source/en/model_doc/qwen2_vl) backbone to capture not only text, but also the layout, tables, charts, and other visual elements to create detailed multi-vector embeddings that can be used for retrieval by computing pairwise late interaction similarity scores. This offers a more comprehensive understanding of documents and enables more efficient and accurate retrieval.
## Usage example
ColQwen2 can be found on the [Huggingface Hub](https://huggingface.co/models?other=colqwen2).
```python
import requests
import torch
from PIL import Image
from transformers import ColQwen2ForRetrieval, ColQwen2Processor
from transformers.utils.import_utils import is_flash_attn_2_available
# Load the model and the processor
model_name = "vidore/colqwen2-v1.0-hf"
model = ColQwen2ForRetrieval.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto", # "cpu", "cuda", or "mps" for Apple Silicon
attn_implementation="flash_attention_2" if is_flash_attn_2_available() else "sdpa",
)
processor = ColQwen2Processor.from_pretrained(model_name)
# The document page screenshots from your corpus
url1 = "https://upload.wikimedia.org/wikipedia/commons/8/89/US-original-Declaration-1776.jpg"
url2 = "https://upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Romeoandjuliet1597.jpg/500px-Romeoandjuliet1597.jpg"
images = [
Image.open(requests.get(url1, stream=True).raw),
Image.open(requests.get(url2, stream=True).raw),
]
# The queries you want to retrieve documents for
queries = [
"When was the United States Declaration of Independence proclaimed?",
"Who printed the edition of Romeo and Juliet?",
]
# Process the inputs
inputs_images = processor(images=images).to(model.device)
inputs_text = processor(text=queries).to(model.device)
# Forward pass
with torch.no_grad():
image_embeddings = model(**inputs_images).embeddings
query_embeddings = model(**inputs_text).embeddings
# Score the queries against the images
scores = processor.score_retrieval(query_embeddings, image_embeddings)
print("Retrieval scores (query x image):")
print(scores)
```
If you have issue with loading the images with PIL, you can use the following code to create dummy images:
```python
images = [
Image.new("RGB", (128, 128), color="white"),
Image.new("RGB", (64, 32), color="black"),
]
```
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.
The example below uses [bitsandbytes](../quantization/bitsandbytes.md) to quantize the weights to int4.
```python
import requests
import torch
from PIL import Image
from transformers import BitsAndBytesConfig, ColQwen2ForRetrieval, ColQwen2Processor
model_name = "vidore/colqwen2-v1.0-hf"
# 4-bit quantization configuration
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.float16,
)
model = ColQwen2ForRetrieval.from_pretrained(
model_name,
quantization_config=bnb_config,
device_map="cuda",
).eval()
processor = ColQwen2Processor.from_pretrained(model_name)
url1 = "https://upload.wikimedia.org/wikipedia/commons/8/89/US-original-Declaration-1776.jpg"
url2 = "https://upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Romeoandjuliet1597.jpg/500px-Romeoandjuliet1597.jpg"
images = [
Image.open(requests.get(url1, stream=True).raw),
Image.open(requests.get(url2, stream=True).raw),
]
queries = [
"When was the United States Declaration of Independence proclaimed?",
"Who printed the edition of Romeo and Juliet?",
]
# Process the inputs
inputs_images = processor(images=images, return_tensors="pt").to(model.device)
inputs_text = processor(text=queries, return_tensors="pt").to(model.device)
# Forward pass
with torch.no_grad():
image_embeddings = model(**inputs_images).embeddings
query_embeddings = model(**inputs_text).embeddings
# Score the queries against the images
scores = processor.score_retrieval(query_embeddings, image_embeddings)
print("Retrieval scores (query x image):")
print(scores)
``` | 2025-06-03T00:24:46.050844 |
huggingface | transformers | v4.52.4-VJEPA-2-preview | V-JEPA 2 (based on v4.52.4) | 2025-06-11T14:55:13+00:00 | A new model is added to transformers: V-JEPA 2
It is added on top of the v4.52.4 release, and can be installed from the following tag: `v4.52.4-VJEPA-2-preview`.
In order to install this version, please install with the following command:
```
pip install git+https://github.com/huggingface/[email protected]
```
If fixes are needed, they will be applied to this release; this installation may therefore be considered as stable and improving.
As the tag implies, this tag is a **_preview_** of the VJEPA-2 model. This tag is a tagged version of the `main` branch and does not follow semantic versioning. This model will be included in the next minor release: `v4.53.0`.
## VJEPA-2
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vjepa.gif" alt="drawing" width="600"/>
</div>
V-JEPA 2 is a self-supervised approach to training video encoders developed by FAIR, Meta. Using internet-scale video data, V-JEPA 2 attains state-of-the-art performance on motion understanding and human action anticipation tasks. V-JEPA 2-AC is a latent action-conditioned world model post-trained from V-JEPA 2 (using a small amount of robot trajectory interaction data) that solves robot manipulation tasks without environment-specific data collection or task-specific training or calibration.
The abstract from the technical report is the following:
## Usage example
VJEPA-2 can be found on the [Huggingface Hub](https://huggingface.co/models?other=vjepa2). V-JEPA 2 is intended to represent any video (and image) to perform video classification, retrieval, or as a video encoder for VLMs.
The snippet below shows how to load the V-JEPA 2 model using the `AutoModel` class.
```py
import torch
from torchcodec.decoders import VideoDecoder
import numpy as np
processor = AutoVideoProcessor.from_pretrained("facebook/vjepa2-vitl-fpc64-256")
model = AutoModel.from_pretrained(
"facebook/vjepa2-vitl-fpc64-256",
torch_dtype=torch.float16,
device_map="auto",
attn_implementation="sdpa"
)
video_url = "https://huggingface.co/datasets/nateraw/kinetics-mini/resolve/main/val/archery/-Qz25rXdMjE_000014_000024.mp4"
vr = VideoDecoder(video_url)
frame_idx = np.arange(0, 64) # choosing some frames. here, you can define more complex sampling strategy
video = vr.get_frames_at(indices=frame_idx).data # T x C x H x W
video = processor(video, return_tensors="pt").to(model.device)
outputs = model(**video)
# V-JEPA 2 encoder outputs, same as calling `model.get_vision_features()`
encoder_outputs = outputs.last_hidden_state
# V-JEPA 2 predictor outputs
predictor_outputs = outputs.predictor_output.last_hidden_state
```
| 2025-06-12T00:24:17.251406 |
huggingface | transformers | v4.52.4-Kyutai-STT-preview | Kyutai-STT (based on v4.52.4) | 2025-06-24T16:02:59+00:00 | A new model is added to transformers: Kyutai-STT
It is added on top of the v4.52.4 release, and can be installed from the following tag: `v4.52.4-Kyutai-STT-preview`.
In order to install this version, please install with the following command:
```
pip install git+https://github.com/huggingface/[email protected]
```
If fixes are needed, they will be applied to this release; this installation may therefore be considered as stable and improving.
As the tag implies, this tag is a **_preview_** of the Kyutai-STT model. This tag is a tagged version of the `main` branch and does not follow semantic versioning. This model will be included in the next minor release: `v4.53.0`.
## Kyutai-STT
<img src="https://huggingface.co/datasets/eustlb/documentation-images/resolve/main/kyutai_stt.png"/>
Kyutai STT is a speech-to-text model architecture based on the [Mimi codec](https://huggingface.co/docs/transformers/en/model_doc/mimi), which encodes audio into discrete tokens in a streaming fashion, and a [Moshi-like](https://huggingface.co/docs/transformers/en/model_doc/moshi) autoregressive decoder. Kyutai’s lab has released two model checkpoints:
- [kyutai/stt-1b-en_fr](https://huggingface.co/kyutai/stt-1b-en_fr): a 1B-parameter model capable of transcribing both English and French
- [kyutai/stt-2.6b-en](https://huggingface.co/kyutai/stt-2.6b-en): a 2.6B-parameter model focused solely on English, optimized for maximum transcription accuracy
## Usage example
Kyutai-STT can be found on the [Huggingface Hub](https://huggingface.co/models?other=stt).
### Inference
```python
import torch
from datasets import load_dataset, Audio
from transformers import KyutaiSpeechToTextProcessor, KyutaiSpeechToTextForConditionalGeneration
# 1. load the model and the processor
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "kyutai/stt-2.6b-en"
processor = KyutaiSpeechToTextProcessor.from_pretrained(model_id)
model = KyutaiSpeechToTextForConditionalGeneration.from_pretrained(model_id, device_map=torch_device)
# 2. load audio samples
ds = load_dataset(
"hf-internal-testing/librispeech_asr_dummy", "clean", split="validation"
)
ds = ds.cast_column("audio", Audio(sampling_rate=24000))
# 3. prepare the model inputs
inputs = processor(
ds[0]["audio"]["array"],
)
inputs.to(torch_device)
# 4. infer the model
output_tokens = model.generate(**inputs)
# 5. decode the generated tokens
print(processor.batch_decode(output_tokens, skip_special_tokens=True))
```
### Batched Inference
```python
import torch
from datasets import load_dataset, Audio
from transformers import KyutaiSpeechToTextProcessor, KyutaiSpeechToTextForConditionalGeneration
# 1. load the model and the processor
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "kyutai/stt-2.6b-en"
processor = KyutaiSpeechToTextProcessor.from_pretrained(model_id)
model = KyutaiSpeechToTextForConditionalGeneration.from_pretrained(model_id, device_map=torch_device)
# 2. load audio samples
ds = load_dataset(
"hf-internal-testing/librispeech_asr_dummy", "clean", split="validation"
)
ds = ds.cast_column("audio", Audio(sampling_rate=24000))
# 3. prepare the model inputs
audio_arrays = [ds[i]["audio"]["array"] for i in range(4)]
inputs = processor(audio_arrays, return_tensors="pt", padding=True)
inputs = inputs.to(torch_device)
# 4. infer the model
output_tokens = model.generate(**inputs)
# 5. decode the generated tokens
decoded_outputs = processor.batch_decode(output_tokens, skip_special_tokens=True)
for output in decoded_outputs:
print(output)
``` | 2025-06-25T00:24:56.829953 |
huggingface | transformers | v4.53.0 | Release v4.53.0 | 2025-06-26T16:02:53+00:00 | ## Release v4.53.0
### Gemma3n
Gemma 3n models are designed for efficient execution on low-resource devices. They are capable of multimodal input, handling text, image, video, and audio input, and generating text outputs, with open weights for pre-trained and instruction-tuned variants. These models were trained with data in over 140 spoken languages.
Gemma 3n models use selective parameter activation technology to reduce resource requirements. This technique allows the models to operate at an effective size of 2B and 4B parameters, which is lower than the total number of parameters they contain. For more information on Gemma 3n's efficient parameter management technology, see the [Gemma 3n](https://ai.google.dev/gemma/docs/gemma-3n#parameters) page.

```python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
torch_dtype=torch.bfloat16,
model="google/gemma-3n-e4b",
device="cuda",
)
output = pipe(
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg",
text="<image_soft_token> in this image, there is"
)
print(output)
```
### Dia

Dia is an opensource text-to-speech (TTS) model (1.6B parameters) developed by [Nari Labs](https://huggingface.co/nari-labs).
It can generate highly realistic dialogue from transcript including nonverbal communications such as laughter and coughing.
Furthermore, emotion and tone control is also possible via audio conditioning (voice cloning).
**Model Architecture:**
Dia is an encoder-decoder transformer based on the original transformer architecture. However, some more modern features such as
rotational positional embeddings (RoPE) are also included. For its text portion (encoder), a byte tokenizer is utilized while
for the audio portion (decoder), a pretrained codec model [DAC](./dac.md) is used - DAC encodes speech into discrete codebook
tokens and decodes them back into audio.
* Add Dia model by @buttercrab in #38405
### Kyutai Speech-to-Text
<img src="https://huggingface.co/datasets/eustlb/documentation-images/resolve/main/kyutai_stt.png"/>
Kyutai STT is a speech-to-text model architecture based on the [Mimi codec](https://huggingface.co/docs/transformers/en/model_doc/mimi), which encodes audio into discrete tokens in a streaming fashion, and a [Moshi-like](https://huggingface.co/docs/transformers/en/model_doc/moshi) autoregressive decoder. Kyutai’s lab has released two model checkpoints:
- [kyutai/stt-1b-en_fr](https://huggingface.co/kyutai/stt-1b-en_fr): a 1B-parameter model capable of transcribing both English and French
- [kyutai/stt-2.6b-en](https://huggingface.co/kyutai/stt-2.6b-en): a 2.6B-parameter model focused solely on English, optimized for maximum transcription accuracy
* Add kyutai stt by @eustlb in #38909
Read more about the model in the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/stt)
### V-JEPA 2
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vjepa.gif" alt="drawing" width="600"/>
</div>
V-JEPA 2 is a self-supervised approach to training video encoders developed by FAIR, Meta. Using internet-scale video data, V-JEPA 2 attains state-of-the-art performance on motion understanding and human action anticipation tasks. V-JEPA 2-AC is a latent action-conditioned world model post-trained from V-JEPA 2 (using a small amount of robot trajectory interaction data) that solves robot manipulation tasks without environment-specific data collection or task-specific training or calibration.
* Add V-JEPA 2 by @qubvel in #38746
Read more about the model in the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/vjepa2).
### Arcee

Arcee is a decoder-only transformer model based on the Llama architecture with a key modification: it uses ReLU² (ReLU-squared) activation in the MLP blocks instead of SiLU, following recent research showing improved training efficiency with squared activations. This architecture is designed for efficient training and inference while maintaining the proven stability of the Llama design.
The Arcee model is architecturally similar to Llama but uses x * relu(x) in MLP layers for improved gradient flow and is optimized for efficiency in both training and inference scenarios.
* Add Arcee model support by @Crystalcareai in #38621
Read more about the model in the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/arcee#arcee).
### ColQwen2
[ColQwen2](https://doi.org/10.48550/arXiv.2407.01449) is a variant of the [ColPali](./colpali) model designed to retrieve documents by analyzing their visual features. Unlike traditional systems that rely heavily on text extraction and OCR, ColQwen2 treats each page as an image. It uses the [Qwen2-VL](./qwen2_vl) backbone to capture not only text, but also the layout, tables, charts, and other visual elements to create detailed multi-vector embeddings that can be used for retrieval by computing pairwise late interaction similarity scores. This offers a more comprehensive understanding of documents and enables more efficient and accurate retrieval.

* Add ColQwen2 to 🤗 transformers by @tonywu71 in #35778
Read more about the model in the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/colqwen2).
### MiniMax

MiniMax is a powerful language model with 456 billion total parameters, of which 45.9 billion are activated per token. To better unlock the long context capabilities of the model, MiniMax adopts a hybrid architecture that combines Lightning Attention, Softmax Attention and Mixture-of-Experts (MoE). Leveraging advanced parallel strategies and innovative compute-communication overlap methods—such as Linear Attention Sequence Parallelism Plus (LASP+), varlen ring attention, Expert Tensor Parallel (ETP), etc., MiniMax's training context length is extended to 1 million tokens, and it can handle a context of up to 4 million tokens during the inference. On various academic benchmarks, MiniMax also demonstrates the performance of a top-tier model.
The architecture of MiniMax is briefly described as follows:
- Total Parameters: 456B
- Activated Parameters per Token: 45.9B
- Number Layers: 80
- Hybrid Attention: a softmax attention is positioned after every 7 lightning attention.
- Number of attention heads: 64
- Attention head dimension: 128
- Mixture of Experts:
- Number of experts: 32
- Expert hidden dimension: 9216
- Top-2 routing strategy
- Positional Encoding: Rotary Position Embedding (RoPE) applied to half of the attention head dimension with a base frequency of 10,000,000
- Hidden Size: 6144
- Vocab Size: 200,064
For more details refer to the [release blog post](https://www.minimaxi.com/en/news/minimax-01-series-2).
* Add support for MiniMax's MiniMax-Text-01 by @geetu040 in #35831
Read more about the model in the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/minimax).
### Encoder-Decoder Gemma

T5Gemma (aka encoder-decoder Gemma) was proposed in a [research paper](https://arxiv.org/abs/2504.06225) by Google. It is a family of encoder-decoder large langauge models, developed by adapting pretrained decoder-only models into encoder-decoder. T5Gemma includes pretrained and instruction-tuned variants. The architecture is based on transformer encoder-decoder design following T5, with improvements from Gemma 2: GQA, RoPE, GeGLU activation, RMSNorm, and interleaved local/global attention.
T5Gemma has two groups of model sizes: 1) [Gemma 2](https://ai.google.dev/gemma/docs/core/model_card_2) sizes (2B-2B, 9B-2B, and 9B-9B), which are based on the offical Gemma 2 models (2B and 9B); and 2) [T5](https://arxiv.org/abs/1910.10683) sizes (Small, Base, Large, and XL), where are pretrained under the Gemma 2 framework following T5 configuration. In addition, we also provide a model at ML size (medium large, ~2B in total), which is in-between T5 Large and T5 XL.
The pretrained varaints are trained with two objectives: prefix language modeling with knowledge distillation (PrefixLM) and UL2, separately. We release both variants for each model size. The instruction-turned varaints was post-trained with supervised fine-tuning and reinforcement learning.
* Encoder-Decoder Gemma by @bzhangGo in #38332
Read more about the model in the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/t5gemma).
### GLM-4.1V
The GLM-4.1V model architecture is added to transformers; no models have yet been released with that architecture. Stay tuned for the GLM team upcoming releases!
* GLM-4.1V Model support by @zRzRzRzRzRzRzR in #38431
Read more about the model in the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/glm4v).
### Falcon H1

The FalconH1 model was developed by the TII Pretraining team. A comprehensive research paper covering the architecture, pretraining dynamics, experimental results, and conclusions is forthcoming. You can read more about this series in [this website](https://github.com/tiiuae/Falcon-H1).
* [MODEL] Add Falcon H1 by @younesbelkada in #38249
Read more about the model in the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/falcon_h1).
### LightGlue

The LightGlue model was proposed in [LightGlue: Local Feature Matching at Light Speed](https://arxiv.org/abs/2306.13643)
by Philipp Lindenberger, Paul-Edouard Sarlin and Marc Pollefeys.
Similar to [SuperGlue](https://huggingface.co/magic-leap-community/superglue_outdoor), this model consists of matching
two sets of local features extracted from two images, its goal is to be faster than SuperGlue. Paired with the
[SuperPoint model](https://huggingface.co/magic-leap-community/superpoint), it can be used to match two images and
estimate the pose between them. This model is useful for tasks such as image matching, homography estimation, etc.
The abstract from the paper is the following:
*We introduce LightGlue, a deep neural network that learns to match local features across images. We revisit multiple
design decisions of SuperGlue, the state of the art in sparse matching, and derive simple but effective improvements.
Cumulatively, they make LightGlue more efficient - in terms of both memory and computation, more accurate, and much
easier to train. One key property is that LightGlue is adaptive to the difficulty of the problem: the inference is much
faster on image pairs that are intuitively easy to match, for example because of a larger visual overlap or limited
appearance change. This opens up exciting prospects for deploying deep matchers in latency-sensitive applications like
3D reconstruction. The code and trained models are publicly available at this [https URL](https://github.com/cvg/LightGlue)*
* Add LightGlue model by @sbucaille in #31718
Read more about the model in the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/lightglue).
### dots.llm1
The abstract from the report is the following:
*Mixture of Experts (MoE) models have emerged as a promising paradigm for scaling language models efficiently by activating only a subset of parameters for each input token. In this report, we present dots.llm1, a large-scale MoE model that activates 14B parameters out of a total of 142B parameters, delivering performance on par with state-of-the-art models while reducing training and inference costs. Leveraging our meticulously crafted and efficient data processing pipeline, dots.llm1 achieves performance comparable to Qwen2.5-72B after pretraining on high-quality corpus and post-training to fully unlock its capabilities. Notably, no synthetic data is used during pretraining. To foster further research, we open-source intermediate training checkpoints spanning the entire training process, providing valuable insights into the learning dynamics of large language models.*
* [Model] add dots1 by @redmoe-moutain in #38143
Read more about the model in the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/dots1).
### SmolLM3
SmolLM3 is a fully open, compact language model designed for efficient deployment while maintaining strong performance. It uses a Transformer decoder architecture with Grouped Query Attention (GQA) to reduce the kv cache, and no RoPE, enabling improved performance on long-context tasks. It is trained using a multi-stage training approach on high-quality public datasets across web, code, and math domains. The model is multilingual and supports very large context lengths. The instruct variant is optimized for reasoning and tool use.
* Add SmolLM3 by @anton-l in #38755
Read more about the model in the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/smollm3).
## Performance optimizations
### Kernels
In previous versions, installing the `kernels` library would **automatically activate the custom kernels** added to `transformers`, because the `@use_kernel_forward_from_the_hub` decorator directly swapped out the model’s forward method. This implicit behavior caused several issues for users — including problems with `torch.compile`, non-determinism, and inconsistent outputs.
To address this, we've introduced a new **opt-in mechanism** called `kernelize`. You can now enable kernel usage explicitly by passing `use_kernels=True` to `from_pretrained`. The `use_kernel_forward_from_the_hub` decorator now simply stores the kernel name that the user wants to use — and `kernelize` handles the rest under the hood.
#### Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained(
"meta-llama/Llama-3.2-1B-Instruct",
torch_dtype=torch.bfloat16,
device_map="cuda",
use_kernels=True
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
input = "Hello"
input_ids = tokenizer(input, return_tensors="pt").to(model.device).input_ids
output = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
More kernels will be added over time — this will be a collaborative, community-driven effort to make transformers lighter and faster 🤗
* Add kernelize to transformers by @MekkCyber in #38205
### Flash Attention 3
Support for Flash Attention 3 is added across the most popular models.
* Support for Flash Attention 3 by @EduardDurech in #38972
## Notable repository maintenance & refactors
Several efforts refactoring the repository are happening in parallel. The direction is to greatly simplify the library, removing unnecessary codepaths. Whilst the efforts are spread across the library, they're particularly visible in each individual models; where non-modeling-specific code will be simplified and eventually removed.
We take the assumption that model-agnostic utilities shouldn't be in the modeling code. Things like the output of attentions, hidden states, router logits, are important for end-users but don't need to be explicitely displayed in the modeling code.
* Apply GradientCheckpointingLayer to the whole repo by @qubvel in #38913
* No more Tuple, List, Dict by @Rocketknight1 in #38797
* Deprecate TF + JAX by @Rocketknight1 in #38758
## Breaking changes
Several minimal breaking changes aiming to bring clearer defaults while greatly simplifying the library have been merged.
* 🔴 Update default `dtype` for pipelines to `auto` by @Vaibhavs10 in #38882
* 🚨🚨 Fix initialization of Mask2Former by @Cyrilvallez in #38864
* :rotating_light: :rotating_light: Inherited CausalLM Tests by @Rocketknight1 in #37590
* 🚨Early-error🚨 config will error out if `output_attentions=True` and the attn implementation is wrong by @ArthurZucker in #38288
* 🔴 [VLM] modeling updates by @zucchini-nlp in #38317
* :rotating_light: :rotating_light: Fix custom code saving by @Rocketknight1 in #37716
* 🚨🚨[core] Completely rewrite the masking logic for all attentions by @Cyrilvallez in #37866
* 🔴🔴🔴 [`Attention`] Refactor Attention Interface for Bart-based Models by @vasqu in #38108
* 🔴[`Attention`] Attention refactor for Whisper-based models by @vasqu in #38235
* Add CB by @ArthurZucker in #38085
## Bugfixes and improvements
* CI reporting improvements by @ydshieh in #38230
* Revert parallelism temporarily by @LysandreJik in #38240
* tp plan should not be NONE by @ArthurZucker in #38255
* [Falcon H1] Fix Typo in Integration Test by @dhiaEddineRhaiem in #38256
* [`compile`] re-enable for Qwen-VL models by @zucchini-nlp in #38127
* fix multi-image case for llava-onevision by @cyr0930 in #38084
* Add tearDown method to Quark to solve OOM issues by @MekkCyber in #38234
* Clearer error on import failure by @LysandreJik in #38257
* [whisper] small changes for faster tests by @gante in #38236
* Simplify DTensor Check for modeling_utils.py by @amd-xiaoyu12 in #38245
* Improve typing in TrainingArgument by @cyyever in #36944
* Fix: missing else branch to handle "--load_best_model_at_end" in training_args.py by @danielyxyang in #38217
* assign the correct torchao data layout for xpu by @jiqing-feng in #37781
* Remove Japanese sequence_classification doc and update references by @ritsumei-aoi in #38246
* Protect ParallelInterface by @ArthurZucker in #38262
* Update Model Card for Mamba by @ParagEkbote in #37863
* docs(swin): Update Swin model card to standard format by @BryanBradfo in #37628
* add XPU info print in print_env by @yao-matrix in #38282
* [whisper] move processor test into processor test file 🧹 by @gante in #38266
* [Whisper] handle deprecation of `forced_decoder_ids` by @gante in #38232
* add `liger-kernel` to docker file by @ydshieh in #38292
* Fix tp error when torch distributed is already initialized by @SunMarc in #38294
* More typing in src/transformers/training_args.py by @cyyever in #38106
* refine `transformers env` output by @yao-matrix in #38274
* Update CI Docker base image for AMD tests by @ahadnagy in #38261
* Fix HybridChunedCache & Llama4 by @Cyrilvallez in #38299
* Oups typo for HybridChunkedCache by @Cyrilvallez in #38303
* [Tests] Cleanup Janus Testcase by @yaswanth19 in #38311
* [emu3] fix conversion script by @zucchini-nlp in #38297
* Fix run_slow by @cyyever in #38314
* Fix typo: change 'env' to 'environment' in .circleci/config.yml by @AbdessamadEnabih in #38273
* Adds use_repr to model_addition_debugger_context by @RyanMullins in #37984
* [tf/flax] handle `forced_decoder_ids` deletion by @gante in #38316
* [Whisper + beam search] fix usage of `beam_indices` by @gante in #38259
* Expose AutoModelForTimeSeriesPrediction for import by @jinan-zhou in #38307
* [custom_generate] don't forward `custom_generate` and `trust_remote_code` by @gante in #38304
* add `vasqu` to `self-comment-ci.yml` by @ydshieh in #38324
* Fix some tests (especially compile with fullgraph=True on Python<3.11) by @Cyrilvallez in #38319
* [performance_optim] reduce frequency of declaring attention_mask in Ascend NPU flash attention by @FightingZhen in #38278
* refactor can_save_slow_tokenizer by @itazap in #37722
* [`FlexAttention`] Reenable flex for encoder-decoder and make the test more robust by @vasqu in #38321
* Enhance Model Loading By Providing Parallelism, Uses Optional Env Flag by @inf3rnus in #36835
* Use Gradient Checkpointing Layer in Jamba & Blip Related Models by @alex-jw-brooks in #38310
* Never fallback to eager implicitly by @Cyrilvallez in #38327
* Remove duplicate docstring: resample by @qqii in #38305
* Update BioGPT model card by @Aguedoom in #38214
* docs(swinv2): Update SwinV2 model card to new standard format by @BryanBradfo in #37942
* [docs]: update roformer.md model card by @KsuParkhamchuk in #37946
* new failure CI reports for all jobs by @ydshieh in #38298
* Hot fix for AMD CI workflow by @ydshieh in #38349
* Uninstall `kernels` for AMD docker images by @ydshieh in #38354
* [VLMs] add helpers for get/set embedding by @zucchini-nlp in #38144
* switch to device agnostic device calling for test cases by @yao-matrix in #38247
* [`OPT`] Fix attention scaling by @vasqu in #38290
* Fix all import errors based on older torch versions by @Cyrilvallez in #38370
* Fix incorrect batching audio index calculation for Phi-4-Multimodal by @Isotr0py in #38103
* Protect `get_default_device` for torch<2.3 by @Cyrilvallez in #38376
* [Falcon H1] Fix slow path forward pass by @dhiaEddineRhaiem in #38320
* Improved cache docs by @manueldeprada in #38060
* for now disable compile by @ArthurZucker in #38383
* Use one `utils/notification_service.py` by @ydshieh in #38379
* Better check in `initialize_weights` by @Cyrilvallez in #38382
* fix typos by @DeVikingMark in #38336
* fix typo: `tokenizer` -> `tokenize` by @foldl in #38357
* Stop TF weight rename reDOS by @Rocketknight1 in #38325
* [cli] cli usable without torch by @gante in #38386
* update gemma tests by @ydshieh in #38384
* Stop autoconverting custom code checkpoints by @Rocketknight1 in #37751
* Add AMD MI300 CI caller leveraging self-hosted runner scale set workflow in hf-workflows by @jitesh-gupta in #38132
* Fix image token mask in Gemma3 by @Cyrilvallez in #38295
* [transformers x vLLM] standardize processors by @zucchini-nlp in #37915
* [paligemma] fix processor with suffix by @zucchini-nlp in #38365
* [video utils] group and reorder by number of frames by @zucchini-nlp in #38374
* [aya vision] fix processor for vLLM by @zucchini-nlp in #38371
* guard size mismatch check to only quantized models by @SunMarc in #38397
* [chat] improvements for thinking models and reduce default verbosity by @gante in #38322
* Fix convert to original state dict for VLMs by @hiyouga in #38385
* [chat] use the checkpoint's `generation_config.json` as base parameterization by @gante in #38330
* Fix Qwen2.5-VL Video Processor by @yeliudev in #38366
* [CSM] infer codec model with no_grad + audio eos label by @eustlb in #38215
* Add report_repo_id to mi300 workflow by @ivarflakstad in #38401
* [CSM] update model id by @eustlb in #38211
* [cleanup] delete deprecated kwargs in qwen2_audio 🧹 by @gante in #38404
* [tests] remove overload for deleted test (`test_offloaded_cache_implementation`) by @gante in #37896
* [mllama] Allow `pixel_values` with `inputs_embeds` by @dxoigmn in #38334
* Update Model Card for Mamba-2 by @ParagEkbote in #37951
* Updated Zoedepth model card by @miniMaddy in #37898
* Updated BigBird Model card as per #36979. by @RogerSinghChugh in #37959
* Updated BERTweet model card. by @RogerSinghChugh in #37981
* New bart model card by @RogerSinghChugh in #37858
* Update granite.md by @Tanuj-rai in #37791
* Falcon-H1 - Fix auto_docstring and add can_return_tuple decorator by @yonigozlan in #38260
* Updated model card for OLMo2 by @andyvu923 in #38394
* Add mi300 to amd daily ci workflows definition by @ivarflakstad in #38415
* Change slack channel for mi250 CI by @ivarflakstad in #38410
* Fix an error in verify_tp_plan for keys without '.' by @liwii in #38420
* [qwen-vl] Look for vocab size in text config by @zucchini-nlp in #38372
* Update `CsmForConditionalGenerationIntegrationTest` by @ydshieh in #38424
* enable large_gpu and torchao cases on XPU by @yao-matrix in #38355
* Disable mi210 scheduled CI by @ivarflakstad in #38411
* Update error when using additional and/or masks by @Cyrilvallez in #38429
* Fix CircleCI not triggered when PR is opened from a branch of `huggingface/transformers` by @ydshieh in #38413
* make Llama4TextMoe forward more readable by @JJJYmmm in #37529
* [core] support tensor-valued _extra_state values in `from_pretrained` by @pstjohn in #38155
* Fix typo in tokenization_utils_base.py docstring by @cwngan in #38418
* Fix convert weights for InternVL by @yonigozlan in #38233
* Trigger doc-builder job after style bot by @ydshieh in #38398
* Remove redundant test_sdpa_equivalence test by @Rocketknight1 in #38436
* Fix MoE gradient test by @Rocketknight1 in #38438
* Fix `from_args_and_dict` ProcessorMixin by @yonigozlan in #38296
* Fix handling of slow/fast image processors in image_processing_auto.py by @yonigozlan in #38161
* Updated the Model docs - for the ALIGN model by @1himan in #38072
* Updated the model card for ViTMAE by @mreraser in #38302
* Model card for mobilenet v1 and v2 by @yuanjua in #37948
* Merge type hints from `microsoft/python-type-stubs` (post dropping support for Python 3.8) by @Avasam in #38335
* Fix GLM4 checkpoints by @ydshieh in #38412
* feat: add cache retention for requests by @McPatate in #38446
* [Tests] Clean up test cases for few models by @yaswanth19 in #38315
* Fix TypeError in save_pretrained error handling (fixes #38422) by @rahulrshetty45 in #38449
* Cleanup `BatchFeature` and `BatchEncoding` by @lgeiger in #38459
* Fix `Gemma3IntegrationTest` by @ydshieh in #38471
* [Qwen2.5-Omni] Fix dtype of cos,sin when used with flash attention by @HarryHsing in #38453
* fix: handle no scheduler passed by user by @McPatate in #38407
* make it go brrrr by @ArthurZucker in #38409
* Fix convert_internvl_weights_to_hf.py to support local paths by @xvyv99 in #38264
* Fix incorrect bbox_embed initialization when decoder_bbox_embed_share=False in GroundingDINO by @islemyakoubi in #38238
* [Tests] Reduced model size for albert-test model by @saqlain2204 in #38480
* Align TP check by @SunMarc in #38328
* protect dtensor import by @SunMarc in #38496
* [docs] add xpu environment variable for gpu selection by @faaany in #38194
* Remove deprecated use_flash_attention_2 parameter by @cyyever in #37131
* Fix setting FLASH_ATTENTION_DETERMINISTIC after importing by @HollowMan6 in #37185
* [seamless_m4t] Skip some tests when speech is not available by @remi-or in #38430
* Update Loss Functions to Accept Tensor num_items_in_batch by @NEREUScode in #38029
* [generate] add soft deprecations on custom generation methods by @gante in #38406
* [generate] move `SinkCache` to a `custom_generate` repo by @gante in #38399
* remove unhandled parameter by @itazap in #38145
* Fix amp deprecation issue by @SunMarc in #38100
* [flax/mistral] support sliding_window: null in config by @yiding in #37402
* Num parameters in model.safetensors.index.json by @LysandreJik in #38531
* Remove type annotation in Siglip Attention Module by @yaswanth19 in #38503
* Fix `Gemma2IntegrationTest` by @ydshieh in #38492
* Fix blip2 tests by @ydshieh in #38510
* [tests] expand flex-attn test for vision models by @zucchini-nlp in #38434
* Don't use default attn if pre-set in sub-config by @zucchini-nlp in #38526
* update emu3 test by @jiqing-feng in #38543
* Update docker image to use `av` by @ydshieh in #38548
* [bugfix] [WIP] fix apply_rotary_emb error on Ascend NPU by @FightingZhen in #38491
* [TP] Change command in tests to `python3` by @S1ro1 in #38555
* Explicitly setting encoding in tokenization_utils_base.py by @Muqi1029 in #38553
* Fix `utils/notification_service.py` by @ydshieh in #38556
* Name change AOPermod -> ModuleFqn by @drisspg in #38456
* Fix hqq issue by @SunMarc in #38551
* [docs] Format fix by @stevhliu in #38414
* [janus] Fix failing tests on mi3XX by @remi-or in #38426
* Fix `chameleon` tests by @ydshieh in #38565
* update `utils/notification_service.py` for AMD vs Nvidia by @ydshieh in #38563
* Fix `deepseekv3` by @ydshieh in #38562
* [`FlexAttn`] Fix models with unique characteristics by @vasqu in #38433
* fix(attention_visualizer): add default value for image_seq_length by @IceGiraffe in #38577
* allow custom head_dim for qwen2_moe by @bzantium in #37188
* Docs: fix code formatting in torchao docs by @Manalelaidouni in #38504
* feat: add `repository` field to benchmarks table by @McPatate in #38582
* [Dinov2] Enable device_map="auto" support by @aryanchauhan31 in #38487
* tests/roformer: fix couple roformer tests on gpus by @dvrogozh in #38570
* New gpt neo model card by @RogerSinghChugh in #38505
* Updated deprecated typing imports with equivalents for Python 3.9+ by @Sai-Suraj-27 in #38546
* added fast image processor for ZoeDepth and expanded tests accordingly by @henrikm11 in #38515
* [qwen-omni] fix sliding window by @zucchini-nlp in #38525
* Remove custom pytest and pluggy by @ydshieh in #38589
* pin pandas by @ydshieh in #38605
* Allow `mlm_probability` to be set to `None` when `mlm=False` in DataCollatorForLanguageModeling by @KameniAlexNea in #38522)
* Avoid overwrite existing local implementation when loading remote custom model by @Isotr0py in #38474
* fix spelling errors by @davidjsonn in #38608
* Remove `isort` from dependencies by @Sai-Suraj-27 in #38616
* Fix `return_dict=False` giving errors in a few VLM models by @ydshieh in #38519
* docs: fix dark mode logo display. by @johncaged in #38586
* Fix typo in LLaVa documentation by @mynameismon in #38618
* [Nit] Add Note on SigOpt being in Public Archive Mode by @ParagEkbote in #38610
* Updated Aria model card by @1himan in #38472
* Fix `MiniMax` (docs and integration tests checkpoint) by @geetu040 in #38575
* enable more test cases on xpu by @yao-matrix in #38572
* Improve `test_initialization` by @ydshieh in #38607
* Use torch 2.7.1 on CircleCI jobs by @ydshieh in #37856
* [generation] bring back tests on vision models by @zucchini-nlp in #38603
* update `ColQwen2ModelIntegrationTest` by @ydshieh in #38583
* Improve `test_initialization` for `SwiftFormer` by @ydshieh in #38636
* fix: support grad clipping for TP through replicating non-sharded modules by @kmehant in #36132
* Don't run `AriaForConditionalGenerationModelTest` on CircleCI by @ydshieh in #38615
* fix total batch size calculation in trainer by @inkcherry in #38286
* fix torch_dtype on awq by @jiqing-feng in #38463
* Better CI by @ydshieh in #38552
* remove ipex_optimize_model usage by @yao-matrix in #38632
* Skip torchscript tests for 2 models by @ydshieh in #38643
* Fix `InternVL` integration test by @ydshieh in #38612
* Use torch 2.7.1 on daily CI by @ydshieh in #38620
* Fix qwen2-audio chat template audio placeholder insertion by @Isotr0py in #38640
* Fixed modeling_auto.py MODEL_FOR_MASK_GENERATION_MAPPING_NAMES variable by @sbucaille in #38664
* fix: "check out" as verb by @DePasqualeOrg in #38678
* Fix attention mask expansion when converting to executorch by @pweglik in #38637
* Fix some models import by @nicelulu in #38694
* Fix retrieve function signature and remove faiss requirement by @Fiona-Waters in #38624
* Fix TypeError: 'NoneType' object is not iterable for esm by @dbleyl in #38667)
* Docs: update bitsandbytes torch.compile compatibility by @matthewdouglas in #38651
* Drop as_target_processor from the _call_ and pad methods by @marcndo in #38642
* Created model card for XLM model by @AshAnand34 in #38595
* Update XLM-RoBERTa model documentation with enhanced usage examples and improved layout by @AshAnand34 in #38596
* Created model card for xlm-roberta-xl by @AshAnand34 in #38597
* Fix `aya_vision` test by @ydshieh in #38674
* Standardize ByT5 model card format by @yanamis in #38699
* Fix smart resize by @rdonggroq in #38706
* Update some tests for torch 2.7.1 by @ydshieh in #38701
* Logging message for ``` is_bitsandbytes_available() ``` by @ved1beta in #38528
* Fix `llava` tests by @ydshieh in #38722
* Use OSError by @cyyever in #38712
* [add-new-model-like] Robust search & proper outer '),' in tokenizer mapping by @alexzms in #38703
* Fix typo in Language Modeling example scripts and update TPU type by @framoncg in #38652
* Add AGENTS.md by @Rocketknight1 in #38734
* New canine model card by @RogerSinghChugh in #38631
* Fixed a multiple-devices issue in SmolVLM model by @remi-or in #38736
* [llava] fix integration tests with Siglip by @zucchini-nlp in #38732
* fix: Add method to get image features in PaliGemmaForConditionalGeneration by @YushunXiang in #38730
* from 1.11.0, torchao.prototype.low_bit_optim is promoted to torchao.optim by @yao-matrix in #38689
* fix: bf16 with TPU is allowed in configuration by @yevvonlim in #38670
* [DeepSeek-V3] implement when q_lora_rank is None by @bzantium in #38743
* Revert "Trigger doc-builder job after style bot" by @ydshieh in #38735
* Add z-loss to Bamba for v2 by @daviswer in #37842
* Better typing for num_items_in_batch by @SunMarc in #38728
* Prepare for TF+Jax deprecation by @Rocketknight1 in #38760
* Remove IPEX requirement for bitsandbytes on CPU by @matthewdouglas in #38594
* Update repo consistency check by @Rocketknight1 in #38763
* fix(qwen3_moe): pass kwargs to self_attn by @llllvvuu in #38691
* Update pegasus model card by @dross20 in #38675
* Make style bot trigger CI after push by @ydshieh in #38754
* chore(pixtral): emit block attention mask when using flash attention by @starcatmeow in #38741
* Update altCLIP model card by @EmileAydar in #38306
* Add Qwen2 MoE model card by @rileyafox in #38649
* [masking utils] check `None` instead of try/except by @zucchini-nlp in #38561
* [Hotfix] Fix style bot by @ydshieh in #38779
* Fix masking utils by @Cyrilvallez in #38783
* [video processors] support frame sampling within processors by @zucchini-nlp in #38105
* Skip some export tests on torch 2.7 by @ydshieh in #38677
* Reduce verbosity for `average_tokens_across_devices=True` and `world size = 1` by @qgallouedec in #38785
* Update PULL_REQUEST_TEMPLATE.md by @qgallouedec in #38770
* [docs] Add int4wo + 2:4 sparsity example to TorchAO README by @jcaip in #38592
* Fix `qwen_2_5 omni` by @ydshieh in #38658
* Fix `llava_onevision` tests by @ydshieh in #38791
* Reword README in light of model definitions by @LysandreJik in #38762
* Fix Typos in Comments: "quantitation" → "quantization", "averege" → "average" by @leopardracer in #38766
* Initialize flash attn flag by @farnasirim in #38768
* Fix `mllama` by @ydshieh in #38704
* build: :pushpin: Remove upper bound on PyTorch by @KyleMylonakisProtopia in #38789
* Remove all traces of `low_cpu_mem_usage` by @Cyrilvallez in #38792
* [Docs] New DiT model card by @yushi2006 in #38721
* Add missing div in Pegasus model card by @dross20 in #38773
* Updated moonshine modelcard by @SohamPrabhu in #38711
* refactor create_token_type_ids_from_sequences by @itazap in #37681
* [docs] update cache docs with new info by @zucchini-nlp in #38775
* Fix erroneous docstring for the ordering of SWA layers by @norpadon in #38794
* Fix configs and doc for the Qwens by @Cyrilvallez in #38808
* Unbreak optimum-executorch by @guangy10 in #38646
* Disable custom MRA kernels for ROCm by @ahadnagy in #38738
* Use HF papers by @qgallouedec in #38184
* Simplify and update trl examples by @qgallouedec in #38772
* Better pipeline type hints ✨ by @qubvel in #38049
* Fix `llava_next` tests by @ydshieh in #38813
* Expectation fixes and added AMD expectations by @remi-or in #38729
* Use `wandb.run.url` instead of `wandb.run.get_url()` (deprecated) by @qgallouedec in #38817
* Refactor DBRX tests to use CausalLMModelTest base classes by @Rocketknight1 in #38475
* change fsdp_strategy to fsdp in TrainingArguments in accelerate doc by @PT-10 in #38807
* Fix a minor security issue by @ydshieh in #38815
* Fix trainer.py not showing signature columns by @nenesekai in #38465
* Add V-JEPA for video classification model by @qubvel in #38788
* fixed docstring in modular_qwen2_5_vl.py by @lawrencefeng17 in #38798
* [docs] Update docs moved to the course by @stevhliu in #38800
* [docs] updated roberta model card by @allmight05 in #38777
* Updated Albert model Card by @souvikchand in #37753
* [internvl] fix video inference by @zucchini-nlp in #38811
* Fix redundant code in Janus by @yaswanth19 in #38826
* bugfix: propage weight key_mapping to peft to fix 3.52 VLM renaming by @ManuelFay in #38627
* Fix peft integration by @Cyrilvallez in #38841
* Fix broken notebooks link in Italian training docs by @VolodymyrBg in #38834
* Fix broken tag in Longformer model card by @dross20 in #38828
* [BugFix] QA pipeline edge case: `align_to_words=True` in `QuestionAnsweringPipeline` can lead to duplicate answers by @yushi2006 in #38761
* GraniteMoeHybrid: Allow for only shared expert case. by @shawntan in #38801
* Updated aya_vision.md by @1himan in #38749
* Remove merge conflict artifacts in Albert model doc by @druvdub in #38849
* [video processor] fix BC when no video config if found by @zucchini-nlp in #38840
* Fix incorrect width ratio calculation in Llama4 image processor by @Jingxiang-Zhang in #38842
* Allow customization of sdpa in executorch.py by @kimishpatel in #38827
* Fix `qwen2_5_vl` tests by @ydshieh in #38845
* Improve `auxiliary_in_channels` default behavior in UperNet by @simonreise in #37540
* Fix `qwen3` tests by @ydshieh in #38862
* Update CvT documentation with improved usage examples and additional … by @sezan92 in #38731
* Update roc bert docs by @SohamPrabhu in #38835
* Post-PR fixes! by @Rocketknight1 in #38868
* enable misc test cases on XPU by @yao-matrix in #38852
* Fix `phi4_multimodal` tests by @ydshieh in #38816
* Fix `qwen3_moe` tests by @ydshieh in #38865
* Fix HQQ model param device transfer issue by @HighCWu in #38466
* Fixed markdown for BertTokenizer's '[CLS]' token. by @eu90h in #38506
* null deepspeed_plugin in args for wandb callback fake trainer by @winglian in #38867
* More PYUP fixes by @cyyever in #38883
* Fix loop var naming by @Rocketknight1 in #38885
* [bugfix] fix ATTN_MASK_NPU device mismatch error on multi-device NPU … by @qykong in #38876
* log: Add logging when using split_batches and per_device_train_batch_size by @KeshavSingh29 in #38633
* Docs: Add custom fine-tuning tutorial to TrOCR model page by @Ashutosh-4485 in #38847
* 36978 | Fast image processor for DPT model by @samrae7 in #37481
* [video processor] fix slow tests by @zucchini-nlp in #38881
* Update bamba model card by @druvdub in #38853
* Add support for specifying revisions when pushing to Hub via internal Trainer call by @IsaacBreen in #36852
* Use `raise from e` in `hub.py` utility by @Wauplin in #37241
* [phi-4] use mel filters from audio utils by @eustlb in #36966
* Fix `fsmt` tests by @ydshieh in #38904
* Fix unnecessary super calls by @cyyever in #38897
* align xpu's autocast behavior w/ cuda by using device agnostic torch APIs by @yao-matrix in #38284
* Fix `FalconMambaIntegrationTests` by @ydshieh in #38566
* Skip sdpa tests if submodule does not support sdpa by @ivarflakstad in #38907
* Fix ReDOS in tokenizer digit substitution by @Rocketknight1 in #38844
* feat: Add granite architectures to auto tokenizer name mappings by @gabe-l-hart in #38802
* Allow make-fixup on main branch, albeit slowly by @Rocketknight1 in #38892
* feat: add flexible Liger Kernel configuration to TrainingArguments by @hamza-hcompany in #38911
* Remove deprecated classes in modeling_utils.py by @Cyrilvallez in #38919
* Skip some tests for now by @ydshieh in #38931
* Modernbert fixes by @remi-or in #38912
* add pytorch-xpu Dockerfile by @yao-matrix in #38875
* Remove `ALL_LAYERNORM_LAYERS` by @Cyrilvallez in #38922
* [static cache] fix device map per layer in VLMs by @zucchini-nlp in #38488
* Add kwargs for timm.create_model in TimmWrapper by @qubvel in #38860
* Pin PyTorch extras for AMD containers by @ahadnagy in #38941
* Correctly raise error for awq quantization by @Cyrilvallez in #38945
* Fix more flaky `test_initialization` by @ydshieh in #38932
* Switch to use A10 progressively by @ydshieh in #38936
* Fix custom generate from local directory by @manueldeprada in #38916
* Update blip model card by @devkade in #38513
* Gaudi3 CI by @IlyasMoutawwakil in #38790
* Fix DTensor import compatibility for PyTorch < 2.5 by @Benoqtr in #38836
* Fix(informer): Correct tensor shape for input_size=1 by @Flink-ddd in #38856
* [modular] CLI allows positional arguments, and more defaults names for the optional arg by @Cyrilvallez in #38979
* Remove dead protected imports by @Cyrilvallez in #38980
* Break tie in Expectations and gemma3 fixes by @remi-or in #38943
* Add Idefics2/3 and SmolVLM Fast image processors + improvements for fast image processors by @yonigozlan in #38157
* fix: add __bool__ operator to tokenizer to avoid bloated asserts by @kallewoof in #38899
* Add support for auto_docstring with model outputs by @yonigozlan in #38242
* fix `mistral` and `mistral3` tests by @ydshieh in #38978
* [Feature] Support `is_split_into_words` in the `TokenClassificationPipeline`. by @yushi2006 in #38818
* Fix `rag` by @ydshieh in #38585
* [docs] Typos - Single GPU efficient training features by @casinca in #38964
* [qwen] refactor attentions for vision/audio by @zucchini-nlp in #38930
* Removing extra space in large command for speech-pretraining example by @dggaytan in #38705
* [`Attention`] Small fix on output attentions by @vasqu in #38948
* Fixes for Arcee model by @Cyrilvallez in #39001
* Added scikit-learn to the example image-classification requirements.txt by @mylonjones in #37506
* Update attention_visualizer.py by @Tanuj-rai in #37860
* Skip non-selected experts for qwen3_moe by @seven-mile in #38133
* Fix undeterministic order in modular dependencies by @Cyrilvallez in #39005
* Granite speech - minor fixes to support training with the HF trainer by @avihu111 in #38833
* Fix bugs in DynamicCache by @tugsbayasgalan in #37880
* Update self-comment-ci.yml user list by @ivarflakstad in #39014
* Skip sdpa dispatch on flash test due to unsupported head dims by @ivarflakstad in #39010
* [HPU][Critical Issue Fix] ThreadPool instead of Pool for parallel pre-processing by @dsmertin in #39002
* Add Hugging Face authentication procedure for IDEs (PyCharm, VS Code,… by @marcndo in #38954
* [LightGlue] Fixed attribute usage from descriptor_dim to keypoint_detector_descriptor_dim by @sbucaille in #39021
* Add zero dim tensor check when using flash_attention by @ranzhejiang in #38280
* Fix graph break in torch.compile when using FA2 with attention_mask=None and batch size > 1 by @efsotr in #37332
* [AutoModelForMaskGeneration] Remove duplicate code by @NielsRogge in #38622
* [video processor] support torchcodec and decrease cuda memory usage by @zucchini-nlp in #38880
* Drop unnecessary tokens in GPT2Model generation by @null-pointer-access in #39016
* Fix the seamless_m4t cannot work on Gaudi by @yuanwu2017 in #38363
* fix: astronomical loss with ModernBERT when using gradient checkpointing by @umarbutler in #38982)
* fix gemma3 grad acc by @SunMarc in #37208
* Remove script datasets in tests by @lhoestq in #38940
* Fix grammatical error in models documentation by @marcndo in #39019
* refactor: remove custom BarkLayerNorm by @eginhard in #39003
* [Kyutai-STT] correct model type + model id by @eustlb in #39035
* Two ReDOS fixes by @Rocketknight1 in #39013
* [tests] remove TF tests (uses of `require_tf`) by @gante in #38944
* Granite speech speedup + model saving bugfix by @avihu111 in #39028
* Fix Bad Outputs in Fast Path for GraniteMoeHybrid by @alex-jw-brooks in #39033
## Significant community contributions
The following contributors have made significant changes to the library over the last release:
* @ydshieh
* CI reporting improvements (#38230)
* add `liger-kernel` to docker file (#38292)
* add `vasqu` to `self-comment-ci.yml` (#38324)
* new failure CI reports for all jobs (#38298)
* Hot fix for AMD CI workflow (#38349)
* Uninstall `kernels` for AMD docker images (#38354)
* Use one `utils/notification_service.py` (#38379)
* update gemma tests (#38384)
* Update `CsmForConditionalGenerationIntegrationTest` (#38424)
* Fix CircleCI not triggered when PR is opened from a branch of `huggingface/transformers` (#38413)
* Trigger doc-builder job after style bot (#38398)
* Fix GLM4 checkpoints (#38412)
* Fix `Gemma3IntegrationTest` (#38471)
* Fix `Gemma2IntegrationTest` (#38492)
* Fix blip2 tests (#38510)
* Update docker image to use `av` (#38548)
* Fix `utils/notification_service.py` (#38556)
* Fix `chameleon` tests (#38565)
* update `utils/notification_service.py` for AMD vs Nvidia (#38563)
* Fix `deepseekv3` (#38562)
* Remove custom pytest and pluggy (#38589)
* pin pandas (#38605)
* Fix `return_dict=False` giving errors in a few VLM models (#38519)
* Improve `test_initialization` (#38607)
* Use torch 2.7.1 on CircleCI jobs (#37856)
* update `ColQwen2ModelIntegrationTest` (#38583)
* Improve `test_initialization` for `SwiftFormer` (#38636)
* Don't run `AriaForConditionalGenerationModelTest` on CircleCI (#38615)
* Better CI (#38552)
* Skip torchscript tests for 2 models (#38643)
* Fix `InternVL` integration test (#38612)
* Use torch 2.7.1 on daily CI (#38620)
* Fix `aya_vision` test (#38674)
* Update some tests for torch 2.7.1 (#38701)
* Fix `llava` tests (#38722)
* Revert "Trigger doc-builder job after style bot" (#38735)
* Make style bot trigger CI after push (#38754)
* [Hotfix] Fix style bot (#38779)
* Skip some export tests on torch 2.7 (#38677)
* Fix `qwen_2_5 omni` (#38658)
* Fix `llava_onevision` tests (#38791)
* Fix `mllama` (#38704)
* Fix `llava_next` tests (#38813)
* Fix a minor security issue (#38815)
* Fix `qwen2_5_vl` tests (#38845)
* Fix `qwen3` tests (#38862)
* Fix `phi4_multimodal` tests (#38816)
* Fix `qwen3_moe` tests (#38865)
* Fix `fsmt` tests (#38904)
* Fix `FalconMambaIntegrationTests` (#38566)
* Skip some tests for now (#38931)
* Fix more flaky `test_initialization` (#38932)
* Switch to use A10 progressively (#38936)
* fix `mistral` and `mistral3` tests (#38978)
* Fix `rag` (#38585)
* @ArthurZucker
* tp plan should not be NONE (#38255)
* Protect ParallelInterface (#38262)
* Add CB (#38085)
* 🚨Early-error🚨 config will error out if `output_attentions=True` and the attn implementation is wrong (#38288)
* for now disable compile (#38383)
* make it go brrrr (#38409)
* @younesbelkada
* [MODEL] Add Falcon H1 (#38249)
* @cyr0930
* fix multi-image case for llava-onevision (#38084)
* @cyyever
* Improve typing in TrainingArgument (#36944)
* More typing in src/transformers/training_args.py (#38106)
* Fix run_slow (#38314)
* Remove deprecated use_flash_attention_2 parameter (#37131)
* Use OSError (#38712)
* More PYUP fixes (#38883)
* Fix unnecessary super calls (#38897)
* @ritsumei-aoi
* Remove Japanese sequence_classification doc and update references (#38246)
* @yao-matrix
* add XPU info print in print_env (#38282)
* refine `transformers env` output (#38274)
* switch to device agnostic device calling for test cases (#38247)
* enable large_gpu and torchao cases on XPU (#38355)
* enable more test cases on xpu (#38572)
* remove ipex_optimize_model usage (#38632)
* from 1.11.0, torchao.prototype.low_bit_optim is promoted to torchao.optim (#38689)
* enable misc test cases on XPU (#38852)
* align xpu's autocast behavior w/ cuda by using device agnostic torch APIs (#38284)
* add pytorch-xpu Dockerfile (#38875)
* @vasqu
* 🔴🔴🔴 [`Attention`] Refactor Attention Interface for Bart-based Models (#38108)
* [`FlexAttention`] Reenable flex for encoder-decoder and make the test more robust (#38321)
* [`OPT`] Fix attention scaling (#38290)
* 🔴[`Attention`] Attention refactor for Whisper-based models (#38235)
* [`FlexAttn`] Fix models with unique characteristics (#38433)
* [`Attention`] Small fix on output attentions (#38948)
* @itazap
* refactor can_save_slow_tokenizer (#37722)
* remove unhandled parameter (#38145)
* refactor create_token_type_ids_from_sequences (#37681)
* @eustlb
* [CSM] infer codec model with no_grad + audio eos label (#38215)
* [CSM] update model id (#38211)
* [phi-4] use mel filters from audio utils (#36966)
* Add kyutai stt (#38909)
* [Kyutai-STT] correct model type + model id (#39035)
* @RogerSinghChugh
* Updated BigBird Model card as per #36979. (#37959)
* Updated BERTweet model card. (#37981)
* New bart model card (#37858)
* New gpt neo model card (#38505)
* New canine model card (#38631)
* @1himan
* Updated the Model docs - for the ALIGN model (#38072)
* Updated Aria model card (#38472)
* Updated aya_vision.md (#38749)
* @Avasam
* Merge type hints from `microsoft/python-type-stubs` (post dropping support for Python 3.8) (#38335)
* @remi-or
* [seamless_m4t] Skip some tests when speech is not available (#38430)
* [janus] Fix failing tests on mi3XX (#38426)
* Fixed a multiple-devices issue in SmolVLM model (#38736)
* Expectation fixes and added AMD expectations (#38729)
* Modernbert fixes (#38912)
* Break tie in Expectations and gemma3 fixes (#38943)
* @tonywu71
* Add ColQwen2 to 🤗 transformers (#35778)
* @geetu040
* Add support for MiniMax's MiniMax-Text-01 (#35831)
* Fix `MiniMax` (docs and integration tests checkpoint) (#38575)
* @sbucaille
* Fixed modeling_auto.py MODEL_FOR_MASK_GENERATION_MAPPING_NAMES variable (#38664)
* Add LightGlue model (#31718)
* [LightGlue] Fixed attribute usage from descriptor_dim to keypoint_detector_descriptor_dim (#39021)
* @samrae7
* 36978 | Fast image processor for DPT model (#37481)
* @Crystalcareai
* Add Arcee model support (#38621)
* @zRzRzRzRzRzRzR
* GLM-4.1V Model support (#38431)
* @bzhangGo
* Encoder-Decoder Gemma (#38332)
* @redmoe-moutain
* [Model] add dots1 (#38143)
* @EduardDurech
* Support for Flash Attention 3 (#38972)
| 2025-06-27T00:24:55.226650 |
README.md exists but content is empty.
- Downloads last month
- 204