# DenseNet
**DenseNet** is a type of convolutional neural network that utilises dense connections between layers, through [Dense Blocks](http://www.paperswithcode.com/method/dense-block), where we connect *all layers* (with matching feature-map sizes) directly with each other. To preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers.
The **DenseNet Blur** variant in this collection by Ross Wightman employs [Blur Pooling](http://www.paperswithcode.com/method/blur-pooling).
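To make the connectivity pattern concrete, here is a minimal sketch of a dense block in plain PyTorch (the channel counts and `growth_rate` below are illustrative, not timm's exact implementation):
```py
>>> import torch
>>> import torch.nn as nn
>>> class TinyDenseBlock(nn.Module):
...     """Each layer receives the concatenation of all preceding feature-maps."""
...     def __init__(self, in_chs, growth_rate=32, num_layers=4):
...         super().__init__()
...         self.layers = nn.ModuleList([
...             nn.Sequential(
...                 nn.BatchNorm2d(in_chs + i * growth_rate),
...                 nn.ReLU(inplace=True),
...                 nn.Conv2d(in_chs + i * growth_rate, growth_rate, 3, padding=1, bias=False),
...             ) for i in range(num_layers)
...         ])
...     def forward(self, x):
...         features = [x]
...         for layer in self.layers:
...             features.append(layer(torch.cat(features, dim=1)))  # dense connection
...         return torch.cat(features, dim=1)
>>> TinyDenseBlock(64)(torch.randn(1, 64, 56, 56)).shape
torch.Size([1, 192, 56, 56])
```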
## How do I use this model on an image?
To load a pretrained model:
```py
>>> import timm
>>> model = timm.create_model('densenet121', pretrained=True)
>>> model.eval()
```
To load and preprocess the image:
```py
>>> import urllib
>>> from PIL import Image
>>> from timm.data import resolve_data_config
>>> from timm.data.transforms_factory import create_transform
>>> config = resolve_data_config({}, model=model)
>>> transform = create_transform(**config)
>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
>>> urllib.request.urlretrieve(url, filename)
>>> img = Image.open(filename).convert('RGB')
>>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```
To get the model predictions:
```py
>>> import torch
>>> with torch.no_grad():
... out = model(tensor)
>>> probabilities = torch.nn.functional.softmax(out[0], dim=0)
>>> print(probabilities.shape)
>>> # prints: torch.Size([1000])
```
To get the class names of the top-5 predictions:
```py
>>> # Get imagenet class mappings
>>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
>>> urllib.request.urlretrieve(url, filename)
>>> with open("imagenet_classes.txt", "r") as f:
... categories = [s.strip() for s in f.readlines()]
>>> # Print top categories per image
>>> top5_prob, top5_catid = torch.topk(probabilities, 5)
>>> for i in range(top5_prob.size(0)):
... print(categories[top5_catid[i]], top5_prob[i].item())
>>> # prints class names and probabilities like:
>>> # Samoyed 0.6425196528434753
>>> # Pomeranian 0.04062102362513542
>>> # keeshond 0.03186424449086189
>>> # white wolf 0.01739676296710968
>>> # Eskimo dog 0.011717947199940681
```
Replace the model name with the variant you want to use, e.g. `densenet121`. You can find the IDs in the model summaries at the top of this page.
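You can also list the available variants programmatically (the exact output depends on your installed `timm` version):
```py
>>> import timm
>>> # e.g. ['densenet121', 'densenet161', 'densenet169', 'densenet201', ...]
>>> timm.list_models('densenet*', pretrained=True)
```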
To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just changing the name of the model you want to use.
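As a minimal sketch of that workflow, timm's `features_only` mode returns a list of intermediate feature-maps instead of classification logits:
```py
>>> import torch
>>> feature_model = timm.create_model('densenet121', pretrained=True, features_only=True)
>>> features = feature_model(torch.randn(1, 3, 224, 224))
>>> for f in features:
...     print(f.shape)  # one tensor per feature stage, progressively downsampled
```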
## How do I finetune this model?
You can finetune any of the pre-trained models just by changing the classifier (the last layer).
```py
>>> model = timm.create_model('densenet121', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```
To finetune on your own dataset, you have to write a training loop or adapt [timm's training
script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
## How do I train this model?
You can follow the [timm recipe scripts](../training_script) for training a new model from scratch.
## Citation
```BibTeX
@article{DBLP:journals/corr/HuangLW16a,
author = {Gao Huang and
Zhuang Liu and
Kilian Q. Weinberger},
title = {Densely Connected Convolutional Networks},
journal = {CoRR},
volume = {abs/1608.06993},
year = {2016},
url = {http://arxiv.org/abs/1608.06993},
archivePrefix = {arXiv},
eprint = {1608.06993},
timestamp = {Mon, 10 Sep 2018 15:49:32 +0200},
biburl = {https://dblp.org/rec/journals/corr/HuangLW16a.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```BibTeX
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
<!--
Type: model-index
Collections:
- Name: DenseNet
Paper:
Title: Densely Connected Convolutional Networks
URL: https://paperswithcode.com/paper/densely-connected-convolutional-networks
Models:
- Name: densenet121
In Collection: DenseNet
Metadata:
FLOPs: 3641843200
Parameters: 7980000
File Size: 32376726
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- Convolution
- Dense Block
- Dense Connections
- Dropout
- Max Pooling
- ReLU
- Softmax
Tasks:
- Image Classification
Training Techniques:
- Kaiming Initialization
- Nesterov Accelerated Gradient
- Weight Decay
Training Data:
- ImageNet
ID: densenet121
LR: 0.1
Epochs: 90
Layers: 121
Dropout: 0.2
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 256
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L295
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/densenet121_ra-50efcf5c.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 75.56%
Top 5 Accuracy: 92.65%
- Name: densenet161
In Collection: DenseNet
Metadata:
FLOPs: 9931959264
Parameters: 28680000
File Size: 115730790
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- Convolution
- Dense Block
- Dense Connections
- Dropout
- Max Pooling
- ReLU
- Softmax
Tasks:
- Image Classification
Training Techniques:
- Kaiming Initialization
- Nesterov Accelerated Gradient
- Weight Decay
Training Data:
- ImageNet
ID: densenet161
LR: 0.1
Epochs: 90
Layers: 161
Dropout: 0.2
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 256
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L347
Weights: https://download.pytorch.org/models/densenet161-8d451a50.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 77.36%
Top 5 Accuracy: 93.63%
- Name: densenet169
In Collection: DenseNet
Metadata:
FLOPs: 4316945792
Parameters: 14150000
File Size: 57365526
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- Convolution
- Dense Block
- Dense Connections
- Dropout
- Max Pooling
- ReLU
- Softmax
Tasks:
- Image Classification
Training Techniques:
- Kaiming Initialization
- Nesterov Accelerated Gradient
- Weight Decay
Training Data:
- ImageNet
ID: densenet169
LR: 0.1
Epochs: 90
Layers: 169
Dropout: 0.2
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 256
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L327
Weights: https://download.pytorch.org/models/densenet169-b2777c0a.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 75.9%
Top 5 Accuracy: 93.02%
- Name: densenet201
In Collection: DenseNet
Metadata:
FLOPs: 5514321024
Parameters: 20010000
File Size: 81131730
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- Convolution
- Dense Block
- Dense Connections
- Dropout
- Max Pooling
- ReLU
- Softmax
Tasks:
- Image Classification
Training Techniques:
- Kaiming Initialization
- Nesterov Accelerated Gradient
- Weight Decay
Training Data:
- ImageNet
ID: densenet201
LR: 0.1
Epochs: 90
Layers: 201
Dropout: 0.2
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 256
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L337
Weights: https://download.pytorch.org/models/densenet201-c1103571.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 77.29%
Top 5 Accuracy: 93.48%
- Name: densenetblur121d
In Collection: DenseNet
Metadata:
FLOPs: 3947812864
Parameters: 8000000
File Size: 32456500
Architecture:
- 1x1 Convolution
- Batch Normalization
- Blur Pooling
- Convolution
- Dense Block
- Dense Connections
- Dropout
- Max Pooling
- ReLU
- Softmax
Tasks:
- Image Classification
Training Data:
- ImageNet
ID: densenetblur121d
Crop Pct: '0.875'
Image Size: '224'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L305
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/densenetblur121d_ra-100dcfbc.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 76.59%
Top 5 Accuracy: 93.2%
- Name: tv_densenet121
In Collection: DenseNet
Metadata:
FLOPs: 3641843200
Parameters: 7980000
File Size: 32342954
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- Convolution
- Dense Block
- Dense Connections
- Dropout
- Max Pooling
- ReLU
- Softmax
Tasks:
- Image Classification
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
ID: tv_densenet121
LR: 0.1
Epochs: 90
Crop Pct: '0.875'
LR Gamma: 0.1
Momentum: 0.9
Batch Size: 32
Image Size: '224'
LR Step Size: 30
Weight Decay: 0.0001
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L379
Weights: https://download.pytorch.org/models/densenet121-a639ec97.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 74.74%
Top 5 Accuracy: 92.15%
-->
# Instagram ResNeXt WSL
A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations) \\( C \\), as an essential factor in addition to the dimensions of depth and width.
These models were trained on billions of Instagram images, using thousands of distinct hashtags as labels, and exhibit excellent transfer learning performance.
Please note the CC-BY-NC 4.0 license on these weights: they may be used for non-commercial purposes only.
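In PyTorch, cardinality maps directly onto the `groups` argument of `nn.Conv2d`. Below is a minimal sketch of a ResNeXt-style bottleneck block (the channel widths are illustrative, not the exact configuration of these models):
```py
>>> import torch
>>> import torch.nn as nn
>>> class TinyResNeXtBlock(nn.Module):
...     def __init__(self, chs=256, width=128, cardinality=32):
...         super().__init__()
...         self.net = nn.Sequential(
...             nn.Conv2d(chs, width, 1, bias=False),
...             nn.BatchNorm2d(width),
...             nn.ReLU(inplace=True),
...             # grouped 3x3 conv = `cardinality` parallel transformations of the same topology
...             nn.Conv2d(width, width, 3, padding=1, groups=cardinality, bias=False),
...             nn.BatchNorm2d(width),
...             nn.ReLU(inplace=True),
...             nn.Conv2d(width, chs, 1, bias=False),
...             nn.BatchNorm2d(chs),
...         )
...         self.act = nn.ReLU(inplace=True)
...     def forward(self, x):
...         return self.act(x + self.net(x))  # residual connection
>>> TinyResNeXtBlock()(torch.randn(1, 256, 14, 14)).shape
torch.Size([1, 256, 14, 14])
```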
## How do I use this model on an image?
To load a pretrained model:
```py
>>> import timm
>>> model = timm.create_model('ig_resnext101_32x16d', pretrained=True)
>>> model.eval()
```
To load and preprocess the image:
```py
>>> import urllib
>>> from PIL import Image
>>> from timm.data import resolve_data_config
>>> from timm.data.transforms_factory import create_transform
>>> config = resolve_data_config({}, model=model)
>>> transform = create_transform(**config)
>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
>>> urllib.request.urlretrieve(url, filename)
>>> img = Image.open(filename).convert('RGB')
>>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```
To get the model predictions:
```py
>>> import torch
>>> with torch.no_grad():
... out = model(tensor)
>>> probabilities = torch.nn.functional.softmax(out[0], dim=0)
>>> print(probabilities.shape)
>>> # prints: torch.Size([1000])
```
To get the class names of the top-5 predictions:
```py
>>> # Get imagenet class mappings
>>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
>>> urllib.request.urlretrieve(url, filename)
>>> with open("imagenet_classes.txt", "r") as f:
... categories = [s.strip() for s in f.readlines()]
>>> # Print top categories per image
>>> top5_prob, top5_catid = torch.topk(probabilities, 5)
>>> for i in range(top5_prob.size(0)):
... print(categories[top5_catid[i]], top5_prob[i].item())
>>> # prints class names and probabilities like:
>>> # Samoyed 0.6425196528434753
>>> # Pomeranian 0.04062102362513542
>>> # keeshond 0.03186424449086189
>>> # white wolf 0.01739676296710968
>>> # Eskimo dog 0.011717947199940681
```
Replace the model name with the variant you want to use, e.g. `ig_resnext101_32x16d`. You can find the IDs in the model summaries at the top of this page.
To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just changing the name of the model you want to use.
## How do I finetune this model?
You can finetune any of the pre-trained models just by changing the classifier (the last layer).
```py
>>> model = timm.create_model('ig_resnext101_32x16d', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```
To finetune on your own dataset, you have to write a training loop or adapt [timm's training
script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
## How do I train this model?
You can follow the [timm recipe scripts](../training_script) for training a new model from scratch.
## Citation
```BibTeX
@misc{mahajan2018exploring,
title={Exploring the Limits of Weakly Supervised Pretraining},
author={Dhruv Mahajan and Ross Girshick and Vignesh Ramanathan and Kaiming He and Manohar Paluri and Yixuan Li and Ashwin Bharambe and Laurens van der Maaten},
year={2018},
eprint={1805.00932},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
Type: model-index
Collections:
- Name: IG ResNeXt
Paper:
Title: Exploring the Limits of Weakly Supervised Pretraining
URL: https://paperswithcode.com/paper/exploring-the-limits-of-weakly-supervised
Models:
- Name: ig_resnext101_32x16d
In Collection: IG ResNeXt
Metadata:
FLOPs: 46623691776
Parameters: 194030000
File Size: 777518664
Architecture:
- 1x1 Convolution
- Batch Normalization
- Convolution
- Global Average Pooling
- Grouped Convolution
- Max Pooling
- ReLU
- ResNeXt Block
- Residual Connection
- Softmax
Tasks:
- Image Classification
Training Techniques:
- Nesterov Accelerated Gradient
- Weight Decay
Training Data:
- IG-3.5B-17k
- ImageNet
Training Resources: 336x GPUs
ID: ig_resnext101_32x16d
Epochs: 100
Layers: 101
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 8064
Image Size: '224'
Weight Decay: 0.001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L874
Weights: https://download.pytorch.org/models/ig_resnext101_32x16-c6f796b0.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 84.16%
Top 5 Accuracy: 97.19%
- Name: ig_resnext101_32x32d
In Collection: IG ResNeXt
Metadata:
FLOPs: 112225170432
Parameters: 468530000
File Size: 1876573776
Architecture:
- 1x1 Convolution
- Batch Normalization
- Convolution
- Global Average Pooling
- Grouped Convolution
- Max Pooling
- ReLU
- ResNeXt Block
- Residual Connection
- Softmax
Tasks:
- Image Classification
Training Techniques:
- Nesterov Accelerated Gradient
- Weight Decay
Training Data:
- IG-3.5B-17k
- ImageNet
Training Resources: 336x GPUs
ID: ig_resnext101_32x32d
Epochs: 100
Layers: 101
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 8064
Image Size: '224'
Weight Decay: 0.001
Interpolation: bilinear
Minibatch Size: 8064
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L885
Weights: https://download.pytorch.org/models/ig_resnext101_32x32-e4b90b00.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 85.09%
Top 5 Accuracy: 97.44%
- Name: ig_resnext101_32x48d
In Collection: IG ResNeXt
Metadata:
FLOPs: 197446554624
Parameters: 828410000
File Size: 3317136976
Architecture:
- 1x1 Convolution
- Batch Normalization
- Convolution
- Global Average Pooling
- Grouped Convolution
- Max Pooling
- ReLU
- ResNeXt Block
- Residual Connection
- Softmax
Tasks:
- Image Classification
Training Techniques:
- Nesterov Accelerated Gradient
- Weight Decay
Training Data:
- IG-3.5B-17k
- ImageNet
Training Resources: 336x GPUs
ID: ig_resnext101_32x48d
Epochs: 100
Layers: 101
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 8064
Image Size: '224'
Weight Decay: 0.001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L896
Weights: https://download.pytorch.org/models/ig_resnext101_32x48-3e41cc8a.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 85.42%
Top 5 Accuracy: 97.58%
- Name: ig_resnext101_32x8d
In Collection: IG ResNeXt
Metadata:
FLOPs: 21180417024
Parameters: 88790000
File Size: 356056638
Architecture:
- 1x1 Convolution
- Batch Normalization
- Convolution
- Global Average Pooling
- Grouped Convolution
- Max Pooling
- ReLU
- ResNeXt Block
- Residual Connection
- Softmax
Tasks:
- Image Classification
Training Techniques:
- Nesterov Accelerated Gradient
- Weight Decay
Training Data:
- IG-3.5B-17k
- ImageNet
Training Resources: 336x GPUs
ID: ig_resnext101_32x8d
Epochs: 100
Layers: 101
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 8064
Image Size: '224'
Weight Decay: 0.001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L863
Weights: https://download.pytorch.org/models/ig_resnext101_32x8-c38310e5.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 82.7%
Top 5 Accuracy: 96.64%
-->
# Res2Net
**Res2Net** is an image model that employs a variation on bottleneck residual blocks, [Res2Net Blocks](https://paperswithcode.com/method/res2net-block). The motivation is to represent features at multiple scales. This is achieved through a novel building block for CNNs that constructs hierarchical residual-like connections within a single residual block. This represents multi-scale features at a granular level and increases the range of receptive fields for each network layer.
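The hierarchical split can be sketched in a few lines of PyTorch (a simplified illustration with `scale=4`, not timm's exact implementation):
```py
>>> import torch
>>> import torch.nn as nn
>>> class TinyRes2NetSplit(nn.Module):
...     """Hierarchical residual-like connections inside a single block."""
...     def __init__(self, width=26, scale=4):
...         super().__init__()
...         self.scale = scale
...         # one 3x3 conv per split; the first split passes through unchanged
...         self.convs = nn.ModuleList(
...             nn.Conv2d(width, width, 3, padding=1, bias=False) for _ in range(scale - 1))
...     def forward(self, x):
...         splits = torch.chunk(x, self.scale, dim=1)
...         out = [splits[0]]
...         y = None
...         for i, conv in enumerate(self.convs):
...             y = conv(splits[i + 1] if y is None else splits[i + 1] + y)
...             out.append(y)  # each split sees the outputs of all previous splits
...         return torch.cat(out, dim=1)
>>> TinyRes2NetSplit()(torch.randn(1, 104, 14, 14)).shape
torch.Size([1, 104, 14, 14])
```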
## How do I use this model on an image?
To load a pretrained model:
```py
>>> import timm
>>> model = timm.create_model('res2net101_26w_4s', pretrained=True)
>>> model.eval()
```
To load and preprocess the image:
```py
>>> import urllib
>>> from PIL import Image
>>> from timm.data import resolve_data_config
>>> from timm.data.transforms_factory import create_transform
>>> config = resolve_data_config({}, model=model)
>>> transform = create_transform(**config)
>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
>>> urllib.request.urlretrieve(url, filename)
>>> img = Image.open(filename).convert('RGB')
>>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```
To get the model predictions:
```py
>>> import torch
>>> with torch.no_grad():
... out = model(tensor)
>>> probabilities = torch.nn.functional.softmax(out[0], dim=0)
>>> print(probabilities.shape)
>>> # prints: torch.Size([1000])
```
To get the class names of the top-5 predictions:
```py
>>> # Get imagenet class mappings
>>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
>>> urllib.request.urlretrieve(url, filename)
>>> with open("imagenet_classes.txt", "r") as f:
... categories = [s.strip() for s in f.readlines()]
>>> # Print top categories per image
>>> top5_prob, top5_catid = torch.topk(probabilities, 5)
>>> for i in range(top5_prob.size(0)):
... print(categories[top5_catid[i]], top5_prob[i].item())
>>> # prints class names and probabilities like:
>>> # Samoyed 0.6425196528434753
>>> # Pomeranian 0.04062102362513542
>>> # keeshond 0.03186424449086189
>>> # white wolf 0.01739676296710968
>>> # Eskimo dog 0.011717947199940681
```
Replace the model name with the variant you want to use, e.g. `res2net101_26w_4s`. You can find the IDs in the model summaries at the top of this page.
To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just changing the name of the model you want to use.
## How do I finetune this model?
You can finetune any of the pre-trained models just by changing the classifier (the last layer).
```py
>>> model = timm.create_model('res2net101_26w_4s', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```
To finetune on your own dataset, you have to write a training loop or adapt [timm's training
script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
## How do I train this model?
You can follow the [timm recipe scripts](../training_script) for training a new model from scratch.
## Citation
```BibTeX
@article{Gao_2021,
title={Res2Net: A New Multi-Scale Backbone Architecture},
volume={43},
ISSN={1939-3539},
url={http://dx.doi.org/10.1109/TPAMI.2019.2938758},
DOI={10.1109/tpami.2019.2938758},
number={2},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
publisher={Institute of Electrical and Electronics Engineers (IEEE)},
author={Gao, Shang-Hua and Cheng, Ming-Ming and Zhao, Kai and Zhang, Xin-Yu and Yang, Ming-Hsuan and Torr, Philip},
year={2021},
month={Feb},
pages={652–662}
}
```
<!--
Type: model-index
Collections:
- Name: Res2Net
Paper:
Title: 'Res2Net: A New Multi-scale Backbone Architecture'
URL: https://paperswithcode.com/paper/res2net-a-new-multi-scale-backbone
Models:
- Name: res2net101_26w_4s
In Collection: Res2Net
Metadata:
FLOPs: 10415881200
Parameters: 45210000
File Size: 181456059
Architecture:
- Batch Normalization
- Convolution
- Global Average Pooling
- ReLU
- Res2Net Block
Tasks:
- Image Classification
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 4x Titan Xp GPUs
ID: res2net101_26w_4s
LR: 0.1
Epochs: 100
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 256
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L152
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net101_26w_4s-02a759a1.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 79.19%
Top 5 Accuracy: 94.43%
- Name: res2net50_14w_8s
In Collection: Res2Net
Metadata:
FLOPs: 5403546768
Parameters: 25060000
File Size: 100638543
Architecture:
- Batch Normalization
- Convolution
- Global Average Pooling
- ReLU
- Res2Net Block
Tasks:
- Image Classification
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 4x Titan Xp GPUs
ID: res2net50_14w_8s
LR: 0.1
Epochs: 100
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 256
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L196
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_14w_8s-6527dddc.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 78.14%
Top 5 Accuracy: 93.86%
- Name: res2net50_26w_4s
In Collection: Res2Net
Metadata:
FLOPs: 5499974064
Parameters: 25700000
File Size: 103110087
Architecture:
- Batch Normalization
- Convolution
- Global Average Pooling
- ReLU
- Res2Net Block
Tasks:
- Image Classification
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 4x Titan Xp GPUs
ID: res2net50_26w_4s
LR: 0.1
Epochs: 100
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 256
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L141
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_26w_4s-06e79181.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 77.99%
Top 5 Accuracy: 93.85%
- Name: res2net50_26w_6s
In Collection: Res2Net
Metadata:
FLOPs: 8130156528
Parameters: 37050000
File Size: 148603239
Architecture:
- Batch Normalization
- Convolution
- Global Average Pooling
- ReLU
- Res2Net Block
Tasks:
- Image Classification
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 4x Titan Xp GPUs
ID: res2net50_26w_6s
LR: 0.1
Epochs: 100
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 256
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L163
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_26w_6s-19041792.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 78.57%
Top 5 Accuracy: 94.12%
- Name: res2net50_26w_8s
In Collection: Res2Net
Metadata:
FLOPs: 10760338992
Parameters: 48400000
File Size: 194085165
Architecture:
- Batch Normalization
- Convolution
- Global Average Pooling
- ReLU
- Res2Net Block
Tasks:
- Image Classification
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 4x Titan Xp GPUs
ID: res2net50_26w_8s
LR: 0.1
Epochs: 100
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 256
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L174
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_26w_8s-2c7c9f12.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 79.19%
Top 5 Accuracy: 94.37%
- Name: res2net50_48w_2s
In Collection: Res2Net
Metadata:
FLOPs: 5375291520
Parameters: 25290000
File Size: 101421406
Architecture:
- Batch Normalization
- Convolution
- Global Average Pooling
- ReLU
- Res2Net Block
Tasks:
- Image Classification
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 4x Titan Xp GPUs
ID: res2net50_48w_2s
LR: 0.1
Epochs: 100
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 256
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L185
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_48w_2s-afed724a.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 77.53%
Top 5 Accuracy: 93.56%
-->
# (Tensorflow) EfficientNet CondConv
**EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly scales network width, depth, and resolution with a set of fixed scaling coefficients. For example, if we want to use \\( 2^N \\) times more computational resources, then we can simply increase the network depth by \\( \alpha ^ N \\), width by \\( \beta ^ N \\), and image size by \\( \gamma ^ N \\), where \\( \alpha, \beta, \gamma \\) are constant coefficients determined by a small grid search on the original small model. EfficientNet uses a compound coefficient \\( \phi \\) to uniformly scale network width, depth, and resolution in a principled way.
The compound scaling method is justified by the intuition that if the input image is bigger, then the network needs more layers to increase the receptive field and more channels to capture more fine-grained patterns on the bigger image.
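As a toy illustration of the scaling arithmetic, using the coefficients \\( \alpha=1.2, \beta=1.1, \gamma=1.15 \\) reported in the EfficientNet paper:
```py
>>> alpha, beta, gamma = 1.2, 1.1, 1.15  # coefficients from the EfficientNet grid search
>>> phi = 2  # compound coefficient: roughly 2**phi times more compute than B0
>>> depth_mult, width_mult, res_mult = alpha ** phi, beta ** phi, gamma ** phi
>>> round(depth_mult, 3), round(width_mult, 3), round(res_mult, 3)
(1.44, 1.21, 1.322)
```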
The base EfficientNet-B0 network is based on the inverted bottleneck residual blocks of [MobileNetV2](https://paperswithcode.com/method/mobilenetv2), in addition to [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block).
This collection of models amends EfficientNet by adding [CondConv](https://paperswithcode.com/method/condconv) convolutions.
The weights from this model were ported from [Tensorflow/TPU](https://github.com/tensorflow/tpu).
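Conceptually, a CondConv layer derives per-example routing weights from globally pooled features and combines several expert kernels into a single kernel for each example. Below is a minimal sketch (simplified from the paper, not timm's actual implementation):
```py
>>> import torch
>>> import torch.nn as nn
>>> import torch.nn.functional as F
>>> class TinyCondConv2d(nn.Module):
...     def __init__(self, in_chs, out_chs, kernel_size=3, num_experts=4):
...         super().__init__()
...         self.padding = kernel_size // 2
...         self.routing = nn.Linear(in_chs, num_experts)
...         self.weight = nn.Parameter(
...             0.01 * torch.randn(num_experts, out_chs, in_chs, kernel_size, kernel_size))
...     def forward(self, x):
...         b, c, h, w = x.shape
...         # per-example routing weights from globally pooled features
...         r = torch.sigmoid(self.routing(x.mean(dim=(2, 3))))  # (B, num_experts)
...         # mix the expert kernels into one kernel per example
...         kernel = torch.einsum('be,eoihw->boihw', r, self.weight)
...         # batch trick: run all examples at once as a grouped convolution
...         out = F.conv2d(x.reshape(1, b * c, h, w),
...                        kernel.reshape(-1, c, *kernel.shape[-2:]),
...                        padding=self.padding, groups=b)
...         return out.reshape(b, -1, *out.shape[-2:])
>>> TinyCondConv2d(8, 16)(torch.randn(2, 8, 32, 32)).shape
torch.Size([2, 16, 32, 32])
```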
## How do I use this model on an image?
To load a pretrained model:
```py
>>> import timm
>>> model = timm.create_model('tf_efficientnet_cc_b0_4e', pretrained=True)
>>> model.eval()
```
To load and preprocess the image:
```py
>>> import urllib
>>> from PIL import Image
>>> from timm.data import resolve_data_config
>>> from timm.data.transforms_factory import create_transform
>>> config = resolve_data_config({}, model=model)
>>> transform = create_transform(**config)
>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
>>> urllib.request.urlretrieve(url, filename)
>>> img = Image.open(filename).convert('RGB')
>>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```
To get the model predictions:
```py
>>> import torch
>>> with torch.no_grad():
... out = model(tensor)
>>> probabilities = torch.nn.functional.softmax(out[0], dim=0)
>>> print(probabilities.shape)
>>> # prints: torch.Size([1000])
```
To get the class names of the top-5 predictions:
```py
>>> # Get imagenet class mappings
>>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
>>> urllib.request.urlretrieve(url, filename)
>>> with open("imagenet_classes.txt", "r") as f:
... categories = [s.strip() for s in f.readlines()]
>>> # Print top categories per image
>>> top5_prob, top5_catid = torch.topk(probabilities, 5)
>>> for i in range(top5_prob.size(0)):
... print(categories[top5_catid[i]], top5_prob[i].item())
>>> # prints class names and probabilities like:
>>> # Samoyed 0.6425196528434753
>>> # Pomeranian 0.04062102362513542
>>> # keeshond 0.03186424449086189
>>> # white wolf 0.01739676296710968
>>> # Eskimo dog 0.011717947199940681
```
Replace the model name with the variant you want to use, e.g. `tf_efficientnet_cc_b0_4e`. You can find the IDs in the model summaries at the top of this page.
To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just changing the name of the model you want to use.
## How do I finetune this model?
You can finetune any of the pre-trained models just by changing the classifier (the last layer).
```py
>>> model = timm.create_model('tf_efficientnet_cc_b0_4e', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```
To finetune on your own dataset, you have to write a training loop or adapt [timm's training
script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
## How do I train this model?
You can follow the [timm recipe scripts](../training_script) for training a new model from scratch.
## Citation
```BibTeX
@article{DBLP:journals/corr/abs-1904-04971,
author = {Brandon Yang and
Gabriel Bender and
Quoc V. Le and
Jiquan Ngiam},
title = {Soft Conditional Computation},
journal = {CoRR},
volume = {abs/1904.04971},
year = {2019},
url = {http://arxiv.org/abs/1904.04971},
archivePrefix = {arXiv},
eprint = {1904.04971},
timestamp = {Thu, 25 Apr 2019 13:55:01 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1904-04971.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<!--
Type: model-index
Collections:
- Name: TF EfficientNet CondConv
Paper:
Title: 'CondConv: Conditionally Parameterized Convolutions for Efficient Inference'
URL: https://paperswithcode.com/paper/soft-conditional-computation
Models:
- Name: tf_efficientnet_cc_b0_4e
In Collection: TF EfficientNet CondConv
Metadata:
FLOPs: 224153788
Parameters: 13310000
File Size: 53490940
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- CondConv
- Convolution
- Dense Connections
- Dropout
- Inverted Residual Block
- Squeeze-and-Excitation Block
- Swish
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- Label Smoothing
- RMSProp
- Stochastic Depth
- Weight Decay
Training Data:
- ImageNet
ID: tf_efficientnet_cc_b0_4e
LR: 0.256
Epochs: 350
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 2048
Image Size: '224'
Weight Decay: 1.0e-05
Interpolation: bicubic
RMSProp Decay: 0.9
Label Smoothing: 0.1
BatchNorm Momentum: 0.99
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1561
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_cc_b0_4e-4362b6b2.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 77.32%
Top 5 Accuracy: 93.32%
- Name: tf_efficientnet_cc_b0_8e
In Collection: TF EfficientNet CondConv
Metadata:
FLOPs: 224158524
Parameters: 24010000
File Size: 96287616
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- CondConv
- Convolution
- Dense Connections
- Dropout
- Inverted Residual Block
- Squeeze-and-Excitation Block
- Swish
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- Label Smoothing
- RMSProp
- Stochastic Depth
- Weight Decay
Training Data:
- ImageNet
ID: tf_efficientnet_cc_b0_8e
LR: 0.256
Epochs: 350
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 2048
Image Size: '224'
Weight Decay: 1.0e-05
Interpolation: bicubic
RMSProp Decay: 0.9
Label Smoothing: 0.1
BatchNorm Momentum: 0.99
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1572
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_cc_b0_8e-66184a25.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 77.91%
Top 5 Accuracy: 93.65%
- Name: tf_efficientnet_cc_b1_8e
In Collection: TF EfficientNet CondConv
Metadata:
FLOPs: 370427824
Parameters: 39720000
File Size: 159206198
Architecture:
- 1x1 Convolution
- Average Pooling
- Batch Normalization
- CondConv
- Convolution
- Dense Connections
- Dropout
- Inverted Residual Block
- Squeeze-and-Excitation Block
- Swish
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- Label Smoothing
- RMSProp
- Stochastic Depth
- Weight Decay
Training Data:
- ImageNet
ID: tf_efficientnet_cc_b1_8e
LR: 0.256
Epochs: 350
Crop Pct: '0.882'
Momentum: 0.9
Batch Size: 2048
Image Size: '240'
Weight Decay: 1.0e-05
Interpolation: bicubic
RMSProp Decay: 0.9
Label Smoothing: 0.1
BatchNorm Momentum: 0.99
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1584
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_cc_b1_8e-f7c79ae1.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 79.33%
Top 5 Accuracy: 94.37%
-->
""" Quick n Simple Image Folder, Tarfile based DataSet
Hacked together by / Copyright 2019, Ross Wightman
"""
import io
import logging
from typing import Optional
import torch
import torch.utils.data as data
from PIL import Image
from .readers import create_reader
_logger = logging.getLogger(__name__)
_ERROR_RETRY = 50
class ImageDataset(data.Dataset):
def __init__(
self,
root,
reader=None,
split='train',
class_map=None,
load_bytes=False,
input_img_mode='RGB',
transform=None,
target_transform=None,
**kwargs,
):
if reader is None or isinstance(reader, str):
reader = create_reader(
reader or '',
root=root,
split=split,
class_map=class_map,
**kwargs,
)
self.reader = reader
self.load_bytes = load_bytes
self.input_img_mode = input_img_mode
self.transform = transform
self.target_transform = target_transform
self._consecutive_errors = 0
def __getitem__(self, index):
img, target = self.reader[index]
try:
img = img.read() if self.load_bytes else Image.open(img)
except Exception as e:
_logger.warning(f'Skipped sample (index {index}, file {self.reader.filename(index)}). {str(e)}')
self._consecutive_errors += 1
if self._consecutive_errors < _ERROR_RETRY:
return self.__getitem__((index + 1) % len(self.reader))
else:
raise e
self._consecutive_errors = 0
if self.input_img_mode and not self.load_bytes:
img = img.convert(self.input_img_mode)
if self.transform is not None:
img = self.transform(img)
if target is None:
target = -1
elif self.target_transform is not None:
target = self.target_transform(target)
return img, target
def __len__(self):
return len(self.reader)
def filename(self, index, basename=False, absolute=False):
return self.reader.filename(index, basename, absolute)
def filenames(self, basename=False, absolute=False):
return self.reader.filenames(basename, absolute)
class IterableImageDataset(data.IterableDataset):
def __init__(
self,
root,
reader=None,
split='train',
class_map=None,
is_training=False,
batch_size=1,
num_samples=None,
seed=42,
repeats=0,
download=False,
input_img_mode='RGB',
input_key=None,
target_key=None,
transform=None,
target_transform=None,
max_steps=None,
**kwargs,
):
assert reader is not None
if isinstance(reader, str):
self.reader = create_reader(
reader,
root=root,
split=split,
class_map=class_map,
is_training=is_training,
batch_size=batch_size,
num_samples=num_samples,
seed=seed,
repeats=repeats,
download=download,
input_img_mode=input_img_mode,
input_key=input_key,
target_key=target_key,
max_steps=max_steps,
**kwargs,
)
else:
self.reader = reader
self.transform = transform
self.target_transform = target_transform
self._consecutive_errors = 0
def __iter__(self):
for img, target in self.reader:
if self.transform is not None:
img = self.transform(img)
if self.target_transform is not None:
target = self.target_transform(target)
yield img, target
def __len__(self):
if hasattr(self.reader, '__len__'):
return len(self.reader)
else:
return 0
def set_epoch(self, count):
# TFDS and WDS need external epoch count for deterministic cross process shuffle
if hasattr(self.reader, 'set_epoch'):
self.reader.set_epoch(count)
def set_loader_cfg(
self,
num_workers: Optional[int] = None,
):
# TFDS and WDS readers need # workers for correct # samples estimate before loader processes created
if hasattr(self.reader, 'set_loader_cfg'):
self.reader.set_loader_cfg(num_workers=num_workers)
def filename(self, index, basename=False, absolute=False):
assert False, 'Filename lookup by index not supported, use filenames().'
def filenames(self, basename=False, absolute=False):
return self.reader.filenames(basename, absolute)
class AugMixDataset(torch.utils.data.Dataset):
"""Dataset wrapper to perform AugMix or other clean/augmentation mixes"""
def __init__(self, dataset, num_splits=2):
self.augmentation = None
self.normalize = None
self.dataset = dataset
if self.dataset.transform is not None:
self._set_transforms(self.dataset.transform)
self.num_splits = num_splits
def _set_transforms(self, x):
assert isinstance(x, (list, tuple)) and len(x) == 3, 'Expecting a tuple/list of 3 transforms'
self.dataset.transform = x[0]
self.augmentation = x[1]
self.normalize = x[2]
@property
def transform(self):
return self.dataset.transform
@transform.setter
def transform(self, x):
self._set_transforms(x)
def _normalize(self, x):
return x if self.normalize is None else self.normalize(x)
def __getitem__(self, i):
x, y = self.dataset[i] # all splits share the same dataset base transform
x_list = [self._normalize(x)] # first split only normalizes (this is the 'clean' split)
# run the full augmentation on the remaining splits
for _ in range(self.num_splits - 1):
x_list.append(self._normalize(self.augmentation(x)))
return tuple(x_list), y
def __len__(self):
return len(self.dataset)
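# Usage sketch (illustrative, not part of the module API): AugMixDataset expects
# `transform` to be set as a 3-tuple of (base, augmentation, normalize) transforms,
# e.g. as produced elsewhere in timm by create_transform(..., separate=True):
#
#   dataset = ImageDataset('/path/to/imagenet/train')  # hypothetical path
#   dataset.transform = (base_tf, augmix_tf, normalize_tf)  # hypothetical transforms
#   dataset = AugMixDataset(dataset, num_splits=3)
#   splits, target = dataset[0]  # tuple of num_splits tensors (clean + augmented)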
""" A dataset reader that reads tarfile based datasets
This reader can extract image samples from:
* a single tar of image files
* a folder of multiple tarfiles containing imagefiles
* a tar of tars containing image files
Labels are based on the combined folder and/or tar name structure.
Hacked together by / Copyright 2020 Ross Wightman
"""
import logging
import os
import pickle
import tarfile
from glob import glob
from typing import List, Tuple, Dict, Set, Optional, Union
import numpy as np
from timm.utils.misc import natural_key
from .class_map import load_class_map
from .img_extensions import get_img_extensions
from .reader import Reader
_logger = logging.getLogger(__name__)
CACHE_FILENAME_SUFFIX = '_tarinfos.pickle'
class TarState:
    def __init__(self, tf: Optional[tarfile.TarFile] = None, ti: Optional[tarfile.TarInfo] = None):
        self.tf: Optional[tarfile.TarFile] = tf
        self.ti: Optional[tarfile.TarInfo] = ti
self.children: Dict[str, TarState] = {} # child states (tars within tars)
def reset(self):
self.tf = None
def _extract_tarinfo(tf: tarfile.TarFile, parent_info: Dict, extensions: Set[str]):
sample_count = 0
for i, ti in enumerate(tf):
if not ti.isfile():
continue
dirname, basename = os.path.split(ti.path)
name, ext = os.path.splitext(basename)
ext = ext.lower()
if ext == '.tar':
with tarfile.open(fileobj=tf.extractfile(ti), mode='r|') as ctf:
child_info = dict(
name=ti.name, path=os.path.join(parent_info['path'], name), ti=ti, children=[], samples=[])
sample_count += _extract_tarinfo(ctf, child_info, extensions=extensions)
_logger.debug(f'{i}/?. Extracted child tarinfos from {ti.name}. {len(child_info["samples"])} images.')
parent_info['children'].append(child_info)
elif ext in extensions:
parent_info['samples'].append(ti)
sample_count += 1
return sample_count
def extract_tarinfos(
root,
class_name_to_idx: Optional[Dict] = None,
cache_tarinfo: Optional[bool] = None,
extensions: Optional[Union[List, Tuple, Set]] = None,
sort: bool = True
):
extensions = get_img_extensions(as_set=True) if not extensions else set(extensions)
root_is_tar = False
if os.path.isfile(root):
assert os.path.splitext(root)[-1].lower() == '.tar'
tar_filenames = [root]
root, root_name = os.path.split(root)
root_name = os.path.splitext(root_name)[0]
root_is_tar = True
else:
root_name = root.strip(os.path.sep).split(os.path.sep)[-1]
tar_filenames = glob(os.path.join(root, '*.tar'), recursive=True)
num_tars = len(tar_filenames)
tar_bytes = sum([os.path.getsize(f) for f in tar_filenames])
assert num_tars, f'No .tar files found at specified path ({root}).'
_logger.info(f'Scanning {tar_bytes/1024**2:.2f}MB of tar files...')
info = dict(tartrees=[])
cache_path = ''
if cache_tarinfo is None:
cache_tarinfo = True if tar_bytes > 10*1024**3 else False # FIXME magic number, 10GB
if cache_tarinfo:
cache_filename = '_' + root_name + CACHE_FILENAME_SUFFIX
cache_path = os.path.join(root, cache_filename)
if os.path.exists(cache_path):
_logger.info(f'Reading tar info from cache file {cache_path}.')
with open(cache_path, 'rb') as pf:
info = pickle.load(pf)
assert len(info['tartrees']) == num_tars, "Cached tartree len doesn't match number of tarfiles"
else:
for i, fn in enumerate(tar_filenames):
path = '' if root_is_tar else os.path.splitext(os.path.basename(fn))[0]
with tarfile.open(fn, mode='r|') as tf: # tarinfo scans done in streaming mode
parent_info = dict(name=os.path.relpath(fn, root), path=path, ti=None, children=[], samples=[])
num_samples = _extract_tarinfo(tf, parent_info, extensions=extensions)
num_children = len(parent_info["children"])
_logger.debug(
f'{i}/{num_tars}. Extracted tarinfos from {fn}. {num_children} children, {num_samples} samples.')
info['tartrees'].append(parent_info)
if cache_path:
_logger.info(f'Writing tar info to cache file {cache_path}.')
with open(cache_path, 'wb') as pf:
pickle.dump(info, pf)
samples = []
labels = []
build_class_map = False
if class_name_to_idx is None:
build_class_map = True
# Flatten tartree info into lists of samples and targets w/ targets based on label id via
# class map arg or from unique paths.
# NOTE: currently only flattening up to two-levels, filesystem .tars and then one level of sub-tar children
# this covers my current use cases and keeps things a little easier to test for now.
tarfiles = []
def _label_from_paths(*path, leaf_only=True):
path = os.path.join(*path).strip(os.path.sep)
return path.split(os.path.sep)[-1] if leaf_only else path.replace(os.path.sep, '_')
def _add_samples(info, fn):
added = 0
for s in info['samples']:
label = _label_from_paths(info['path'], os.path.dirname(s.path))
if not build_class_map and label not in class_name_to_idx:
continue
samples.append((s, fn, info['ti']))
labels.append(label)
added += 1
return added
_logger.info(f'Collecting samples and building tar states.')
for parent_info in info['tartrees']:
# if tartree has children, we assume all samples are at the child level
tar_name = None if root_is_tar else parent_info['name']
tar_state = TarState()
parent_added = 0
for child_info in parent_info['children']:
child_added = _add_samples(child_info, fn=tar_name)
if child_added:
tar_state.children[child_info['name']] = TarState(ti=child_info['ti'])
parent_added += child_added
parent_added += _add_samples(parent_info, fn=tar_name)
if parent_added:
tarfiles.append((tar_name, tar_state))
del info
if build_class_map:
# build class index
sorted_labels = list(sorted(set(labels), key=natural_key))
class_name_to_idx = {c: idx for idx, c in enumerate(sorted_labels)}
_logger.info(f'Mapping targets and sorting samples.')
samples_and_targets = [(s, class_name_to_idx[l]) for s, l in zip(samples, labels) if l in class_name_to_idx]
if sort:
samples_and_targets = sorted(samples_and_targets, key=lambda k: natural_key(k[0][0].path))
samples, targets = zip(*samples_and_targets)
samples = np.array(samples)
targets = np.array(targets)
_logger.info(f'Finished processing {len(samples)} samples across {len(tarfiles)} tar files.')
return samples, targets, class_name_to_idx, tarfiles
class ReaderImageInTar(Reader):
""" Multi-tarfile dataset reader where there is one .tar file per class
"""
def __init__(self, root, class_map='', cache_tarfiles=True, cache_tarinfo=None):
super().__init__()
class_name_to_idx = None
if class_map:
class_name_to_idx = load_class_map(class_map, root)
self.root = root
self.samples, self.targets, self.class_name_to_idx, tarfiles = extract_tarinfos(
self.root,
class_name_to_idx=class_name_to_idx,
cache_tarinfo=cache_tarinfo
)
self.class_idx_to_name = {v: k for k, v in self.class_name_to_idx.items()}
if len(tarfiles) == 1 and tarfiles[0][0] is None:
self.root_is_tar = True
self.tar_state = tarfiles[0][1]
else:
self.root_is_tar = False
self.tar_state = dict(tarfiles)
self.cache_tarfiles = cache_tarfiles
def __len__(self):
return len(self.samples)
def __getitem__(self, index):
sample = self.samples[index]
target = self.targets[index]
sample_ti, parent_fn, child_ti = sample
parent_abs = os.path.join(self.root, parent_fn) if parent_fn else self.root
tf = None
cache_state = None
if self.cache_tarfiles:
cache_state = self.tar_state if self.root_is_tar else self.tar_state[parent_fn]
tf = cache_state.tf
if tf is None:
tf = tarfile.open(parent_abs)
if self.cache_tarfiles:
cache_state.tf = tf
if child_ti is not None:
ctf = cache_state.children[child_ti.name].tf if self.cache_tarfiles else None
if ctf is None:
ctf = tarfile.open(fileobj=tf.extractfile(child_ti))
if self.cache_tarfiles:
cache_state.children[child_ti.name].tf = ctf
tf = ctf
return tf.extractfile(sample_ti), target
def _filename(self, index, basename=False, absolute=False):
filename = self.samples[index][0].name
if basename:
filename = os.path.basename(filename)
return filename
"""
BlurPool layer inspired by
- Kornia's Max_BlurPool2d
- Making Convolutional Networks Shift-Invariant Again :cite:`zhang2019shiftinvar`
Hacked together by Chris Ha and Ross Wightman
"""
from functools import partial
from typing import Optional, Type
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from .padding import get_padding
from .typing import LayerType
class BlurPool2d(nn.Module):
r"""Creates a module that computes blurs and downsample a given feature map.
See :cite:`zhang2019shiftinvar` for more details.
Corresponds to the Downsample class, which does blurring and subsampling
Args:
channels = Number of input channels
filt_size (int): binomial filter size for blurring. currently supports 3 (default) and 5.
stride (int): downsampling filter stride
Returns:
torch.Tensor: the transformed tensor.
"""
def __init__(
self,
channels: Optional[int] = None,
filt_size: int = 3,
stride: int = 2,
pad_mode: str = 'reflect',
) -> None:
super(BlurPool2d, self).__init__()
assert filt_size > 1
self.channels = channels
self.filt_size = filt_size
self.stride = stride
self.pad_mode = pad_mode
self.padding = [get_padding(filt_size, stride, dilation=1)] * 4
coeffs = torch.tensor((np.poly1d((0.5, 0.5)) ** (self.filt_size - 1)).coeffs.astype(np.float32))
blur_filter = (coeffs[:, None] * coeffs[None, :])[None, None, :, :]
if channels is not None:
blur_filter = blur_filter.repeat(self.channels, 1, 1, 1)
self.register_buffer('filt', blur_filter, persistent=False)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = F.pad(x, self.padding, mode=self.pad_mode)
if self.channels is None:
channels = x.shape[1]
weight = self.filt.expand(channels, 1, self.filt_size, self.filt_size)
else:
channels = self.channels
weight = self.filt
return F.conv2d(x, weight, stride=self.stride, groups=channels)
def create_aa(
aa_layer: LayerType,
channels: Optional[int] = None,
stride: int = 2,
enable: bool = True,
noop: Optional[Type[nn.Module]] = nn.Identity
) -> nn.Module:
""" Anti-aliasing """
if not aa_layer or not enable:
return noop() if noop is not None else None
if isinstance(aa_layer, str):
aa_layer = aa_layer.lower().replace('_', '').replace('-', '')
if aa_layer == 'avg' or aa_layer == 'avgpool':
aa_layer = nn.AvgPool2d
elif aa_layer == 'blur' or aa_layer == 'blurpool':
aa_layer = BlurPool2d
elif aa_layer == 'blurpc':
aa_layer = partial(BlurPool2d, pad_mode='constant')
else:
assert False, f"Unknown anti-aliasing layer ({aa_layer})."
    try:
        return aa_layer(channels=channels, stride=stride)
    except TypeError:
        # fall back for anti-aliasing layers that only accept a positional stride arg
        return aa_layer(stride)
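# Usage sketch (illustrative): BlurPool2d replaces a strided pool/conv step to
# reduce aliasing when downsampling:
#
#   pool = BlurPool2d(channels=64, filt_size=3, stride=2)
#   y = pool(torch.randn(1, 64, 56, 56))  # -> torch.Size([1, 64, 28, 28])
#
# create_aa('blur', channels=64, stride=2) builds the same layer from a string spec.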
""" 'Fast' Normalization Functions
For GroupNorm and LayerNorm these functions bypass typical AMP upcast to float32.
Additionally, for LayerNorm, the APEX fused LN is used if available (which also does not upcast)
Hacked together by / Copyright 2022 Ross Wightman
"""
from typing import List, Optional
import torch
from torch.nn import functional as F
try:
from apex.normalization.fused_layer_norm import fused_layer_norm_affine
has_apex = True
except ImportError:
has_apex = False
try:
from apex.normalization.fused_layer_norm import fused_rms_norm_affine, fused_rms_norm
has_apex_rmsnorm = True
except ImportError:
has_apex_rmsnorm = False
has_torch_rms_norm = hasattr(F, 'rms_norm')
# fast (ie lower precision LN) can be disabled with this flag if issues crop up
_USE_FAST_NORM = False # defaulting to False for now
def get_autocast_dtype(device: str = 'cuda'):
try:
return torch.get_autocast_dtype(device)
except (AttributeError, TypeError):
# dispatch to older device specific fns, only covering cuda/cpu devices here
if device == 'cpu':
return torch.get_autocast_cpu_dtype()
else:
assert device == 'cuda'
return torch.get_autocast_gpu_dtype()
def is_autocast_enabled(device: str = 'cuda'):
try:
return torch.is_autocast_enabled(device)
except TypeError:
# dispatch to older device specific fns, only covering cuda/cpu devices here
if device == 'cpu':
return torch.is_autocast_cpu_enabled()
else:
assert device == 'cuda'
return torch.is_autocast_enabled() # defaults cuda (only cuda on older pytorch)
def is_fast_norm():
return _USE_FAST_NORM
def set_fast_norm(enable=True):
global _USE_FAST_NORM
_USE_FAST_NORM = enable
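# Usage sketch (illustrative): the fast (lower-precision) norm paths can be
# toggled globally before model creation:
#
#   from timm.layers.fast_norm import set_fast_norm, is_fast_norm
#   set_fast_norm(True)
#   assert is_fast_norm()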
def fast_group_norm(
x: torch.Tensor,
num_groups: int,
weight: Optional[torch.Tensor] = None,
bias: Optional[torch.Tensor] = None,
eps: float = 1e-5
) -> torch.Tensor:
if torch.jit.is_scripting():
# currently cannot use is_autocast_enabled within torchscript
return F.group_norm(x, num_groups, weight, bias, eps)
if is_autocast_enabled(x.device.type):
# normally native AMP casts GN inputs to float32
# here we use the low precision autocast dtype
dt = get_autocast_dtype(x.device.type)
x, weight, bias = x.to(dt), weight.to(dt), bias.to(dt) if bias is not None else None
with torch.amp.autocast(device_type=x.device.type, enabled=False):
return F.group_norm(x, num_groups, weight, bias, eps)
def fast_layer_norm(
x: torch.Tensor,
normalized_shape: List[int],
weight: Optional[torch.Tensor] = None,
bias: Optional[torch.Tensor] = None,
eps: float = 1e-5
) -> torch.Tensor:
if torch.jit.is_scripting():
# currently cannot use is_autocast_enabled within torchscript
return F.layer_norm(x, normalized_shape, weight, bias, eps)
if has_apex:
return fused_layer_norm_affine(x, weight, bias, normalized_shape, eps)
if is_autocast_enabled(x.device.type):
# normally native AMP casts LN inputs to float32
# apex LN does not, this is behaving like Apex
dt = get_autocast_dtype(x.device.type)
x, weight, bias = x.to(dt), weight.to(dt), bias.to(dt) if bias is not None else None
with torch.amp.autocast(device_type=x.device.type, enabled=False):
return F.layer_norm(x, normalized_shape, weight, bias, eps)
def rms_norm(
x: torch.Tensor,
normalized_shape: List[int],
weight: Optional[torch.Tensor] = None,
eps: float = 1e-5,
):
norm_ndim = len(normalized_shape)
v = x.pow(2)
if torch.jit.is_scripting():
# ndim = len(x.shape)
# dims = list(range(ndim - norm_ndim, ndim)) # this doesn't work on pytorch <= 1.13.x
# NOTE -ve dims cause torchscript to crash in some cases, out of options to work around
assert norm_ndim == 1
v = torch.mean(v, dim=-1).unsqueeze(-1) # ts crashes with -ve dim + keepdim=True
else:
dims = tuple(range(-1, -norm_ndim - 1, -1))
v = torch.mean(v, dim=dims, keepdim=True)
x = x * torch.rsqrt(v + eps)
if weight is not None:
x = x * weight
return x
def fast_rms_norm(
x: torch.Tensor,
normalized_shape: List[int],
weight: Optional[torch.Tensor] = None,
eps: float = 1e-5,
) -> torch.Tensor:
if torch.jit.is_scripting():
# this must be by itself, cannot merge with has_apex_rmsnorm
return rms_norm(x, normalized_shape, weight, eps)
if has_apex_rmsnorm:
if weight is None:
return fused_rms_norm(x, normalized_shape, eps)
else:
return fused_rms_norm_affine(x, weight, normalized_shape, eps)
if is_autocast_enabled(x.device.type):
# normally native AMP casts LN inputs to float32
# apex LN does not, this is behaving like Apex
dt = get_autocast_dtype(x.device.type)
        x, weight = x.to(dt), weight.to(dt) if weight is not None else None
with torch.amp.autocast(device_type=x.device.type, enabled=False):
if has_torch_rms_norm:
x = F.rms_norm(x, normalized_shape, weight, eps)
else:
x = rms_norm(x, normalized_shape, weight, eps)
return x
def simple_norm(
x: torch.Tensor,
normalized_shape: List[int],
weight: Optional[torch.Tensor] = None,
eps: float = 1e-5,
):
norm_ndim = len(normalized_shape)
if torch.jit.is_scripting():
# ndim = len(x.shape)
# dims = list(range(ndim - norm_ndim, ndim)) # this doesn't work on pytorch <= 1.13.x
# NOTE -ve dims cause torchscript to crash in some cases, out of options to work around
assert norm_ndim == 1
v = torch.var(x, dim=-1).unsqueeze(-1) # ts crashes with -ve dim + keepdim=True
else:
dims = tuple(range(-1, -norm_ndim - 1, -1))
v = torch.var(x, dim=dims, keepdim=True)
x = x * torch.rsqrt(v + eps)
if weight is not None:
x = x * weight
return x
def fast_simple_norm(
x: torch.Tensor,
normalized_shape: List[int],
weight: Optional[torch.Tensor] = None,
eps: float = 1e-5,
) -> torch.Tensor:
if torch.jit.is_scripting():
# this must be by itself, cannot merge with has_apex_rmsnorm
return simple_norm(x, normalized_shape, weight, eps)
if is_autocast_enabled(x.device.type):
# normally native AMP casts LN inputs to float32
# apex LN does not, this is behaving like Apex
dt = get_autocast_dtype(x.device.type)
        x, weight = x.to(dt), weight.to(dt) if weight is not None else None
with torch.amp.autocast(device_type=x.device.type, enabled=False):
x = simple_norm(x, normalized_shape, weight, eps)
return x
| pytorch-image-models/timm/layers/fast_norm.py/0 | {
"file_path": "pytorch-image-models/timm/layers/fast_norm.py",
"repo_id": "pytorch-image-models",
"token_count": 2881
} |
""" PyTorch Mixed Convolution
Paper: MixConv: Mixed Depthwise Convolutional Kernels (https://arxiv.org/abs/1907.09595)
Hacked together by / Copyright 2020 Ross Wightman
"""
import torch
from torch import nn as nn
from .conv2d_same import create_conv2d_pad
def _split_channels(num_chan, num_groups):
split = [num_chan // num_groups for _ in range(num_groups)]
split[0] += num_chan - sum(split)
return split
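# Example (illustrative): the remainder is folded into the first split so the
# groups always sum to num_chan, e.g. _split_channels(26, 3) -> [10, 8, 8].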
class MixedConv2d(nn.ModuleDict):
""" Mixed Grouped Convolution
Based on MDConv and GroupedConv in MixNet impl:
https://github.com/tensorflow/tpu/blob/master/models/official/mnasnet/mixnet/custom_layers.py
"""
def __init__(self, in_channels, out_channels, kernel_size=3,
stride=1, padding='', dilation=1, depthwise=False, **kwargs):
super(MixedConv2d, self).__init__()
kernel_size = kernel_size if isinstance(kernel_size, list) else [kernel_size]
num_groups = len(kernel_size)
in_splits = _split_channels(in_channels, num_groups)
out_splits = _split_channels(out_channels, num_groups)
self.in_channels = sum(in_splits)
self.out_channels = sum(out_splits)
for idx, (k, in_ch, out_ch) in enumerate(zip(kernel_size, in_splits, out_splits)):
conv_groups = in_ch if depthwise else 1
# use add_module to keep key space clean
self.add_module(
str(idx),
create_conv2d_pad(
in_ch, out_ch, k, stride=stride,
padding=padding, dilation=dilation, groups=conv_groups, **kwargs)
)
self.splits = in_splits
def forward(self, x):
x_split = torch.split(x, self.splits, 1)
x_out = [c(x_split[i]) for i, c in enumerate(self.values())]
x = torch.cat(x_out, 1)
return x
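# Usage sketch (illustrative, not part of the upstream module): channels are
# split across the per-kernel-size groups, preserving total in/out counts.
def _example_mixed_conv():  # hypothetical demo helper
    m = MixedConv2d(24, 48, kernel_size=[3, 5, 7], depthwise=True)
    y = m(torch.randn(2, 24, 32, 32))  # -> (2, 48, 32, 32)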
| pytorch-image-models/timm/layers/mixed_conv2d.py/0 | {
"file_path": "pytorch-image-models/timm/layers/mixed_conv2d.py",
"repo_id": "pytorch-image-models",
"token_count": 834
} |
""" Split Attention Conv2d (for ResNeSt Models)
Paper: `ResNeSt: Split-Attention Networks` - https://arxiv.org/abs/2004.08955
Adapted from original PyTorch impl at https://github.com/zhanghang1989/ResNeSt
Modified for torchscript compat, performance, and consistency with timm by Ross Wightman
"""
import torch
import torch.nn.functional as F
from torch import nn
from .helpers import make_divisible
class RadixSoftmax(nn.Module):
def __init__(self, radix, cardinality):
super(RadixSoftmax, self).__init__()
self.radix = radix
self.cardinality = cardinality
def forward(self, x):
batch = x.size(0)
if self.radix > 1:
x = x.view(batch, self.cardinality, self.radix, -1).transpose(1, 2)
x = F.softmax(x, dim=1)
x = x.reshape(batch, -1)
else:
x = torch.sigmoid(x)
return x
class SplitAttn(nn.Module):
"""Split-Attention (aka Splat)
"""
def __init__(self, in_channels, out_channels=None, kernel_size=3, stride=1, padding=None,
dilation=1, groups=1, bias=False, radix=2, rd_ratio=0.25, rd_channels=None, rd_divisor=8,
act_layer=nn.ReLU, norm_layer=None, drop_layer=None, **kwargs):
super(SplitAttn, self).__init__()
out_channels = out_channels or in_channels
self.radix = radix
mid_chs = out_channels * radix
if rd_channels is None:
attn_chs = make_divisible(in_channels * radix * rd_ratio, min_value=32, divisor=rd_divisor)
else:
attn_chs = rd_channels * radix
padding = kernel_size // 2 if padding is None else padding
self.conv = nn.Conv2d(
in_channels, mid_chs, kernel_size, stride, padding, dilation,
groups=groups * radix, bias=bias, **kwargs)
self.bn0 = norm_layer(mid_chs) if norm_layer else nn.Identity()
self.drop = drop_layer() if drop_layer is not None else nn.Identity()
self.act0 = act_layer(inplace=True)
self.fc1 = nn.Conv2d(out_channels, attn_chs, 1, groups=groups)
self.bn1 = norm_layer(attn_chs) if norm_layer else nn.Identity()
self.act1 = act_layer(inplace=True)
self.fc2 = nn.Conv2d(attn_chs, mid_chs, 1, groups=groups)
self.rsoftmax = RadixSoftmax(radix, groups)
def forward(self, x):
x = self.conv(x)
x = self.bn0(x)
x = self.drop(x)
x = self.act0(x)
B, RC, H, W = x.shape
if self.radix > 1:
x = x.reshape((B, self.radix, RC // self.radix, H, W))
x_gap = x.sum(dim=1)
else:
x_gap = x
x_gap = x_gap.mean((2, 3), keepdim=True)
x_gap = self.fc1(x_gap)
x_gap = self.bn1(x_gap)
x_gap = self.act1(x_gap)
x_attn = self.fc2(x_gap)
x_attn = self.rsoftmax(x_attn).view(B, -1, 1, 1)
if self.radix > 1:
out = (x * x_attn.reshape((B, self.radix, RC // self.radix, 1, 1))).sum(dim=1)
else:
out = x * x_attn
return out.contiguous()
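# Usage sketch (illustrative): with radix=2 the 3x3 conv emits 2x the output
# channels, which are softmax-reweighted and summed back to out_channels.
def _example_split_attn():  # hypothetical demo helper
    attn = SplitAttn(64, 64, radix=2, norm_layer=nn.BatchNorm2d)
    y = attn(torch.randn(2, 64, 16, 16))  # -> (2, 64, 16, 16)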
| pytorch-image-models/timm/layers/split_attn.py/0 | {
"file_path": "pytorch-image-models/timm/layers/split_attn.py",
"repo_id": "pytorch-image-models",
"token_count": 1533
} |
""" EfficientNet, MobileNetV3, etc Builder
Assembles EfficientNet and related network feature blocks from string definitions.
Handles stride, dilation calculations, and selects feature extraction points.
Hacked together by / Copyright 2019, Ross Wightman
"""
from typing import Callable, Optional
import logging
import math
import re
from copy import deepcopy
from functools import partial
from typing import Any, Dict, List
import torch.nn as nn
from timm.layers import CondConv2d, get_condconv_initializer, get_act_layer, get_attn, make_divisible, LayerType
from ._efficientnet_blocks import *
from ._manipulate import named_modules
__all__ = ["EfficientNetBuilder", "BlockArgs", "decode_arch_def", "efficientnet_init_weights",
'resolve_bn_args', 'resolve_act_layer', 'round_channels', 'BN_MOMENTUM_TF_DEFAULT', 'BN_EPS_TF_DEFAULT']
_logger = logging.getLogger(__name__)
_DEBUG_BUILDER = False
# Defaults used for Google/Tensorflow training of mobile networks w/ RMSprop as per
# papers and TF reference implementations. PT momentum equiv for TF decay is (1 - TF decay)
# NOTE: momentum varies btw .99 and .9997 depending on source
# .99 in official TF TPU impl
# .9997 (w/ .999 in search space) for paper
BN_MOMENTUM_TF_DEFAULT = 1 - 0.99
BN_EPS_TF_DEFAULT = 1e-3
_BN_ARGS_TF = dict(momentum=BN_MOMENTUM_TF_DEFAULT, eps=BN_EPS_TF_DEFAULT)
BlockArgs = List[List[Dict[str, Any]]]
def get_bn_args_tf():
return _BN_ARGS_TF.copy()
def resolve_bn_args(kwargs):
bn_args = {}
bn_momentum = kwargs.pop('bn_momentum', None)
if bn_momentum is not None:
bn_args['momentum'] = bn_momentum
bn_eps = kwargs.pop('bn_eps', None)
if bn_eps is not None:
bn_args['eps'] = bn_eps
return bn_args
def resolve_act_layer(kwargs, default='relu'):
return get_act_layer(kwargs.pop('act_layer', default))
def round_channels(channels, multiplier=1.0, divisor=8, channel_min=None, round_limit=0.9):
"""Round number of filters based on depth multiplier."""
if not multiplier:
return channels
return make_divisible(channels * multiplier, divisor, channel_min, round_limit=round_limit)
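# Example (illustrative): round_channels(64, multiplier=1.2) scales to 76.8,
# which make_divisible snaps up to the nearest multiple of 8 -> 80.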
def _log_info_if(msg, condition):
if condition:
_logger.info(msg)
def _parse_ksize(ss):
if ss.isdigit():
return int(ss)
else:
return [int(k) for k in ss.split('.')]
def _decode_block_str(block_str):
""" Decode block definition string
Gets a list of block arg (dicts) through a string notation of arguments.
E.g. ir_r2_k3_s2_e1_i32_o16_se0.25_noskip
All args can exist in any order with the exception of the leading string which
is assumed to indicate the block type.
leading string - block type (
ir = InvertedResidual, ds = DepthwiseSep, dsa = DepthwiseSep with pw act, cn = ConvBnAct)
r - number of repeat blocks,
k - kernel size,
s - strides (1-9),
e - expansion ratio,
c - output channels,
se - squeeze/excitation ratio
n - activation fn ('re', 'r6', 'hs', 'sw', or 'mi')
Args:
block_str: a string representation of block arguments.
Returns:
A list of block args (dicts)
Raises:
ValueError: if the string def is not properly specified (TODO)
"""
assert isinstance(block_str, str)
ops = block_str.split('_')
block_type = ops[0] # take the block type off the front
ops = ops[1:]
options = {}
skip = None
for op in ops:
# string options being checked on individual basis, combine if they grow
if op == 'noskip':
skip = False # force no skip connection
elif op == 'skip':
skip = True # force a skip connection
elif op.startswith('n'):
# activation fn
key = op[0]
v = op[1:]
if v == 're':
value = get_act_layer('relu')
elif v == 'r6':
value = get_act_layer('relu6')
elif v == 'hs':
value = get_act_layer('hard_swish')
elif v == 'sw':
value = get_act_layer('swish') # aka SiLU
elif v == 'mi':
value = get_act_layer('mish')
else:
continue
options[key] = value
else:
# all numeric options
splits = re.split(r'(\d.*)', op)
if len(splits) >= 2:
key, value = splits[:2]
options[key] = value
# if act_layer is None, the model default (passed to model init) will be used
act_layer = options['n'] if 'n' in options else None
start_kernel_size = _parse_ksize(options['a']) if 'a' in options else 1
end_kernel_size = _parse_ksize(options['p']) if 'p' in options else 1
force_in_chs = int(options['fc']) if 'fc' in options else 0 # FIXME hack to deal with in_chs issue in TPU def
num_repeat = int(options['r'])
# each type of block has different valid arguments, fill accordingly
block_args = dict(
block_type=block_type,
out_chs=int(options['c']),
stride=int(options['s']),
act_layer=act_layer,
)
if block_type == 'ir':
block_args.update(dict(
dw_kernel_size=_parse_ksize(options['k']),
exp_kernel_size=start_kernel_size,
pw_kernel_size=end_kernel_size,
exp_ratio=float(options['e']),
se_ratio=float(options.get('se', 0.)),
noskip=skip is False,
s2d=int(options.get('d', 0)) > 0,
))
if 'cc' in options:
block_args['num_experts'] = int(options['cc'])
elif block_type == 'ds' or block_type == 'dsa':
block_args.update(dict(
dw_kernel_size=_parse_ksize(options['k']),
pw_kernel_size=end_kernel_size,
se_ratio=float(options.get('se', 0.)),
pw_act=block_type == 'dsa',
noskip=block_type == 'dsa' or skip is False,
s2d=int(options.get('d', 0)) > 0,
))
elif block_type == 'er':
block_args.update(dict(
exp_kernel_size=_parse_ksize(options['k']),
pw_kernel_size=end_kernel_size,
exp_ratio=float(options['e']),
force_in_chs=force_in_chs,
se_ratio=float(options.get('se', 0.)),
noskip=skip is False,
))
elif block_type == 'cn':
block_args.update(dict(
kernel_size=int(options['k']),
skip=skip is True,
))
elif block_type == 'uir':
# override exp / proj kernels for start/end in uir block
start_kernel_size = _parse_ksize(options['a']) if 'a' in options else 0
end_kernel_size = _parse_ksize(options['p']) if 'p' in options else 0
block_args.update(dict(
dw_kernel_size_start=start_kernel_size, # overload exp ks arg for dw start
dw_kernel_size_mid=_parse_ksize(options['k']),
dw_kernel_size_end=end_kernel_size, # overload pw ks arg for dw end
exp_ratio=float(options['e']),
se_ratio=float(options.get('se', 0.)),
noskip=skip is False,
))
elif block_type == 'mha':
kv_dim = int(options['d'])
block_args.update(dict(
dw_kernel_size=_parse_ksize(options['k']),
num_heads=int(options['h']),
key_dim=kv_dim,
value_dim=kv_dim,
kv_stride=int(options.get('v', 1)),
noskip=skip is False,
))
elif block_type == 'mqa':
kv_dim = int(options['d'])
block_args.update(dict(
dw_kernel_size=_parse_ksize(options['k']),
num_heads=int(options['h']),
key_dim=kv_dim,
value_dim=kv_dim,
kv_stride=int(options.get('v', 1)),
noskip=skip is False,
))
else:
assert False, 'Unknown block type (%s)' % block_type
if 'gs' in options:
block_args['group_size'] = int(options['gs'])
return block_args, num_repeat
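# Example (illustrative): 'ir_r2_k3_s2_e6_c64_se0.25' decodes to an inverted
# residual spec repeated twice: block_type='ir', out_chs=64, stride=2,
# dw_kernel_size=3, exp_ratio=6.0, se_ratio=0.25, with num_repeat=2.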
def _scale_stage_depth(stack_args, repeats, depth_multiplier=1.0, depth_trunc='ceil'):
""" Per-stage depth scaling
Scales the block repeats in each stage. This depth scaling impl maintains
compatibility with the EfficientNet scaling method, while allowing sensible
scaling for other models that may have multiple block arg definitions in each stage.
"""
# We scale the total repeat count for each stage, there may be multiple
# block arg defs per stage so we need to sum.
num_repeat = sum(repeats)
if depth_trunc == 'round':
# Truncating to int by rounding allows stages with few repeats to remain
# proportionally smaller for longer. This is a good choice when stage definitions
# include single repeat stages that we'd prefer to keep that way as long as possible
num_repeat_scaled = max(1, round(num_repeat * depth_multiplier))
else:
# The default for EfficientNet truncates repeats to int via 'ceil'.
# Any multiplier > 1.0 will result in an increased depth for every stage.
num_repeat_scaled = int(math.ceil(num_repeat * depth_multiplier))
# Proportionally distribute repeat count scaling to each block definition in the stage.
# Allocation is done in reverse as it results in the first block being less likely to be scaled.
# The first block makes less sense to repeat in most of the arch definitions.
repeats_scaled = []
for r in repeats[::-1]:
rs = max(1, round((r / num_repeat * num_repeat_scaled)))
repeats_scaled.append(rs)
num_repeat -= r
num_repeat_scaled -= rs
repeats_scaled = repeats_scaled[::-1]
# Apply the calculated scaling to each block arg in the stage
sa_scaled = []
for ba, rep in zip(stack_args, repeats_scaled):
sa_scaled.extend([deepcopy(ba) for _ in range(rep)])
return sa_scaled
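# Example (illustrative): repeats [1, 2] with depth_multiplier=2.0 ('ceil')
# scale to 6 total blocks, allocated in reverse as [2, 4] copies per block arg.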
def decode_arch_def(
arch_def,
depth_multiplier=1.0,
depth_trunc='ceil',
experts_multiplier=1,
fix_first_last=False,
group_size=None,
):
""" Decode block architecture definition strings -> block kwargs
Args:
arch_def: architecture definition strings, list of list of strings
depth_multiplier: network depth multiplier
depth_trunc: network depth truncation mode when applying multiplier
experts_multiplier: CondConv experts multiplier
fix_first_last: fix first and last block depths when multiplier is applied
group_size: group size override for all blocks that weren't explicitly set in arch string
Returns:
list of list of block kwargs
"""
arch_args = []
if isinstance(depth_multiplier, tuple):
assert len(depth_multiplier) == len(arch_def)
else:
depth_multiplier = (depth_multiplier,) * len(arch_def)
for stack_idx, (block_strings, multiplier) in enumerate(zip(arch_def, depth_multiplier)):
assert isinstance(block_strings, list)
stack_args = []
repeats = []
for block_str in block_strings:
assert isinstance(block_str, str)
ba, rep = _decode_block_str(block_str)
if ba.get('num_experts', 0) > 0 and experts_multiplier > 1:
ba['num_experts'] *= experts_multiplier
if group_size is not None:
ba.setdefault('group_size', group_size)
stack_args.append(ba)
repeats.append(rep)
if fix_first_last and (stack_idx == 0 or stack_idx == len(arch_def) - 1):
arch_args.append(_scale_stage_depth(stack_args, repeats, 1.0, depth_trunc))
else:
arch_args.append(_scale_stage_depth(stack_args, repeats, multiplier, depth_trunc))
return arch_args
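# Illustrative sketch (not part of the upstream file): two stages, the second
# repeating its inverted-residual block twice.
def _example_decode_arch_def():  # hypothetical demo helper
    arch_def = [['ds_r1_k3_s1_c16'], ['ir_r2_k3_s2_e6_c24_se0.25']]
    block_args = decode_arch_def(arch_def)
    assert len(block_args) == 2 and len(block_args[1]) == 2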
class EfficientNetBuilder:
""" Build Trunk Blocks
This ended up being somewhat of a cross between
https://github.com/tensorflow/tpu/blob/master/models/official/mnasnet/mnasnet_models.py
and
https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/maskrcnn_benchmark/modeling/backbone/fbnet_builder.py
"""
def __init__(
self,
output_stride: int = 32,
pad_type: str = '',
round_chs_fn: Callable = round_channels,
se_from_exp: bool = False,
act_layer: Optional[LayerType] = None,
norm_layer: Optional[LayerType] = None,
aa_layer: Optional[LayerType] = None,
se_layer: Optional[LayerType] = None,
drop_path_rate: float = 0.,
layer_scale_init_value: Optional[float] = None,
feature_location: str = '',
):
self.output_stride = output_stride
self.pad_type = pad_type
self.round_chs_fn = round_chs_fn
self.se_from_exp = se_from_exp # calculate se channel reduction from expanded (mid) chs
self.act_layer = act_layer
self.norm_layer = norm_layer
self.aa_layer = aa_layer
self.se_layer = get_attn(se_layer)
try:
self.se_layer(8, rd_ratio=1.0) # test if attn layer accepts rd_ratio arg
self.se_has_ratio = True
except TypeError:
self.se_has_ratio = False
self.drop_path_rate = drop_path_rate
self.layer_scale_init_value = layer_scale_init_value
if feature_location == 'depthwise':
# old 'depthwise' mode renamed 'expansion' to match TF impl, old expansion mode didn't make sense
_logger.warning("feature_location=='depthwise' is deprecated, using 'expansion'")
feature_location = 'expansion'
self.feature_location = feature_location
assert feature_location in ('bottleneck', 'expansion', '')
self.verbose = _DEBUG_BUILDER
# state updated during build, consumed by model
self.in_chs = None
self.features = []
def _make_block(self, ba, block_idx, block_count):
drop_path_rate = self.drop_path_rate * block_idx / block_count
bt = ba.pop('block_type')
ba['in_chs'] = self.in_chs
ba['out_chs'] = self.round_chs_fn(ba['out_chs'])
s2d = ba.get('s2d', 0)
if s2d > 0:
# adjust while space2depth active
ba['out_chs'] *= 4
if 'force_in_chs' in ba and ba['force_in_chs']:
# NOTE this is a hack to work around mismatch in TF EdgeEffNet impl
ba['force_in_chs'] = self.round_chs_fn(ba['force_in_chs'])
ba['pad_type'] = self.pad_type
# block act fn overrides the model default
ba['act_layer'] = ba['act_layer'] if ba['act_layer'] is not None else self.act_layer
assert ba['act_layer'] is not None
ba['norm_layer'] = self.norm_layer
ba['drop_path_rate'] = drop_path_rate
if self.aa_layer is not None:
ba['aa_layer'] = self.aa_layer
se_ratio = ba.pop('se_ratio', None)
if se_ratio and self.se_layer is not None:
if not self.se_from_exp:
# adjust se_ratio by expansion ratio if calculating se channels from block input
se_ratio /= ba.get('exp_ratio', 1.0)
if s2d == 1:
# adjust for start of space2depth
se_ratio /= 4
if self.se_has_ratio:
ba['se_layer'] = partial(self.se_layer, rd_ratio=se_ratio)
else:
ba['se_layer'] = self.se_layer
if bt == 'ir':
_log_info_if(' InvertedResidual {}, Args: {}'.format(block_idx, str(ba)), self.verbose)
block = CondConvResidual(**ba) if ba.get('num_experts', 0) else InvertedResidual(**ba)
elif bt == 'ds' or bt == 'dsa':
_log_info_if(' DepthwiseSeparable {}, Args: {}'.format(block_idx, str(ba)), self.verbose)
block = DepthwiseSeparableConv(**ba)
elif bt == 'er':
_log_info_if(' EdgeResidual {}, Args: {}'.format(block_idx, str(ba)), self.verbose)
block = EdgeResidual(**ba)
elif bt == 'cn':
_log_info_if(' ConvBnAct {}, Args: {}'.format(block_idx, str(ba)), self.verbose)
block = ConvBnAct(**ba)
elif bt == 'uir':
_log_info_if(' UniversalInvertedResidual {}, Args: {}'.format(block_idx, str(ba)), self.verbose)
block = UniversalInvertedResidual(**ba, layer_scale_init_value=self.layer_scale_init_value)
elif bt == 'mqa':
_log_info_if(' MobileMultiQueryAttention {}, Args: {}'.format(block_idx, str(ba)), self.verbose)
block = MobileAttention(**ba, use_multi_query=True, layer_scale_init_value=self.layer_scale_init_value)
elif bt == 'mha':
_log_info_if(' MobileMultiHeadAttention {}, Args: {}'.format(block_idx, str(ba)), self.verbose)
block = MobileAttention(**ba, layer_scale_init_value=self.layer_scale_init_value)
else:
assert False, 'Unknown block type (%s) while building model.' % bt
self.in_chs = ba['out_chs'] # update in_chs for arg of next block
return block
def __call__(self, in_chs, model_block_args):
""" Build the blocks
Args:
in_chs: Number of input-channels passed to first block
model_block_args: A list of lists, outer list defines stages, inner
list contains strings defining block configuration(s)
Return:
List of block stacks (each stack wrapped in nn.Sequential)
"""
_log_info_if('Building model trunk with %d stages...' % len(model_block_args), self.verbose)
self.in_chs = in_chs
total_block_count = sum([len(x) for x in model_block_args])
total_block_idx = 0
current_stride = 2
current_dilation = 1
stages = []
if model_block_args[0][0]['stride'] > 1:
# if the first block starts with a stride, we need to extract first level feat from stem
feature_info = dict(module='bn1', num_chs=in_chs, stage=0, reduction=current_stride)
self.features.append(feature_info)
# outer list of block_args defines the stacks
space2depth = 0
for stack_idx, stack_args in enumerate(model_block_args):
last_stack = stack_idx + 1 == len(model_block_args)
_log_info_if('Stack: {}'.format(stack_idx), self.verbose)
assert isinstance(stack_args, list)
blocks = []
# each stack (stage of blocks) contains a list of block arguments
for block_idx, block_args in enumerate(stack_args):
last_block = block_idx + 1 == len(stack_args)
_log_info_if(' Block: {}'.format(block_idx), self.verbose)
assert block_args['stride'] in (1, 2)
if block_idx >= 1: # only the first block in any stack can have a stride > 1
block_args['stride'] = 1
if not space2depth and block_args.pop('s2d', False):
assert block_args['stride'] == 1
space2depth = 1
if space2depth > 0:
# FIXME s2d is a WIP
if space2depth == 2 and block_args['stride'] == 2:
block_args['stride'] = 1
# to end s2d region, need to correct expansion and se ratio relative to input
block_args['exp_ratio'] /= 4
space2depth = 0
else:
block_args['s2d'] = space2depth
extract_features = False
if last_block:
next_stack_idx = stack_idx + 1
extract_features = next_stack_idx >= len(model_block_args) or \
model_block_args[next_stack_idx][0]['stride'] > 1
next_dilation = current_dilation
if block_args['stride'] > 1:
next_output_stride = current_stride * block_args['stride']
if next_output_stride > self.output_stride:
next_dilation = current_dilation * block_args['stride']
block_args['stride'] = 1
_log_info_if(' Converting stride to dilation to maintain output_stride=={}'.format(
self.output_stride), self.verbose)
else:
current_stride = next_output_stride
block_args['dilation'] = current_dilation
if next_dilation != current_dilation:
current_dilation = next_dilation
# create the block
block = self._make_block(block_args, total_block_idx, total_block_count)
blocks.append(block)
if space2depth == 1:
space2depth = 2
# stash feature module name and channel info for model feature extraction
if extract_features:
feature_info = dict(
stage=stack_idx + 1,
reduction=current_stride,
**block.feature_info(self.feature_location),
)
leaf_name = feature_info.get('module', '')
if leaf_name:
feature_info['module'] = '.'.join([f'blocks.{stack_idx}.{block_idx}', leaf_name])
else:
assert last_block
feature_info['module'] = f'blocks.{stack_idx}'
self.features.append(feature_info)
total_block_idx += 1 # incr global block idx (across all stacks)
stages.append(nn.Sequential(*blocks))
return stages
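# Usage sketch (illustrative; real EfficientNet-family models wire this up in
# their constructors, so treat the argument choices here as assumptions):
def _example_builder():  # hypothetical demo helper
    block_args = decode_arch_def([['ds_r1_k3_s1_c16'], ['ir_r2_k3_s2_e6_c24']])
    builder = EfficientNetBuilder(act_layer=nn.ReLU, norm_layer=nn.BatchNorm2d)
    stages = builder(in_chs=32, model_block_args=block_args)
    trunk = nn.Sequential(*stages)  # builder.features now lists feature taps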
def _init_weight_goog(m, n='', fix_group_fanout=True):
""" Weight initialization as per Tensorflow official implementations.
Args:
m (nn.Module): module to init
n (str): module name
fix_group_fanout (bool): enable correct (matching Tensorflow TPU impl) fanout calculation w/ group convs
Handles layers in EfficientNet, EfficientNet-CondConv, MixNet, MnasNet, MobileNetV3, etc:
* https://github.com/tensorflow/tpu/blob/master/models/official/mnasnet/mnasnet_model.py
* https://github.com/tensorflow/tpu/blob/master/models/official/efficientnet/efficientnet_model.py
"""
if isinstance(m, CondConv2d):
fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
if fix_group_fanout:
fan_out //= m.groups
init_weight_fn = get_condconv_initializer(
lambda w: nn.init.normal_(w, 0, math.sqrt(2.0 / fan_out)), m.num_experts, m.weight_shape)
init_weight_fn(m.weight)
if m.bias is not None:
nn.init.zeros_(m.bias)
elif isinstance(m, nn.Conv2d):
fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
if fix_group_fanout:
fan_out //= m.groups
nn.init.normal_(m.weight, 0, math.sqrt(2.0 / fan_out))
if m.bias is not None:
nn.init.zeros_(m.bias)
elif isinstance(m, nn.BatchNorm2d):
nn.init.ones_(m.weight)
nn.init.zeros_(m.bias)
elif isinstance(m, nn.Linear):
fan_out = m.weight.size(0) # fan-out
fan_in = 0
if 'routing_fn' in n:
fan_in = m.weight.size(1)
init_range = 1.0 / math.sqrt(fan_in + fan_out)
nn.init.uniform_(m.weight, -init_range, init_range)
nn.init.zeros_(m.bias)
def efficientnet_init_weights(model: nn.Module, init_fn=None):
init_fn = init_fn or _init_weight_goog
for n, m in model.named_modules():
init_fn(m, n)
# iterate and call any module.init_weights() fn, children first
for n, m in named_modules(model):
if hasattr(m, 'init_weights'):
m.init_weights()
| pytorch-image-models/timm/models/_efficientnet_builder.py/0 | {
"file_path": "pytorch-image-models/timm/models/_efficientnet_builder.py",
"repo_id": "pytorch-image-models",
"token_count": 10990
} |
""" Bring-Your-Own-Attention Network
A flexible network w/ dataclass based config for stacking NN blocks including
self-attention (or similar) layers.
Currently used to implement experimental variants of:
* Bottleneck Transformers
* Lambda ResNets
* HaloNets
Consider all of the model definitions here as experimental WIP and likely to change.
Hacked together by / copyright Ross Wightman, 2021.
"""
from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
from ._builder import build_model_with_cfg
from ._registry import register_model, generate_default_cfgs
from .byobnet import ByoBlockCfg, ByoModelCfg, ByobNet, interleave_blocks
__all__ = []
model_cfgs = dict(
botnet26t=ByoModelCfg(
blocks=(
ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=0, br=0.25),
ByoBlockCfg(type='bottle', d=2, c=512, s=2, gs=0, br=0.25),
interleave_blocks(types=('bottle', 'self_attn'), d=2, c=1024, s=2, gs=0, br=0.25),
ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=0, br=0.25),
),
stem_chs=64,
stem_type='tiered',
stem_pool='maxpool',
fixed_input_size=True,
self_attn_layer='bottleneck',
self_attn_kwargs=dict()
),
sebotnet33ts=ByoModelCfg(
blocks=(
ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=0, br=0.25),
interleave_blocks(types=('bottle', 'self_attn'), every=[2], d=3, c=512, s=2, gs=0, br=0.25),
interleave_blocks(types=('bottle', 'self_attn'), every=[2], d=3, c=1024, s=2, gs=0, br=0.25),
ByoBlockCfg('self_attn', d=2, c=1536, s=2, gs=0, br=0.333),
),
stem_chs=64,
stem_type='tiered',
stem_pool='',
act_layer='silu',
num_features=1280,
attn_layer='se',
self_attn_layer='bottleneck',
self_attn_kwargs=dict()
),
botnet50ts=ByoModelCfg(
blocks=(
ByoBlockCfg(type='bottle', d=3, c=256, s=1, gs=0, br=0.25),
interleave_blocks(types=('bottle', 'self_attn'), every=4, d=4, c=512, s=2, gs=0, br=0.25),
interleave_blocks(types=('bottle', 'self_attn'), d=6, c=1024, s=2, gs=0, br=0.25),
interleave_blocks(types=('bottle', 'self_attn'), d=3, c=2048, s=2, gs=0, br=0.25),
),
stem_chs=64,
stem_type='tiered',
stem_pool='maxpool',
act_layer='silu',
fixed_input_size=True,
self_attn_layer='bottleneck',
self_attn_kwargs=dict()
),
eca_botnext26ts=ByoModelCfg(
blocks=(
ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=16, br=0.25),
ByoBlockCfg(type='bottle', d=2, c=512, s=2, gs=16, br=0.25),
interleave_blocks(types=('bottle', 'self_attn'), d=2, c=1024, s=2, gs=16, br=0.25),
ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=16, br=0.25),
),
stem_chs=64,
stem_type='tiered',
stem_pool='maxpool',
fixed_input_size=True,
act_layer='silu',
attn_layer='eca',
self_attn_layer='bottleneck',
self_attn_kwargs=dict(dim_head=16)
),
halonet_h1=ByoModelCfg(
blocks=(
ByoBlockCfg(type='self_attn', d=3, c=64, s=1, gs=0, br=1.0),
ByoBlockCfg(type='self_attn', d=3, c=128, s=2, gs=0, br=1.0),
ByoBlockCfg(type='self_attn', d=10, c=256, s=2, gs=0, br=1.0),
ByoBlockCfg(type='self_attn', d=3, c=512, s=2, gs=0, br=1.0),
),
stem_chs=64,
stem_type='7x7',
stem_pool='maxpool',
self_attn_layer='halo',
self_attn_kwargs=dict(block_size=8, halo_size=3),
),
halonet26t=ByoModelCfg(
blocks=(
ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=0, br=0.25),
ByoBlockCfg(type='bottle', d=2, c=512, s=2, gs=0, br=0.25),
interleave_blocks(types=('bottle', 'self_attn'), d=2, c=1024, s=2, gs=0, br=0.25),
ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=0, br=0.25),
),
stem_chs=64,
stem_type='tiered',
stem_pool='maxpool',
self_attn_layer='halo',
self_attn_kwargs=dict(block_size=8, halo_size=2)
),
sehalonet33ts=ByoModelCfg(
blocks=(
ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=0, br=0.25),
interleave_blocks(types=('bottle', 'self_attn'), every=[2], d=3, c=512, s=2, gs=0, br=0.25),
interleave_blocks(types=('bottle', 'self_attn'), every=[2], d=3, c=1024, s=2, gs=0, br=0.25),
ByoBlockCfg('self_attn', d=2, c=1536, s=2, gs=0, br=0.333),
),
stem_chs=64,
stem_type='tiered',
stem_pool='',
act_layer='silu',
num_features=1280,
attn_layer='se',
self_attn_layer='halo',
self_attn_kwargs=dict(block_size=8, halo_size=3)
),
halonet50ts=ByoModelCfg(
blocks=(
ByoBlockCfg(type='bottle', d=3, c=256, s=1, gs=0, br=0.25),
interleave_blocks(
types=('bottle', 'self_attn'), every=4, d=4, c=512, s=2, gs=0, br=0.25,
self_attn_layer='halo', self_attn_kwargs=dict(block_size=8, halo_size=3, num_heads=4)),
interleave_blocks(types=('bottle', 'self_attn'), d=6, c=1024, s=2, gs=0, br=0.25),
interleave_blocks(types=('bottle', 'self_attn'), d=3, c=2048, s=2, gs=0, br=0.25),
),
stem_chs=64,
stem_type='tiered',
stem_pool='maxpool',
act_layer='silu',
self_attn_layer='halo',
self_attn_kwargs=dict(block_size=8, halo_size=3)
),
eca_halonext26ts=ByoModelCfg(
blocks=(
ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=16, br=0.25),
ByoBlockCfg(type='bottle', d=2, c=512, s=2, gs=16, br=0.25),
interleave_blocks(types=('bottle', 'self_attn'), d=2, c=1024, s=2, gs=16, br=0.25),
ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=16, br=0.25),
),
stem_chs=64,
stem_type='tiered',
stem_pool='maxpool',
act_layer='silu',
attn_layer='eca',
self_attn_layer='halo',
self_attn_kwargs=dict(block_size=8, halo_size=2, dim_head=16)
),
lambda_resnet26t=ByoModelCfg(
blocks=(
ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=0, br=0.25),
ByoBlockCfg(type='bottle', d=2, c=512, s=2, gs=0, br=0.25),
interleave_blocks(types=('bottle', 'self_attn'), d=2, c=1024, s=2, gs=0, br=0.25),
ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=0, br=0.25),
),
stem_chs=64,
stem_type='tiered',
stem_pool='maxpool',
self_attn_layer='lambda',
self_attn_kwargs=dict(r=9)
),
lambda_resnet50ts=ByoModelCfg(
blocks=(
ByoBlockCfg(type='bottle', d=3, c=256, s=1, gs=0, br=0.25),
interleave_blocks(types=('bottle', 'self_attn'), every=4, d=4, c=512, s=2, gs=0, br=0.25),
interleave_blocks(types=('bottle', 'self_attn'), d=6, c=1024, s=2, gs=0, br=0.25),
interleave_blocks(types=('bottle', 'self_attn'), d=3, c=2048, s=2, gs=0, br=0.25),
),
stem_chs=64,
stem_type='tiered',
stem_pool='maxpool',
act_layer='silu',
self_attn_layer='lambda',
self_attn_kwargs=dict(r=9)
),
lambda_resnet26rpt_256=ByoModelCfg(
blocks=(
ByoBlockCfg(type='bottle', d=2, c=256, s=1, gs=0, br=0.25),
ByoBlockCfg(type='bottle', d=2, c=512, s=2, gs=0, br=0.25),
interleave_blocks(types=('bottle', 'self_attn'), d=2, c=1024, s=2, gs=0, br=0.25),
ByoBlockCfg(type='self_attn', d=2, c=2048, s=2, gs=0, br=0.25),
),
stem_chs=64,
stem_type='tiered',
stem_pool='maxpool',
self_attn_layer='lambda',
self_attn_kwargs=dict(r=None)
),
# experimental
haloregnetz_b=ByoModelCfg(
blocks=(
ByoBlockCfg(type='bottle', d=2, c=48, s=2, gs=16, br=3),
ByoBlockCfg(type='bottle', d=6, c=96, s=2, gs=16, br=3),
interleave_blocks(types=('bottle', 'self_attn'), every=3, d=12, c=192, s=2, gs=16, br=3),
ByoBlockCfg('self_attn', d=2, c=288, s=2, gs=16, br=3),
),
stem_chs=32,
stem_pool='',
downsample='',
num_features=1536,
act_layer='silu',
attn_layer='se',
attn_kwargs=dict(rd_ratio=0.25),
block_kwargs=dict(bottle_in=True, linear_out=True),
self_attn_layer='halo',
self_attn_kwargs=dict(block_size=7, halo_size=2, qk_ratio=0.33)
),
# experimental
lamhalobotnet50ts=ByoModelCfg(
blocks=(
ByoBlockCfg(type='bottle', d=3, c=256, s=1, gs=0, br=0.25),
interleave_blocks(
types=('bottle', 'self_attn'), d=4, c=512, s=2, gs=0, br=0.25,
self_attn_layer='lambda', self_attn_kwargs=dict(r=13)),
interleave_blocks(
types=('bottle', 'self_attn'), d=6, c=1024, s=2, gs=0, br=0.25,
self_attn_layer='halo', self_attn_kwargs=dict(halo_size=3)),
interleave_blocks(
types=('bottle', 'self_attn'), d=3, c=2048, s=2, gs=0, br=0.25,
self_attn_layer='bottleneck', self_attn_kwargs=dict()),
),
stem_chs=64,
stem_type='tiered',
stem_pool='',
act_layer='silu',
),
halo2botnet50ts=ByoModelCfg(
blocks=(
ByoBlockCfg(type='bottle', d=3, c=256, s=1, gs=0, br=0.25),
interleave_blocks(
types=('bottle', 'self_attn'), d=4, c=512, s=2, gs=0, br=0.25,
self_attn_layer='halo', self_attn_kwargs=dict(halo_size=3)),
interleave_blocks(
types=('bottle', 'self_attn'), d=6, c=1024, s=2, gs=0, br=0.25,
self_attn_layer='halo', self_attn_kwargs=dict(halo_size=3)),
interleave_blocks(
types=('bottle', 'self_attn'), d=3, c=2048, s=2, gs=0, br=0.25,
self_attn_layer='bottleneck', self_attn_kwargs=dict()),
),
stem_chs=64,
stem_type='tiered',
stem_pool='',
act_layer='silu',
),
)
def _create_byoanet(variant, cfg_variant=None, pretrained=False, **kwargs):
return build_model_with_cfg(
ByobNet, variant, pretrained,
model_cfg=model_cfgs[variant] if not cfg_variant else model_cfgs[cfg_variant],
feature_cfg=dict(flatten_sequential=True),
**kwargs,
)
def _cfg(url='', **kwargs):
return {
'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7),
'crop_pct': 0.95, 'interpolation': 'bicubic',
'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD,
'first_conv': 'stem.conv1.conv', 'classifier': 'head.fc',
'fixed_input_size': False, 'min_input_size': (3, 224, 224),
**kwargs
}
default_cfgs = generate_default_cfgs({
# Experimental attention-variant (BotNet / HaloNet / LambdaNet) weights
'botnet26t_256.c1_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/botnet26t_c1_256-167a0e9f.pth',
hf_hub_id='timm/',
fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8)),
'sebotnet33ts_256.a1h_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/sebotnet33ts_a1h2_256-957e3c3e.pth',
hf_hub_id='timm/',
fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=0.94),
'botnet50ts_256.untrained': _cfg(
fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8)),
'eca_botnext26ts_256.c1_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/eca_botnext26ts_c_256-95a898f6.pth',
hf_hub_id='timm/',
fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8)),
'halonet_h1.untrained': _cfg(input_size=(3, 256, 256), pool_size=(8, 8), min_input_size=(3, 256, 256)),
'halonet26t.a1h_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/halonet26t_a1h_256-3083328c.pth',
hf_hub_id='timm/',
input_size=(3, 256, 256), pool_size=(8, 8), min_input_size=(3, 256, 256)),
'sehalonet33ts.ra2_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/sehalonet33ts_256-87e053f9.pth',
hf_hub_id='timm/',
input_size=(3, 256, 256), pool_size=(8, 8), min_input_size=(3, 256, 256), crop_pct=0.94),
'halonet50ts.a1h_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/halonet50ts_a1h2_256-f3a3daee.pth',
hf_hub_id='timm/',
input_size=(3, 256, 256), pool_size=(8, 8), min_input_size=(3, 256, 256), crop_pct=0.94),
'eca_halonext26ts.c1_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/eca_halonext26ts_c_256-06906299.pth',
hf_hub_id='timm/',
input_size=(3, 256, 256), pool_size=(8, 8), min_input_size=(3, 256, 256), crop_pct=0.94),
'lambda_resnet26t.c1_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/lambda_resnet26t_c_256-e5a5c857.pth',
hf_hub_id='timm/',
min_input_size=(3, 128, 128), input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=0.94),
'lambda_resnet50ts.a1h_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/lambda_resnet50ts_a1h_256-b87370f7.pth',
hf_hub_id='timm/',
min_input_size=(3, 128, 128), input_size=(3, 256, 256), pool_size=(8, 8)),
'lambda_resnet26rpt_256.c1_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/lambda_resnet26rpt_c_256-ab00292d.pth',
hf_hub_id='timm/',
fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8), crop_pct=0.94),
'haloregnetz_b.ra3_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/haloregnetz_c_raa_256-c8ad7616.pth',
hf_hub_id='timm/',
mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5),
first_conv='stem.conv', input_size=(3, 224, 224), pool_size=(7, 7), min_input_size=(3, 224, 224), crop_pct=0.94),
'lamhalobotnet50ts_256.a1h_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/lamhalobotnet50ts_a1h2_256-fe3d9445.pth',
hf_hub_id='timm/',
fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8)),
'halo2botnet50ts_256.a1h_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-attn-weights/halo2botnet50ts_a1h2_256-fd9c11a3.pth',
hf_hub_id='timm/',
fixed_input_size=True, input_size=(3, 256, 256), pool_size=(8, 8)),
})
@register_model
def botnet26t_256(pretrained=False, **kwargs) -> ByobNet:
""" Bottleneck Transformer w/ ResNet26-T backbone.
"""
kwargs.setdefault('img_size', 256)
return _create_byoanet('botnet26t_256', 'botnet26t', pretrained=pretrained, **kwargs)
@register_model
def sebotnet33ts_256(pretrained=False, **kwargs) -> ByobNet:
""" Bottleneck Transformer w/ a ResNet33-t backbone, SE attn for non Halo blocks, SiLU,
"""
return _create_byoanet('sebotnet33ts_256', 'sebotnet33ts', pretrained=pretrained, **kwargs)
@register_model
def botnet50ts_256(pretrained=False, **kwargs) -> ByobNet:
""" Bottleneck Transformer w/ ResNet50-T backbone, silu act.
"""
kwargs.setdefault('img_size', 256)
return _create_byoanet('botnet50ts_256', 'botnet50ts', pretrained=pretrained, **kwargs)
@register_model
def eca_botnext26ts_256(pretrained=False, **kwargs) -> ByobNet:
""" Bottleneck Transformer w/ ResNet26-T backbone, silu act.
"""
kwargs.setdefault('img_size', 256)
return _create_byoanet('eca_botnext26ts_256', 'eca_botnext26ts', pretrained=pretrained, **kwargs)
@register_model
def halonet_h1(pretrained=False, **kwargs) -> ByobNet:
""" HaloNet-H1. Halo attention in all stages as per the paper.
NOTE: This runs very slowly!
"""
return _create_byoanet('halonet_h1', pretrained=pretrained, **kwargs)
@register_model
def halonet26t(pretrained=False, **kwargs) -> ByobNet:
""" HaloNet w/ a ResNet26-t backbone. Halo attention in final two stages
"""
return _create_byoanet('halonet26t', pretrained=pretrained, **kwargs)
@register_model
def sehalonet33ts(pretrained=False, **kwargs) -> ByobNet:
""" HaloNet w/ a ResNet33-t backbone, SE attn for non Halo blocks, SiLU, 1-2 Halo in stage 2,3,4.
"""
return _create_byoanet('sehalonet33ts', pretrained=pretrained, **kwargs)
@register_model
def halonet50ts(pretrained=False, **kwargs) -> ByobNet:
""" HaloNet w/ a ResNet50-t backbone, silu act. Halo attention in final two stages
"""
return _create_byoanet('halonet50ts', pretrained=pretrained, **kwargs)
@register_model
def eca_halonext26ts(pretrained=False, **kwargs) -> ByobNet:
""" HaloNet w/ a ResNet26-t backbone, silu act. Halo attention in final two stages
"""
return _create_byoanet('eca_halonext26ts', pretrained=pretrained, **kwargs)
@register_model
def lambda_resnet26t(pretrained=False, **kwargs) -> ByobNet:
""" Lambda-ResNet-26-T. Lambda layers w/ conv pos in last two stages.
"""
return _create_byoanet('lambda_resnet26t', pretrained=pretrained, **kwargs)
@register_model
def lambda_resnet50ts(pretrained=False, **kwargs) -> ByobNet:
""" Lambda-ResNet-50-TS. SiLU act. Lambda layers w/ conv pos in last two stages.
"""
return _create_byoanet('lambda_resnet50ts', pretrained=pretrained, **kwargs)
@register_model
def lambda_resnet26rpt_256(pretrained=False, **kwargs) -> ByobNet:
""" Lambda-ResNet-26-R-T. Lambda layers w/ rel pos embed in last two stages.
"""
kwargs.setdefault('img_size', 256)
return _create_byoanet('lambda_resnet26rpt_256', pretrained=pretrained, **kwargs)
@register_model
def haloregnetz_b(pretrained=False, **kwargs) -> ByobNet:
""" Halo + RegNetZ
"""
return _create_byoanet('haloregnetz_b', pretrained=pretrained, **kwargs)
@register_model
def lamhalobotnet50ts_256(pretrained=False, **kwargs) -> ByobNet:
""" Combo Attention (Lambda + Halo + Bot) Network
"""
return _create_byoanet('lamhalobotnet50ts_256', 'lamhalobotnet50ts', pretrained=pretrained, **kwargs)
@register_model
def halo2botnet50ts_256(pretrained=False, **kwargs) -> ByobNet:
""" Combo Attention (Halo + Halo + Bot) Network
"""
return _create_byoanet('halo2botnet50ts_256', 'halo2botnet50ts', pretrained=pretrained, **kwargs)
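# Usage sketch (illustrative): registered variants build like any timm model.
def _example_byoanet():  # hypothetical demo helper
    import torch
    model = halonet26t(pretrained=False)
    logits = model(torch.randn(1, 3, 256, 256))  # -> (1, 1000) class logits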
| pytorch-image-models/timm/models/byoanet.py/0 | {
"file_path": "pytorch-image-models/timm/models/byoanet.py",
"repo_id": "pytorch-image-models",
"token_count": 9703
} |
""" EfficientFormer-V2
@article{
li2022rethinking,
title={Rethinking Vision Transformers for MobileNet Size and Speed},
author={Li, Yanyu and Hu, Ju and Wen, Yang and Evangelidis, Georgios and Salahi, Kamyar and Wang, Yanzhi and Tulyakov, Sergey and Ren, Jian},
journal={arXiv preprint arXiv:2212.08059},
year={2022}
}
Significantly refactored and cleaned up for timm from original at: https://github.com/snap-research/EfficientFormer
Original code licensed Apache 2.0, Copyright (c) 2022 Snap Inc.
Modifications and timm support by / Copyright 2023, Ross Wightman
"""
import math
from functools import partial
from typing import Dict, Optional
import torch
import torch.nn as nn
from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
from timm.layers import create_conv2d, create_norm_layer, get_act_layer, get_norm_layer, ConvNormAct
from timm.layers import DropPath, trunc_normal_, to_2tuple, to_ntuple, ndgrid
from ._builder import build_model_with_cfg
from ._manipulate import checkpoint_seq
from ._registry import generate_default_cfgs, register_model
__all__ = ['EfficientFormerV2']
EfficientFormer_width = {
'L': (40, 80, 192, 384), # 26m 83.3% 6attn
'S2': (32, 64, 144, 288), # 12m 81.6% 4attn dp0.02
'S1': (32, 48, 120, 224), # 6.1m 79.0
'S0': (32, 48, 96, 176), # 75.0 75.7
}
EfficientFormer_depth = {
'L': (5, 5, 15, 10), # 26m 83.3%
'S2': (4, 4, 12, 8), # 12m
'S1': (3, 3, 9, 6), # 79.0
'S0': (2, 2, 6, 4), # 75.7
}
EfficientFormer_expansion_ratios = {
'L': (4, 4, (4, 4, 4, 4, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4), (4, 4, 4, 3, 3, 3, 3, 4, 4, 4)),
'S2': (4, 4, (4, 4, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4), (4, 4, 3, 3, 3, 3, 4, 4)),
'S1': (4, 4, (4, 4, 3, 3, 3, 3, 4, 4, 4), (4, 4, 3, 3, 4, 4)),
'S0': (4, 4, (4, 3, 3, 3, 4, 4), (4, 3, 3, 4)),
}
class ConvNorm(nn.Module):
def __init__(
self,
in_channels,
out_channels,
kernel_size=1,
stride=1,
padding='',
dilation=1,
groups=1,
bias=True,
norm_layer='batchnorm2d',
norm_kwargs=None,
):
norm_kwargs = norm_kwargs or {}
super(ConvNorm, self).__init__()
self.conv = create_conv2d(
in_channels,
out_channels,
kernel_size,
stride=stride,
padding=padding,
dilation=dilation,
groups=groups,
bias=bias,
)
self.bn = create_norm_layer(norm_layer, out_channels, **norm_kwargs)
def forward(self, x):
x = self.conv(x)
x = self.bn(x)
return x
class Attention2d(torch.nn.Module):
attention_bias_cache: Dict[str, torch.Tensor]
def __init__(
self,
dim=384,
key_dim=32,
num_heads=8,
attn_ratio=4,
resolution=7,
act_layer=nn.GELU,
stride=None,
):
super().__init__()
self.num_heads = num_heads
self.scale = key_dim ** -0.5
self.key_dim = key_dim
resolution = to_2tuple(resolution)
if stride is not None:
resolution = tuple([math.ceil(r / stride) for r in resolution])
self.stride_conv = ConvNorm(dim, dim, kernel_size=3, stride=stride, groups=dim)
self.upsample = nn.Upsample(scale_factor=stride, mode='bilinear')
else:
self.stride_conv = None
self.upsample = None
self.resolution = resolution
self.N = self.resolution[0] * self.resolution[1]
self.d = int(attn_ratio * key_dim)
self.dh = int(attn_ratio * key_dim) * num_heads
self.attn_ratio = attn_ratio
kh = self.key_dim * self.num_heads
self.q = ConvNorm(dim, kh)
self.k = ConvNorm(dim, kh)
self.v = ConvNorm(dim, self.dh)
self.v_local = ConvNorm(self.dh, self.dh, kernel_size=3, groups=self.dh)
self.talking_head1 = nn.Conv2d(self.num_heads, self.num_heads, kernel_size=1)
self.talking_head2 = nn.Conv2d(self.num_heads, self.num_heads, kernel_size=1)
self.act = act_layer()
self.proj = ConvNorm(self.dh, dim, 1)
pos = torch.stack(ndgrid(torch.arange(self.resolution[0]), torch.arange(self.resolution[1]))).flatten(1)
rel_pos = (pos[..., :, None] - pos[..., None, :]).abs()
rel_pos = (rel_pos[0] * self.resolution[1]) + rel_pos[1]
self.attention_biases = torch.nn.Parameter(torch.zeros(num_heads, self.N))
self.register_buffer('attention_bias_idxs', torch.LongTensor(rel_pos), persistent=False)
self.attention_bias_cache = {} # per-device attention_biases cache (data-parallel compat)
@torch.no_grad()
def train(self, mode=True):
super().train(mode)
if mode and self.attention_bias_cache:
self.attention_bias_cache = {} # clear ab cache
def get_attention_biases(self, device: torch.device) -> torch.Tensor:
if torch.jit.is_tracing() or self.training:
return self.attention_biases[:, self.attention_bias_idxs]
else:
device_key = str(device)
if device_key not in self.attention_bias_cache:
self.attention_bias_cache[device_key] = self.attention_biases[:, self.attention_bias_idxs]
return self.attention_bias_cache[device_key]
def forward(self, x):
B, C, H, W = x.shape
if self.stride_conv is not None:
x = self.stride_conv(x)
q = self.q(x).reshape(B, self.num_heads, -1, self.N).permute(0, 1, 3, 2)
k = self.k(x).reshape(B, self.num_heads, -1, self.N).permute(0, 1, 2, 3)
v = self.v(x)
v_local = self.v_local(v)
v = v.reshape(B, self.num_heads, -1, self.N).permute(0, 1, 3, 2)
attn = (q @ k) * self.scale
attn = attn + self.get_attention_biases(x.device)
attn = self.talking_head1(attn)
attn = attn.softmax(dim=-1)
attn = self.talking_head2(attn)
x = (attn @ v).transpose(2, 3)
x = x.reshape(B, self.dh, self.resolution[0], self.resolution[1]) + v_local
if self.upsample is not None:
x = self.upsample(x)
x = self.act(x)
x = self.proj(x)
return x
class LocalGlobalQuery(torch.nn.Module):
def __init__(self, in_dim, out_dim):
super().__init__()
self.pool = nn.AvgPool2d(1, 2, 0)
self.local = nn.Conv2d(in_dim, in_dim, kernel_size=3, stride=2, padding=1, groups=in_dim)
self.proj = ConvNorm(in_dim, out_dim, 1)
def forward(self, x):
local_q = self.local(x)
pool_q = self.pool(x)
q = local_q + pool_q
q = self.proj(q)
return q
class Attention2dDownsample(torch.nn.Module):
attention_bias_cache: Dict[str, torch.Tensor]
def __init__(
self,
dim=384,
key_dim=16,
num_heads=8,
attn_ratio=4,
resolution=7,
out_dim=None,
act_layer=nn.GELU,
):
super().__init__()
self.num_heads = num_heads
self.scale = key_dim ** -0.5
self.key_dim = key_dim
self.resolution = to_2tuple(resolution)
self.resolution2 = tuple([math.ceil(r / 2) for r in self.resolution])
self.N = self.resolution[0] * self.resolution[1]
self.N2 = self.resolution2[0] * self.resolution2[1]
self.d = int(attn_ratio * key_dim)
self.dh = int(attn_ratio * key_dim) * num_heads
self.attn_ratio = attn_ratio
self.out_dim = out_dim or dim
kh = self.key_dim * self.num_heads
self.q = LocalGlobalQuery(dim, kh)
self.k = ConvNorm(dim, kh, 1)
self.v = ConvNorm(dim, self.dh, 1)
self.v_local = ConvNorm(self.dh, self.dh, kernel_size=3, stride=2, groups=self.dh)
self.act = act_layer()
self.proj = ConvNorm(self.dh, self.out_dim, 1)
self.attention_biases = nn.Parameter(torch.zeros(num_heads, self.N))
k_pos = torch.stack(ndgrid(torch.arange(self.resolution[0]), torch.arange(self.resolution[1]))).flatten(1)
q_pos = torch.stack(ndgrid(
torch.arange(0, self.resolution[0], step=2),
torch.arange(0, self.resolution[1], step=2)
)).flatten(1)
rel_pos = (q_pos[..., :, None] - k_pos[..., None, :]).abs()
rel_pos = (rel_pos[0] * self.resolution[1]) + rel_pos[1]
self.register_buffer('attention_bias_idxs', rel_pos, persistent=False)
self.attention_bias_cache = {} # per-device attention_biases cache (data-parallel compat)
@torch.no_grad()
def train(self, mode=True):
super().train(mode)
if mode and self.attention_bias_cache:
self.attention_bias_cache = {} # clear ab cache
def get_attention_biases(self, device: torch.device) -> torch.Tensor:
if torch.jit.is_tracing() or self.training:
return self.attention_biases[:, self.attention_bias_idxs]
else:
device_key = str(device)
if device_key not in self.attention_bias_cache:
self.attention_bias_cache[device_key] = self.attention_biases[:, self.attention_bias_idxs]
return self.attention_bias_cache[device_key]
def forward(self, x):
B, C, H, W = x.shape
q = self.q(x).reshape(B, self.num_heads, -1, self.N2).permute(0, 1, 3, 2)
k = self.k(x).reshape(B, self.num_heads, -1, self.N).permute(0, 1, 2, 3)
v = self.v(x)
v_local = self.v_local(v)
v = v.reshape(B, self.num_heads, -1, self.N).permute(0, 1, 3, 2)
attn = (q @ k) * self.scale
attn = attn + self.get_attention_biases(x.device)
attn = attn.softmax(dim=-1)
x = (attn @ v).transpose(2, 3)
x = x.reshape(B, self.dh, self.resolution2[0], self.resolution2[1]) + v_local
x = self.act(x)
x = self.proj(x)
return x
class Downsample(nn.Module):
def __init__(
self,
in_chs,
out_chs,
kernel_size=3,
stride=2,
padding=1,
resolution=7,
use_attn=False,
act_layer=nn.GELU,
norm_layer=nn.BatchNorm2d,
):
super().__init__()
kernel_size = to_2tuple(kernel_size)
stride = to_2tuple(stride)
padding = to_2tuple(padding)
norm_layer = norm_layer or nn.Identity()
self.conv = ConvNorm(
in_chs,
out_chs,
kernel_size=kernel_size,
stride=stride,
padding=padding,
norm_layer=norm_layer,
)
if use_attn:
self.attn = Attention2dDownsample(
dim=in_chs,
out_dim=out_chs,
resolution=resolution,
act_layer=act_layer,
)
else:
self.attn = None
def forward(self, x):
out = self.conv(x)
if self.attn is not None:
return self.attn(x) + out
return out
class ConvMlpWithNorm(nn.Module):
"""
Implementation of MLP with 1x1 convolutions.
Input: tensor with shape [B, C, H, W]
"""
def __init__(
self,
in_features,
hidden_features=None,
out_features=None,
act_layer=nn.GELU,
norm_layer=nn.BatchNorm2d,
drop=0.,
mid_conv=False,
):
super().__init__()
out_features = out_features or in_features
hidden_features = hidden_features or in_features
self.fc1 = ConvNormAct(
in_features, hidden_features, 1,
bias=True, norm_layer=norm_layer, act_layer=act_layer)
if mid_conv:
self.mid = ConvNormAct(
hidden_features, hidden_features, 3,
groups=hidden_features, bias=True, norm_layer=norm_layer, act_layer=act_layer)
else:
self.mid = nn.Identity()
self.drop1 = nn.Dropout(drop)
self.fc2 = ConvNorm(hidden_features, out_features, 1, norm_layer=norm_layer)
self.drop2 = nn.Dropout(drop)
def forward(self, x):
x = self.fc1(x)
x = self.mid(x)
x = self.drop1(x)
x = self.fc2(x)
x = self.drop2(x)
return x
class LayerScale2d(nn.Module):
def __init__(self, dim, init_values=1e-5, inplace=False):
super().__init__()
self.inplace = inplace
self.gamma = nn.Parameter(init_values * torch.ones(dim))
def forward(self, x):
gamma = self.gamma.view(1, -1, 1, 1)
return x.mul_(gamma) if self.inplace else x * gamma
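# Example (illustrative): each of the C channels of an NCHW tensor is scaled
# by its own learnable gamma, e.g. LayerScale2d(64)(torch.randn(2, 64, 7, 7))
# keeps the (2, 64, 7, 7) shape, scaling channel c by gamma[c] (init 1e-5).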
class EfficientFormerV2Block(nn.Module):
def __init__(
self,
dim,
mlp_ratio=4.,
act_layer=nn.GELU,
norm_layer=nn.BatchNorm2d,
proj_drop=0.,
drop_path=0.,
layer_scale_init_value=1e-5,
resolution=7,
stride=None,
use_attn=True,
):
super().__init__()
if use_attn:
self.token_mixer = Attention2d(
dim,
resolution=resolution,
act_layer=act_layer,
stride=stride,
)
self.ls1 = LayerScale2d(
dim, layer_scale_init_value) if layer_scale_init_value is not None else nn.Identity()
self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
else:
self.token_mixer = None
self.ls1 = None
self.drop_path1 = None
self.mlp = ConvMlpWithNorm(
in_features=dim,
hidden_features=int(dim * mlp_ratio),
act_layer=act_layer,
norm_layer=norm_layer,
drop=proj_drop,
mid_conv=True,
)
self.ls2 = LayerScale2d(
dim, layer_scale_init_value) if layer_scale_init_value is not None else nn.Identity()
self.drop_path2 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
def forward(self, x):
if self.token_mixer is not None:
x = x + self.drop_path1(self.ls1(self.token_mixer(x)))
x = x + self.drop_path2(self.ls2(self.mlp(x)))
return x
class Stem4(nn.Sequential):
def __init__(self, in_chs, out_chs, act_layer=nn.GELU, norm_layer=nn.BatchNorm2d):
super().__init__()
self.stride = 4
self.conv1 = ConvNormAct(
in_chs, out_chs // 2, kernel_size=3, stride=2, padding=1, bias=True,
norm_layer=norm_layer, act_layer=act_layer
)
self.conv2 = ConvNormAct(
out_chs // 2, out_chs, kernel_size=3, stride=2, padding=1, bias=True,
norm_layer=norm_layer, act_layer=act_layer
)
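# Note (added for clarity): two stride-2 convs give the stem an overall
# stride of 4, e.g. (B, 3, 224, 224) -> (B, out_chs, 56, 56).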
class EfficientFormerV2Stage(nn.Module):
def __init__(
self,
dim,
dim_out,
depth,
resolution=7,
downsample=True,
block_stride=None,
downsample_use_attn=False,
block_use_attn=False,
num_vit=1,
mlp_ratio=4.,
proj_drop=.0,
drop_path=0.,
layer_scale_init_value=1e-5,
act_layer=nn.GELU,
norm_layer=nn.BatchNorm2d,
):
super().__init__()
self.grad_checkpointing = False
mlp_ratio = to_ntuple(depth)(mlp_ratio)
resolution = to_2tuple(resolution)
if downsample:
self.downsample = Downsample(
dim,
dim_out,
use_attn=downsample_use_attn,
resolution=resolution,
norm_layer=norm_layer,
act_layer=act_layer,
)
dim = dim_out
resolution = tuple([math.ceil(r / 2) for r in resolution])
else:
assert dim == dim_out
self.downsample = nn.Identity()
blocks = []
for block_idx in range(depth):
remain_idx = depth - num_vit - 1
b = EfficientFormerV2Block(
dim,
resolution=resolution,
stride=block_stride,
mlp_ratio=mlp_ratio[block_idx],
use_attn=block_use_attn and block_idx > remain_idx,
proj_drop=proj_drop,
drop_path=drop_path[block_idx],
layer_scale_init_value=layer_scale_init_value,
act_layer=act_layer,
norm_layer=norm_layer,
)
blocks += [b]
self.blocks = nn.Sequential(*blocks)
def forward(self, x):
x = self.downsample(x)
if self.grad_checkpointing and not torch.jit.is_scripting():
x = checkpoint_seq(self.blocks, x)
else:
x = self.blocks(x)
return x
class EfficientFormerV2(nn.Module):
def __init__(
self,
depths,
in_chans=3,
img_size=224,
global_pool='avg',
embed_dims=None,
downsamples=None,
mlp_ratios=4,
norm_layer='batchnorm2d',
norm_eps=1e-5,
act_layer='gelu',
num_classes=1000,
drop_rate=0.,
proj_drop_rate=0.,
drop_path_rate=0.,
layer_scale_init_value=1e-5,
num_vit=0,
distillation=True,
):
super().__init__()
assert global_pool in ('avg', '')
self.num_classes = num_classes
self.global_pool = global_pool
self.feature_info = []
img_size = to_2tuple(img_size)
norm_layer = partial(get_norm_layer(norm_layer), eps=norm_eps)
act_layer = get_act_layer(act_layer)
self.stem = Stem4(in_chans, embed_dims[0], act_layer=act_layer, norm_layer=norm_layer)
prev_dim = embed_dims[0]
stride = 4
num_stages = len(depths)
dpr = [x.tolist() for x in torch.linspace(0, drop_path_rate, sum(depths)).split(depths)]
downsamples = downsamples or (False,) + (True,) * (len(depths) - 1)
mlp_ratios = to_ntuple(num_stages)(mlp_ratios)
stages = []
for i in range(num_stages):
curr_resolution = tuple([math.ceil(s / stride) for s in img_size])
stage = EfficientFormerV2Stage(
prev_dim,
embed_dims[i],
depth=depths[i],
resolution=curr_resolution,
downsample=downsamples[i],
block_stride=2 if i == 2 else None,
downsample_use_attn=i >= 3,
block_use_attn=i >= 2,
num_vit=num_vit,
mlp_ratio=mlp_ratios[i],
proj_drop=proj_drop_rate,
drop_path=dpr[i],
layer_scale_init_value=layer_scale_init_value,
act_layer=act_layer,
norm_layer=norm_layer,
)
if downsamples[i]:
stride *= 2
prev_dim = embed_dims[i]
self.feature_info += [dict(num_chs=prev_dim, reduction=stride, module=f'stages.{i}')]
stages.append(stage)
self.stages = nn.Sequential(*stages)
# Classifier head
self.num_features = self.head_hidden_size = embed_dims[-1]
self.norm = norm_layer(embed_dims[-1])
self.head_drop = nn.Dropout(drop_rate)
self.head = nn.Linear(embed_dims[-1], num_classes) if num_classes > 0 else nn.Identity()
self.dist = distillation
if self.dist:
self.head_dist = nn.Linear(embed_dims[-1], num_classes) if num_classes > 0 else nn.Identity()
else:
self.head_dist = None
self.apply(self.init_weights)
self.distilled_training = False
# init for classification
def init_weights(self, m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=.02)
if m.bias is not None:
nn.init.constant_(m.bias, 0)
@torch.jit.ignore
def no_weight_decay(self):
return {k for k, _ in self.named_parameters() if 'attention_biases' in k}
@torch.jit.ignore
def group_matcher(self, coarse=False):
matcher = dict(
stem=r'^stem', # stem and embed
blocks=[(r'^stages\.(\d+)', None), (r'^norm', (99999,))]
)
return matcher
@torch.jit.ignore
def set_grad_checkpointing(self, enable=True):
for s in self.stages:
s.grad_checkpointing = enable
@torch.jit.ignore
def get_classifier(self) -> nn.Module:
return self.head, self.head_dist
def reset_classifier(self, num_classes: int, global_pool: Optional[str] = None):
self.num_classes = num_classes
if global_pool is not None:
self.global_pool = global_pool
self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity()
self.head_dist = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity()
@torch.jit.ignore
def set_distilled_training(self, enable=True):
self.distilled_training = enable
def forward_features(self, x):
x = self.stem(x)
x = self.stages(x)
x = self.norm(x)
return x
def forward_head(self, x, pre_logits: bool = False):
if self.global_pool == 'avg':
x = x.mean(dim=(2, 3))
x = self.head_drop(x)
if pre_logits:
return x
x, x_dist = self.head(x), self.head_dist(x)
if self.distilled_training and self.training and not torch.jit.is_scripting():
# only return separate classification predictions when training in distilled mode
return x, x_dist
else:
            # during standard train / finetune and at inference, average the two classifier predictions
return (x + x_dist) / 2
def forward(self, x):
x = self.forward_features(x)
x = self.forward_head(x)
return x
def _cfg(url='', **kwargs):
return {
'url': url,
'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None, 'fixed_input_size': True,
'crop_pct': .95, 'interpolation': 'bicubic',
'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD,
'classifier': ('head', 'head_dist'), 'first_conv': 'stem.conv1.conv',
**kwargs
}
default_cfgs = generate_default_cfgs({
'efficientformerv2_s0.snap_dist_in1k': _cfg(
hf_hub_id='timm/',
),
'efficientformerv2_s1.snap_dist_in1k': _cfg(
hf_hub_id='timm/',
),
'efficientformerv2_s2.snap_dist_in1k': _cfg(
hf_hub_id='timm/',
),
'efficientformerv2_l.snap_dist_in1k': _cfg(
hf_hub_id='timm/',
),
})
def _create_efficientformerv2(variant, pretrained=False, **kwargs):
out_indices = kwargs.pop('out_indices', (0, 1, 2, 3))
model = build_model_with_cfg(
EfficientFormerV2, variant, pretrained,
feature_cfg=dict(flatten_sequential=True, out_indices=out_indices),
**kwargs)
return model
@register_model
def efficientformerv2_s0(pretrained=False, **kwargs) -> EfficientFormerV2:
model_args = dict(
depths=EfficientFormer_depth['S0'],
embed_dims=EfficientFormer_width['S0'],
num_vit=2,
drop_path_rate=0.0,
mlp_ratios=EfficientFormer_expansion_ratios['S0'],
)
return _create_efficientformerv2('efficientformerv2_s0', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def efficientformerv2_s1(pretrained=False, **kwargs) -> EfficientFormerV2:
model_args = dict(
depths=EfficientFormer_depth['S1'],
embed_dims=EfficientFormer_width['S1'],
num_vit=2,
drop_path_rate=0.0,
mlp_ratios=EfficientFormer_expansion_ratios['S1'],
)
return _create_efficientformerv2('efficientformerv2_s1', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def efficientformerv2_s2(pretrained=False, **kwargs) -> EfficientFormerV2:
model_args = dict(
depths=EfficientFormer_depth['S2'],
embed_dims=EfficientFormer_width['S2'],
num_vit=4,
drop_path_rate=0.02,
mlp_ratios=EfficientFormer_expansion_ratios['S2'],
)
return _create_efficientformerv2('efficientformerv2_s2', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def efficientformerv2_l(pretrained=False, **kwargs) -> EfficientFormerV2:
model_args = dict(
depths=EfficientFormer_depth['L'],
embed_dims=EfficientFormer_width['L'],
num_vit=6,
drop_path_rate=0.1,
mlp_ratios=EfficientFormer_expansion_ratios['L'],
)
return _create_efficientformerv2('efficientformerv2_l', pretrained=pretrained, **dict(model_args, **kwargs))
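# Illustrative usage sketch, not part of the original file: how the distillation head
# behaves. Assumes timm is importable and the variants above are registered; the helper
# name below is hypothetical.
def _demo_distilled_training_outputs():
    import timm
    model = timm.create_model('efficientformerv2_s0', pretrained=False)
    x = torch.randn(1, 3, 224, 224)
    model.eval()
    with torch.no_grad():
        out = model(x)  # eval: (head + head_dist) / 2, a single logits tensor
    assert out.shape == (1, 1000)
    model.train()
    model.set_distilled_training(True)  # train + distilled: both heads returned separately
    out, out_dist = model(x)
    assert out.shape == out_dist.shape == (1, 1000)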
| pytorch-image-models/timm/models/efficientformer_v2.py/0 | {
"file_path": "pytorch-image-models/timm/models/efficientformer_v2.py",
"repo_id": "pytorch-image-models",
"token_count": 12757
} |
import math
from copy import deepcopy
from functools import partial
from typing import Callable, Dict, List, Optional, Tuple, Union
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.jit import Final
from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
from timm.layers import PatchEmbed, Mlp, DropPath, ClNormMlpClassifierHead, LayerScale, \
get_norm_layer, get_act_layer, init_weight_jax, init_weight_vit, to_2tuple, use_fused_attn
from ._builder import build_model_with_cfg
from ._features import feature_take_indices
from ._manipulate import named_apply, checkpoint_seq, adapt_input_conv
from ._registry import generate_default_cfgs, register_model, register_model_deprecations
def window_partition(x, window_size: Tuple[int, int]):
"""
Partition into non-overlapping windows with padding if needed.
Args:
x (tensor): input tokens with [B, H, W, C].
window_size (int): window size.
Returns:
windows: windows after partition with [B * num_windows, window_size, window_size, C].
(Hp, Wp): padded height and width before partition
"""
B, H, W, C = x.shape
x = x.view(B, H // window_size[0], window_size[0], W // window_size[1], window_size[1], C)
windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size[0], window_size[1], C)
return windows
def window_unpartition(windows: torch.Tensor, window_size: Tuple[int, int], hw: Tuple[int, int]):
"""
Window unpartition into original sequences and removing padding.
Args:
x (tensor): input tokens with [B * num_windows, window_size, window_size, C].
window_size (int): window size.
hw (Tuple): original height and width (H, W) before padding.
Returns:
x: unpartitioned sequences with [B, H, W, C].
"""
H, W = hw
B = windows.shape[0] // (H * W // window_size[0] // window_size[1])
x = windows.view(B, H // window_size[0], W // window_size[1], window_size[0], window_size[1], -1)
x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
return x
def _calc_pad(H: int, W: int, window_size: Tuple[int, int]) -> Tuple[int, int, int, int]:
pad_h = (window_size[0] - H % window_size[0]) % window_size[0]
pad_w = (window_size[1] - W % window_size[1]) % window_size[1]
Hp, Wp = H + pad_h, W + pad_w
return Hp, Wp, pad_h, pad_w
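# Illustrative sketch, not part of the original file: pad -> partition -> unpartition
# round-trips to the original tensor. The helper name below is hypothetical.
def _demo_window_round_trip():
    x = torch.randn(2, 13, 13, 32)  # B, H, W, C with H, W not divisible by the window
    window_size = (7, 7)
    Hp, Wp, pad_h, pad_w = _calc_pad(13, 13, window_size)  # Hp = Wp = 14
    x_pad = F.pad(x, (0, 0, 0, pad_w, 0, pad_h))  # pad W then H (channels-last layout)
    windows = window_partition(x_pad, window_size)  # (2 * 2 * 2, 7, 7, 32)
    x_back = window_unpartition(windows, window_size, (Hp, Wp))
    torch.testing.assert_close(x_back[:, :13, :13, :], x)  # crop padding to recover input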
class MultiScaleAttention(nn.Module):
fused_attn: torch.jit.Final[bool]
def __init__(
self,
dim: int,
dim_out: int,
num_heads: int,
q_pool: nn.Module = None,
):
super().__init__()
self.dim = dim
self.dim_out = dim_out
self.num_heads = num_heads
head_dim = dim_out // num_heads
self.scale = head_dim ** -0.5
self.fused_attn = use_fused_attn()
self.q_pool = q_pool
self.qkv = nn.Linear(dim, dim_out * 3)
self.proj = nn.Linear(dim_out, dim_out)
def forward(self, x: torch.Tensor) -> torch.Tensor:
B, H, W, _ = x.shape
# qkv with shape (B, H * W, 3, nHead, C)
qkv = self.qkv(x).reshape(B, H * W, 3, self.num_heads, -1)
# q, k, v with shape (B, H * W, nheads, C)
q, k, v = torch.unbind(qkv, 2)
# Q pooling (for downsample at stage changes)
if self.q_pool is not None:
q = q.reshape(B, H, W, -1).permute(0, 3, 1, 2) # to BCHW for pool
q = self.q_pool(q).permute(0, 2, 3, 1)
H, W = q.shape[1:3] # downsampled shape
q = q.reshape(B, H * W, self.num_heads, -1)
# Torch's SDPA expects [B, nheads, H*W, C] so we transpose
q = q.transpose(1, 2)
k = k.transpose(1, 2)
v = v.transpose(1, 2)
if self.fused_attn:
x = F.scaled_dot_product_attention(q, k, v)
else:
q = q * self.scale
attn = q @ k.transpose(-1, -2)
attn = attn.softmax(dim=-1)
x = attn @ v
# Transpose back
x = x.transpose(1, 2).reshape(B, H, W, -1)
x = self.proj(x)
return x
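# Illustrative sketch, not part of the original file: q-pooling halves the attention
# output's spatial grid while projecting channels to dim_out. Helper name is hypothetical.
def _demo_q_pool_attention():
    attn = MultiScaleAttention(dim=64, dim_out=128, num_heads=4,
                               q_pool=nn.MaxPool2d(kernel_size=2, stride=2))
    x = torch.randn(2, 8, 8, 64)  # B, H, W, C
    out = attn(x)
    assert out.shape == (2, 4, 4, 128)  # pooled queries determine the output grid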
class MultiScaleBlock(nn.Module):
def __init__(
self,
dim: int,
dim_out: int,
num_heads: int,
mlp_ratio: float = 4.0,
q_stride: Optional[Tuple[int, int]] = None,
norm_layer: Union[nn.Module, str] = "LayerNorm",
act_layer: Union[nn.Module, str] = "GELU",
window_size: int = 0,
init_values: Optional[float] = None,
drop_path: float = 0.0,
):
super().__init__()
norm_layer = get_norm_layer(norm_layer)
act_layer = get_act_layer(act_layer)
self.window_size = to_2tuple(window_size)
self.is_windowed = any(self.window_size)
self.dim = dim
self.dim_out = dim_out
self.q_stride = q_stride
if dim != dim_out:
self.proj = nn.Linear(dim, dim_out)
else:
self.proj = nn.Identity()
self.pool = None
if self.q_stride:
            # NOTE use a separate pool instance here so it is not shared with the attn module's q_pool
self.pool = nn.MaxPool2d(
kernel_size=q_stride,
stride=q_stride,
ceil_mode=False,
)
self.norm1 = norm_layer(dim)
self.attn = MultiScaleAttention(
dim,
dim_out,
num_heads=num_heads,
q_pool=deepcopy(self.pool),
)
self.ls1 = LayerScale(dim_out, init_values) if init_values is not None else nn.Identity()
self.drop_path1 = DropPath(drop_path) if drop_path > 0.0 else nn.Identity()
self.norm2 = norm_layer(dim_out)
self.mlp = Mlp(
dim_out,
int(dim_out * mlp_ratio),
act_layer=act_layer,
)
self.ls2 = LayerScale(dim_out, init_values) if init_values is not None else nn.Identity()
self.drop_path2 = DropPath(drop_path) if drop_path > 0.0 else nn.Identity()
def forward(self, x: torch.Tensor) -> torch.Tensor:
shortcut = x # B, H, W, C
x = self.norm1(x)
# Skip connection
if self.dim != self.dim_out:
shortcut = self.proj(x)
if self.pool is not None:
shortcut = shortcut.permute(0, 3, 1, 2)
shortcut = self.pool(shortcut).permute(0, 2, 3, 1)
# Window partition
window_size = self.window_size
H, W = x.shape[1:3]
Hp, Wp = H, W # keep torchscript happy
if self.is_windowed:
Hp, Wp, pad_h, pad_w = _calc_pad(H, W, window_size)
x = F.pad(x, (0, 0, 0, pad_w, 0, pad_h))
x = window_partition(x, window_size)
# Window Attention + Q Pooling (if stage change)
x = self.attn(x)
if self.q_stride is not None:
# Shapes have changed due to Q pooling
window_size = (self.window_size[0] // self.q_stride[0], self.window_size[1] // self.q_stride[1])
H, W = shortcut.shape[1:3]
Hp, Wp, pad_h, pad_w = _calc_pad(H, W, window_size)
# Reverse window partition
if self.is_windowed:
x = window_unpartition(x, window_size, (Hp, Wp))
x = x[:, :H, :W, :].contiguous() # unpad
x = shortcut + self.drop_path1(self.ls1(x))
x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))
return x
class HieraPatchEmbed(nn.Module):
"""
Image to Patch Embedding.
"""
def __init__(
self,
kernel_size: Tuple[int, ...] = (7, 7),
stride: Tuple[int, ...] = (4, 4),
padding: Tuple[int, ...] = (3, 3),
in_chans: int = 3,
embed_dim: int = 768,
):
"""
Args:
kernel_size (Tuple): kernel size of the projection layer.
stride (Tuple): stride of the projection layer.
padding (Tuple): padding size of the projection layer.
in_chans (int): Number of input image channels.
            embed_dim (int): Patch embedding dimension.
"""
super().__init__()
self.proj = nn.Conv2d(
in_chans, embed_dim, kernel_size=kernel_size, stride=stride, padding=padding
)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = self.proj(x)
# B C H W -> B H W C
x = x.permute(0, 2, 3, 1)
return x
class HieraDet(nn.Module):
"""
Reference: https://arxiv.org/abs/2306.00989
"""
def __init__(
self,
in_chans: int = 3,
num_classes: int = 1000,
global_pool: str = 'avg',
embed_dim: int = 96, # initial embed dim
num_heads: int = 1, # initial number of heads
patch_kernel: Tuple[int, ...] = (7, 7),
patch_stride: Tuple[int, ...] = (4, 4),
patch_padding: Tuple[int, ...] = (3, 3),
patch_size: Optional[Tuple[int, ...]] = None,
q_pool: int = 3, # number of q_pool stages
q_stride: Tuple[int, int] = (2, 2), # downsample stride bet. stages
stages: Tuple[int, ...] = (2, 3, 16, 3), # blocks per stage
dim_mul: float = 2.0, # dim_mul factor at stage shift
head_mul: float = 2.0, # head_mul factor at stage shift
global_pos_size: Tuple[int, int] = (7, 7),
# window size per stage, when not using global att.
window_spec: Tuple[int, ...] = (
8,
4,
14,
7,
),
# global attn in these blocks
global_att_blocks: Tuple[int, ...] = (
12,
16,
20,
),
init_values: Optional[float] = None,
weight_init: str = '',
fix_init: bool = True,
head_init_scale: float = 0.001,
drop_rate: float = 0.0,
drop_path_rate: float = 0.0, # stochastic depth
norm_layer: Union[nn.Module, str] = "LayerNorm",
act_layer: Union[nn.Module, str] = "GELU",
):
super().__init__()
norm_layer = get_norm_layer(norm_layer)
act_layer = get_act_layer(act_layer)
assert len(stages) == len(window_spec)
self.num_classes = num_classes
self.window_spec = window_spec
self.output_fmt = 'NHWC'
depth = sum(stages)
self.q_stride = q_stride
self.stage_ends = [sum(stages[:i]) - 1 for i in range(1, len(stages) + 1)]
assert 0 <= q_pool <= len(self.stage_ends[:-1])
self.q_pool_blocks = [x + 1 for x in self.stage_ends[:-1]][:q_pool]
if patch_size is not None:
# use a non-overlapping vit style patch embed
self.patch_embed = PatchEmbed(
img_size=None,
patch_size=patch_size,
in_chans=in_chans,
embed_dim=embed_dim,
output_fmt='NHWC',
dynamic_img_pad=True,
)
else:
self.patch_embed = HieraPatchEmbed(
kernel_size=patch_kernel,
stride=patch_stride,
padding=patch_padding,
in_chans=in_chans,
embed_dim=embed_dim,
)
# Which blocks have global att?
self.global_att_blocks = global_att_blocks
# Windowed positional embedding (https://arxiv.org/abs/2311.05613)
self.global_pos_size = global_pos_size
self.pos_embed = nn.Parameter(torch.zeros(1, embed_dim, *self.global_pos_size))
self.pos_embed_window = nn.Parameter(torch.zeros(1, embed_dim, self.window_spec[0], self.window_spec[0]))
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
cur_stage = 0
self.blocks = nn.Sequential()
self.feature_info = []
for i in range(depth):
dim_out = embed_dim
            # window size lags the stage change by one block: the first block of each
            # new stage still uses the previous stage's window size
window_size = self.window_spec[cur_stage]
if self.global_att_blocks is not None:
window_size = 0 if i in self.global_att_blocks else window_size
if i - 1 in self.stage_ends:
dim_out = int(embed_dim * dim_mul)
num_heads = int(num_heads * head_mul)
cur_stage += 1
block = MultiScaleBlock(
dim=embed_dim,
dim_out=dim_out,
num_heads=num_heads,
drop_path=dpr[i],
q_stride=self.q_stride if i in self.q_pool_blocks else None,
window_size=window_size,
norm_layer=norm_layer,
act_layer=act_layer,
)
embed_dim = dim_out
self.blocks.append(block)
if i in self.stage_ends:
self.feature_info += [
dict(num_chs=dim_out, reduction=2**(cur_stage+2), module=f'blocks.{self.stage_ends[cur_stage]}')]
self.num_features = self.head_hidden_size = embed_dim
self.head = ClNormMlpClassifierHead(
embed_dim,
num_classes,
pool_type=global_pool,
drop_rate=drop_rate,
norm_layer=norm_layer,
)
# Initialize everything
if self.pos_embed is not None:
nn.init.trunc_normal_(self.pos_embed, std=0.02)
if self.pos_embed_window is not None:
nn.init.trunc_normal_(self.pos_embed_window, std=0.02)
if weight_init != 'skip':
init_fn = init_weight_jax if weight_init == 'jax' else init_weight_vit
init_fn = partial(init_fn, classifier_name='head.fc')
named_apply(init_fn, self)
if fix_init:
self.fix_init_weight()
if isinstance(self.head, ClNormMlpClassifierHead) and isinstance(self.head.fc, nn.Linear):
self.head.fc.weight.data.mul_(head_init_scale)
self.head.fc.bias.data.mul_(head_init_scale)
def _pos_embed(self, x: torch.Tensor) -> torch.Tensor:
h, w = x.shape[1:3]
window_embed = self.pos_embed_window
pos_embed = F.interpolate(self.pos_embed, size=(h, w), mode="bicubic")
tile_h = pos_embed.shape[-2] // window_embed.shape[-2]
tile_w = pos_embed.shape[-1] // window_embed.shape[-1]
pos_embed = pos_embed + window_embed.tile((tile_h, tile_w))
pos_embed = pos_embed.permute(0, 2, 3, 1)
return x + pos_embed
def fix_init_weight(self):
def rescale(param, _layer_id):
param.div_(math.sqrt(2.0 * _layer_id))
for layer_id, layer in enumerate(self.blocks):
rescale(layer.attn.proj.weight.data, layer_id + 1)
rescale(layer.mlp.fc2.weight.data, layer_id + 1)
@torch.jit.ignore
def no_weight_decay(self):
return ['pos_embed', 'pos_embed_window']
@torch.jit.ignore
def group_matcher(self, coarse: bool = False) -> Dict:
return dict(
stem=r'^pos_embed|pos_embed_window|patch_embed',
blocks=[(r'^blocks\.(\d+)', None)]
)
@torch.jit.ignore
def set_grad_checkpointing(self, enable: bool = True) -> None:
self.grad_checkpointing = enable
@torch.jit.ignore
def get_classifier(self):
return self.head.fc
def reset_classifier(self, num_classes: int, global_pool: Optional[str] = None, reset_other: bool = False):
self.num_classes = num_classes
self.head.reset(num_classes, pool_type=global_pool, reset_other=reset_other)
def forward_intermediates(
self,
x: torch.Tensor,
indices: Optional[Union[int, List[int]]] = None,
norm: bool = False,
stop_early: bool = True,
output_fmt: str = 'NCHW',
intermediates_only: bool = False,
coarse: bool = True,
) -> Union[List[torch.Tensor], Tuple[torch.Tensor, List[torch.Tensor]]]:
""" Forward features that returns intermediates.
Args:
x: Input image tensor
indices: Take last n blocks if int, all if None, select matching indices if sequence
norm: Apply norm layer to all intermediates
stop_early: Stop iterating over blocks when last desired intermediate hit
output_fmt: Shape of intermediate feature outputs
intermediates_only: Only return intermediate features
            coarse: Take coarse features (stage ends) if true, otherwise all block features
        Returns:
            List of intermediate features, or a tuple of (final features, intermediates)
            when intermediates_only is False.
        """
assert not norm, 'normalization of features not supported'
assert output_fmt in ('NCHW', 'NHWC'), 'Output format must be one of NCHW, NHWC.'
if coarse:
take_indices, max_index = feature_take_indices(len(self.stage_ends), indices)
take_indices = [self.stage_ends[i] for i in take_indices]
max_index = self.stage_ends[max_index]
else:
take_indices, max_index = feature_take_indices(len(self.blocks), indices)
x = self.patch_embed(x)
x = self._pos_embed(x)
intermediates = []
if torch.jit.is_scripting() or not stop_early: # can't slice blocks in torchscript
blocks = self.blocks
else:
blocks = self.blocks[:max_index + 1]
for i, blk in enumerate(blocks):
x = blk(x)
if i in take_indices:
x_out = x.permute(0, 3, 1, 2) if output_fmt == 'NCHW' else x
intermediates.append(x_out)
if intermediates_only:
return intermediates
return x, intermediates
def prune_intermediate_layers(
self,
indices: Union[int, List[int]] = 1,
prune_norm: bool = False,
prune_head: bool = True,
coarse: bool = True,
):
""" Prune layers not required for specified intermediates.
"""
if coarse:
take_indices, max_index = feature_take_indices(len(self.stage_ends), indices)
max_index = self.stage_ends[max_index]
else:
take_indices, max_index = feature_take_indices(len(self.blocks), indices)
self.blocks = self.blocks[:max_index + 1] # truncate blocks
if prune_head:
self.head.reset(0, reset_other=prune_norm)
return take_indices
def forward_features(self, x: torch.Tensor) -> torch.Tensor:
x = self.patch_embed(x) # BHWC
x = self._pos_embed(x)
for i, blk in enumerate(self.blocks):
x = blk(x)
return x
def forward_head(self, x, pre_logits: bool = False) -> torch.Tensor:
x = self.head(x, pre_logits=pre_logits) if pre_logits else self.head(x)
return x
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = self.forward_features(x)
x = self.forward_head(x)
return x
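# Illustrative sketch, not part of the original file: the windowed position embedding in
# _pos_embed above is a small global table resized to the feature grid plus a per-window
# table tiled across it. Helper name is hypothetical; sizes chosen so the tiling is exact.
def _demo_windowed_pos_embed():
    embed_dim, h, w, win = 32, 16, 16, 8
    global_pe = torch.randn(1, embed_dim, 7, 7)  # like global_pos_size
    window_pe = torch.randn(1, embed_dim, win, win)  # like window_spec[0]
    pe = F.interpolate(global_pe, size=(h, w), mode='bicubic')  # (1, C, 16, 16)
    pe = pe + window_pe.tile((h // win, w // win))  # tile over the trailing H, W dims
    assert pe.permute(0, 2, 3, 1).shape == (1, h, w, embed_dim)  # NHWC, added to tokens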
# NOTE sam2 appears to use 1024x1024 for all models, but T, S, & B+ have windows that fit multiples of 224.
def _cfg(url='', **kwargs):
return {
'url': url,
'num_classes': 0, 'input_size': (3, 896, 896), 'pool_size': (28, 28),
'crop_pct': 1.0, 'interpolation': 'bicubic', 'min_input_size': (3, 224, 224),
'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD,
'first_conv': 'patch_embed.proj', 'classifier': 'head.fc',
**kwargs
}
default_cfgs = generate_default_cfgs({
"sam2_hiera_tiny.fb_r896": _cfg(
# hf_hub_id='facebook/sam2-hiera-tiny',
# hf_hub_filename='sam2_hiera_tiny.pt',
hf_hub_id='timm/',
),
"sam2_hiera_tiny.fb_r896_2pt1": _cfg(
# hf_hub_id='facebook/sam2.1-hiera-tiny',
# hf_hub_filename='sam2.1_hiera_tiny.pt',
hf_hub_id='timm/',
),
"sam2_hiera_small.fb_r896": _cfg(
# hf_hub_id='facebook/sam2-hiera-small',
# hf_hub_filename='sam2_hiera_small.pt',
hf_hub_id='timm/',
),
"sam2_hiera_small.fb_r896_2pt1": _cfg(
# hf_hub_id='facebook/sam2.1-hiera-small',
# hf_hub_filename='sam2.1_hiera_small.pt',
hf_hub_id='timm/',
),
"sam2_hiera_base_plus.fb_r896": _cfg(
# hf_hub_id='facebook/sam2-hiera-base-plus',
# hf_hub_filename='sam2_hiera_base_plus.pt',
hf_hub_id='timm/',
),
"sam2_hiera_base_plus.fb_r896_2pt1": _cfg(
# hf_hub_id='facebook/sam2.1-hiera-base-plus',
# hf_hub_filename='sam2.1_hiera_base_plus.pt',
hf_hub_id='timm/',
),
"sam2_hiera_large.fb_r1024": _cfg(
# hf_hub_id='facebook/sam2-hiera-large',
# hf_hub_filename='sam2_hiera_large.pt',
hf_hub_id='timm/',
min_input_size=(3, 256, 256),
input_size=(3, 1024, 1024), pool_size=(32, 32),
),
"sam2_hiera_large.fb_r1024_2pt1": _cfg(
# hf_hub_id='facebook/sam2.1-hiera-large',
# hf_hub_filename='sam2.1_hiera_large.pt',
hf_hub_id='timm/',
min_input_size=(3, 256, 256),
input_size=(3, 1024, 1024), pool_size=(32, 32),
),
"hieradet_small.untrained": _cfg(
num_classes=1000,
input_size=(3, 256, 256), pool_size=(8, 8),
),
})
def checkpoint_filter_fn(state_dict, model=None, prefix=''):
state_dict = state_dict.get('model', state_dict)
output = {}
for k, v in state_dict.items():
if k.startswith(prefix):
k = k.replace(prefix, '')
else:
continue
k = k.replace('mlp.layers.0', 'mlp.fc1')
k = k.replace('mlp.layers.1', 'mlp.fc2')
output[k] = v
return output
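# Illustrative sketch, not part of the original file: the filter strips the checkpoint
# prefix and renames SAM2-style MLP keys to timm's fc1/fc2 naming. The keys below are
# hypothetical examples.
def _demo_checkpoint_filter():
    sd = {
        'image_encoder.trunk.blocks.0.mlp.layers.0.weight': torch.zeros(1),
        'image_encoder.trunk.blocks.0.mlp.layers.1.weight': torch.zeros(1),
        'image_encoder.neck.conv.weight': torch.zeros(1),  # dropped: prefix mismatch
    }
    out = checkpoint_filter_fn(sd, prefix='image_encoder.trunk.')
    assert sorted(out) == ['blocks.0.mlp.fc1.weight', 'blocks.0.mlp.fc2.weight']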
def _create_hiera_det(variant: str, pretrained: bool = False, **kwargs) -> HieraDet:
out_indices = kwargs.pop('out_indices', 4)
checkpoint_prefix = ''
# if 'sam2' in variant:
# # SAM2 pretrained weights have no classifier or final norm-layer (`head.norm`)
# # This is workaround loading with num_classes=0 w/o removing norm-layer.
# kwargs.setdefault('pretrained_strict', False)
# checkpoint_prefix = 'image_encoder.trunk.'
return build_model_with_cfg(
HieraDet,
variant,
pretrained,
pretrained_filter_fn=partial(checkpoint_filter_fn, prefix=checkpoint_prefix),
feature_cfg=dict(out_indices=out_indices, feature_cls='getter'),
**kwargs,
)
@register_model
def sam2_hiera_tiny(pretrained=False, **kwargs):
model_args = dict(stages=(1, 2, 7, 2), global_att_blocks=(5, 7, 9))
return _create_hiera_det('sam2_hiera_tiny', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def sam2_hiera_small(pretrained=False, **kwargs):
model_args = dict(stages=(1, 2, 11, 2), global_att_blocks=(7, 10, 13))
return _create_hiera_det('sam2_hiera_small', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def sam2_hiera_base_plus(pretrained=False, **kwargs):
model_args = dict(embed_dim=112, num_heads=2, global_pos_size=(14, 14))
return _create_hiera_det('sam2_hiera_base_plus', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def sam2_hiera_large(pretrained=False, **kwargs):
model_args = dict(
embed_dim=144,
num_heads=2,
stages=(2, 6, 36, 4),
global_att_blocks=(23, 33, 43),
window_spec=(8, 4, 16, 8),
)
return _create_hiera_det('sam2_hiera_large', pretrained=pretrained, **dict(model_args, **kwargs))
@register_model
def hieradet_small(pretrained=False, **kwargs):
model_args = dict(stages=(1, 2, 11, 2), global_att_blocks=(7, 10, 13), window_spec=(8, 4, 16, 8), init_values=1e-5)
return _create_hiera_det('hieradet_small', pretrained=pretrained, **dict(model_args, **kwargs))
# @register_model
# def hieradet_base(pretrained=False, **kwargs):
# model_args = dict(window_spec=(8, 4, 16, 8))
# return _create_hiera_det('hieradet_base', pretrained=pretrained, **dict(model_args, **kwargs))
| pytorch-image-models/timm/models/hieradet_sam2.py/0 | {
"file_path": "pytorch-image-models/timm/models/hieradet_sam2.py",
"repo_id": "pytorch-image-models",
"token_count": 11828
} |
""" NasNet-A (Large)
nasnetalarge implementation grabbed from Cadene's pretrained models
https://github.com/Cadene/pretrained-models.pytorch
"""
from functools import partial
from typing import Optional
import torch
import torch.nn as nn
import torch.nn.functional as F
from timm.layers import ConvNormAct, create_conv2d, create_pool2d, create_classifier
from ._builder import build_model_with_cfg
from ._registry import register_model, generate_default_cfgs
__all__ = ['NASNetALarge']
class ActConvBn(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=''):
super(ActConvBn, self).__init__()
self.act = nn.ReLU()
self.conv = create_conv2d(
in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding)
self.bn = nn.BatchNorm2d(out_channels, eps=0.001, momentum=0.1)
def forward(self, x):
x = self.act(x)
x = self.conv(x)
x = self.bn(x)
return x
class SeparableConv2d(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, stride, padding=''):
super(SeparableConv2d, self).__init__()
self.depthwise_conv2d = create_conv2d(
in_channels, in_channels, kernel_size=kernel_size,
stride=stride, padding=padding, groups=in_channels)
self.pointwise_conv2d = create_conv2d(
in_channels, out_channels, kernel_size=1, padding=0)
def forward(self, x):
x = self.depthwise_conv2d(x)
x = self.pointwise_conv2d(x)
return x
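# Illustrative sketch, not part of the original file: a depthwise separable conv needs far
# fewer parameters than a dense conv with the same kernel (roughly 96*9 + 96*96 weights vs
# 96*96*9, plus biases). Helper name is hypothetical.
def _demo_separable_param_count():
    sep = SeparableConv2d(96, 96, kernel_size=3, stride=1)
    full = nn.Conv2d(96, 96, kernel_size=3, padding=1, bias=False)
    n_sep = sum(p.numel() for p in sep.parameters())
    n_full = sum(p.numel() for p in full.parameters())  # 82944
    assert n_sep < n_full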
class BranchSeparables(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, stride=1, pad_type='', stem_cell=False):
super(BranchSeparables, self).__init__()
middle_channels = out_channels if stem_cell else in_channels
self.act_1 = nn.ReLU()
self.separable_1 = SeparableConv2d(
in_channels, middle_channels, kernel_size, stride=stride, padding=pad_type)
self.bn_sep_1 = nn.BatchNorm2d(middle_channels, eps=0.001, momentum=0.1)
self.act_2 = nn.ReLU(inplace=True)
self.separable_2 = SeparableConv2d(
middle_channels, out_channels, kernel_size, stride=1, padding=pad_type)
self.bn_sep_2 = nn.BatchNorm2d(out_channels, eps=0.001, momentum=0.1)
def forward(self, x):
x = self.act_1(x)
x = self.separable_1(x)
x = self.bn_sep_1(x)
x = self.act_2(x)
x = self.separable_2(x)
x = self.bn_sep_2(x)
return x
class CellStem0(nn.Module):
def __init__(self, stem_size, num_channels=42, pad_type=''):
super(CellStem0, self).__init__()
self.num_channels = num_channels
self.stem_size = stem_size
self.conv_1x1 = ActConvBn(self.stem_size, self.num_channels, 1, stride=1)
self.comb_iter_0_left = BranchSeparables(self.num_channels, self.num_channels, 5, 2, pad_type)
self.comb_iter_0_right = BranchSeparables(self.stem_size, self.num_channels, 7, 2, pad_type, stem_cell=True)
self.comb_iter_1_left = create_pool2d('max', 3, 2, padding=pad_type)
self.comb_iter_1_right = BranchSeparables(self.stem_size, self.num_channels, 7, 2, pad_type, stem_cell=True)
self.comb_iter_2_left = create_pool2d('avg', 3, 2, count_include_pad=False, padding=pad_type)
self.comb_iter_2_right = BranchSeparables(self.stem_size, self.num_channels, 5, 2, pad_type, stem_cell=True)
self.comb_iter_3_right = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type)
self.comb_iter_4_left = BranchSeparables(self.num_channels, self.num_channels, 3, 1, pad_type)
self.comb_iter_4_right = create_pool2d('max', 3, 2, padding=pad_type)
def forward(self, x):
x1 = self.conv_1x1(x)
x_comb_iter_0_left = self.comb_iter_0_left(x1)
x_comb_iter_0_right = self.comb_iter_0_right(x)
x_comb_iter_0 = x_comb_iter_0_left + x_comb_iter_0_right
x_comb_iter_1_left = self.comb_iter_1_left(x1)
x_comb_iter_1_right = self.comb_iter_1_right(x)
x_comb_iter_1 = x_comb_iter_1_left + x_comb_iter_1_right
x_comb_iter_2_left = self.comb_iter_2_left(x1)
x_comb_iter_2_right = self.comb_iter_2_right(x)
x_comb_iter_2 = x_comb_iter_2_left + x_comb_iter_2_right
x_comb_iter_3_right = self.comb_iter_3_right(x_comb_iter_0)
x_comb_iter_3 = x_comb_iter_3_right + x_comb_iter_1
x_comb_iter_4_left = self.comb_iter_4_left(x_comb_iter_0)
x_comb_iter_4_right = self.comb_iter_4_right(x1)
x_comb_iter_4 = x_comb_iter_4_left + x_comb_iter_4_right
x_out = torch.cat([x_comb_iter_1, x_comb_iter_2, x_comb_iter_3, x_comb_iter_4], 1)
return x_out
class CellStem1(nn.Module):
def __init__(self, stem_size, num_channels, pad_type=''):
super(CellStem1, self).__init__()
self.num_channels = num_channels
self.stem_size = stem_size
self.conv_1x1 = ActConvBn(2 * self.num_channels, self.num_channels, 1, stride=1)
self.act = nn.ReLU()
self.path_1 = nn.Sequential()
self.path_1.add_module('avgpool', nn.AvgPool2d(1, stride=2, count_include_pad=False))
self.path_1.add_module('conv', nn.Conv2d(self.stem_size, self.num_channels // 2, 1, stride=1, bias=False))
self.path_2 = nn.Sequential()
self.path_2.add_module('pad', nn.ZeroPad2d((-1, 1, -1, 1)))
self.path_2.add_module('avgpool', nn.AvgPool2d(1, stride=2, count_include_pad=False))
self.path_2.add_module('conv', nn.Conv2d(self.stem_size, self.num_channels // 2, 1, stride=1, bias=False))
self.final_path_bn = nn.BatchNorm2d(self.num_channels, eps=0.001, momentum=0.1)
self.comb_iter_0_left = BranchSeparables(self.num_channels, self.num_channels, 5, 2, pad_type)
self.comb_iter_0_right = BranchSeparables(self.num_channels, self.num_channels, 7, 2, pad_type)
self.comb_iter_1_left = create_pool2d('max', 3, 2, padding=pad_type)
self.comb_iter_1_right = BranchSeparables(self.num_channels, self.num_channels, 7, 2, pad_type)
self.comb_iter_2_left = create_pool2d('avg', 3, 2, count_include_pad=False, padding=pad_type)
self.comb_iter_2_right = BranchSeparables(self.num_channels, self.num_channels, 5, 2, pad_type)
self.comb_iter_3_right = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type)
self.comb_iter_4_left = BranchSeparables(self.num_channels, self.num_channels, 3, 1, pad_type)
self.comb_iter_4_right = create_pool2d('max', 3, 2, padding=pad_type)
def forward(self, x_conv0, x_stem_0):
x_left = self.conv_1x1(x_stem_0)
x_relu = self.act(x_conv0)
# path 1
x_path1 = self.path_1(x_relu)
# path 2
x_path2 = self.path_2(x_relu)
# final path
x_right = self.final_path_bn(torch.cat([x_path1, x_path2], 1))
x_comb_iter_0_left = self.comb_iter_0_left(x_left)
x_comb_iter_0_right = self.comb_iter_0_right(x_right)
x_comb_iter_0 = x_comb_iter_0_left + x_comb_iter_0_right
x_comb_iter_1_left = self.comb_iter_1_left(x_left)
x_comb_iter_1_right = self.comb_iter_1_right(x_right)
x_comb_iter_1 = x_comb_iter_1_left + x_comb_iter_1_right
x_comb_iter_2_left = self.comb_iter_2_left(x_left)
x_comb_iter_2_right = self.comb_iter_2_right(x_right)
x_comb_iter_2 = x_comb_iter_2_left + x_comb_iter_2_right
x_comb_iter_3_right = self.comb_iter_3_right(x_comb_iter_0)
x_comb_iter_3 = x_comb_iter_3_right + x_comb_iter_1
x_comb_iter_4_left = self.comb_iter_4_left(x_comb_iter_0)
x_comb_iter_4_right = self.comb_iter_4_right(x_left)
x_comb_iter_4 = x_comb_iter_4_left + x_comb_iter_4_right
x_out = torch.cat([x_comb_iter_1, x_comb_iter_2, x_comb_iter_3, x_comb_iter_4], 1)
return x_out
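# Illustrative sketch, not part of the original file: path_2 above uses a negative
# ZeroPad2d to shift the grid one pixel up-left before its stride-2 pooling, so the two
# paths sample interleaved spatial positions. Helper name is hypothetical.
def _demo_shifted_pool_path():
    x = torch.arange(16.).reshape(1, 1, 4, 4)
    shifted = nn.ZeroPad2d((-1, 1, -1, 1))(x)  # crop top/left row+col, zero-pad bottom/right
    assert shifted.shape == x.shape
    assert shifted[0, 0, 0, 0] == x[0, 0, 1, 1]  # contents moved up-left by one
    pooled_a = nn.AvgPool2d(1, stride=2)(x)  # samples even grid positions
    pooled_b = nn.AvgPool2d(1, stride=2)(shifted)  # samples odd grid positions
    assert pooled_a.shape == pooled_b.shape == (1, 1, 2, 2)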
class FirstCell(nn.Module):
def __init__(self, in_chs_left, out_chs_left, in_chs_right, out_chs_right, pad_type=''):
super(FirstCell, self).__init__()
self.conv_1x1 = ActConvBn(in_chs_right, out_chs_right, 1, stride=1)
self.act = nn.ReLU()
self.path_1 = nn.Sequential()
self.path_1.add_module('avgpool', nn.AvgPool2d(1, stride=2, count_include_pad=False))
self.path_1.add_module('conv', nn.Conv2d(in_chs_left, out_chs_left, 1, stride=1, bias=False))
self.path_2 = nn.Sequential()
self.path_2.add_module('pad', nn.ZeroPad2d((-1, 1, -1, 1)))
self.path_2.add_module('avgpool', nn.AvgPool2d(1, stride=2, count_include_pad=False))
self.path_2.add_module('conv', nn.Conv2d(in_chs_left, out_chs_left, 1, stride=1, bias=False))
self.final_path_bn = nn.BatchNorm2d(out_chs_left * 2, eps=0.001, momentum=0.1)
self.comb_iter_0_left = BranchSeparables(out_chs_right, out_chs_right, 5, 1, pad_type)
self.comb_iter_0_right = BranchSeparables(out_chs_right, out_chs_right, 3, 1, pad_type)
self.comb_iter_1_left = BranchSeparables(out_chs_right, out_chs_right, 5, 1, pad_type)
self.comb_iter_1_right = BranchSeparables(out_chs_right, out_chs_right, 3, 1, pad_type)
self.comb_iter_2_left = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type)
self.comb_iter_3_left = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type)
self.comb_iter_3_right = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type)
self.comb_iter_4_left = BranchSeparables(out_chs_right, out_chs_right, 3, 1, pad_type)
def forward(self, x, x_prev):
x_relu = self.act(x_prev)
x_path1 = self.path_1(x_relu)
x_path2 = self.path_2(x_relu)
x_left = self.final_path_bn(torch.cat([x_path1, x_path2], 1))
x_right = self.conv_1x1(x)
x_comb_iter_0_left = self.comb_iter_0_left(x_right)
x_comb_iter_0_right = self.comb_iter_0_right(x_left)
x_comb_iter_0 = x_comb_iter_0_left + x_comb_iter_0_right
x_comb_iter_1_left = self.comb_iter_1_left(x_left)
x_comb_iter_1_right = self.comb_iter_1_right(x_left)
x_comb_iter_1 = x_comb_iter_1_left + x_comb_iter_1_right
x_comb_iter_2_left = self.comb_iter_2_left(x_right)
x_comb_iter_2 = x_comb_iter_2_left + x_left
x_comb_iter_3_left = self.comb_iter_3_left(x_left)
x_comb_iter_3_right = self.comb_iter_3_right(x_left)
x_comb_iter_3 = x_comb_iter_3_left + x_comb_iter_3_right
x_comb_iter_4_left = self.comb_iter_4_left(x_right)
x_comb_iter_4 = x_comb_iter_4_left + x_right
x_out = torch.cat([x_left, x_comb_iter_0, x_comb_iter_1, x_comb_iter_2, x_comb_iter_3, x_comb_iter_4], 1)
return x_out
class NormalCell(nn.Module):
def __init__(self, in_chs_left, out_chs_left, in_chs_right, out_chs_right, pad_type=''):
super(NormalCell, self).__init__()
self.conv_prev_1x1 = ActConvBn(in_chs_left, out_chs_left, 1, stride=1, padding=pad_type)
self.conv_1x1 = ActConvBn(in_chs_right, out_chs_right, 1, stride=1, padding=pad_type)
self.comb_iter_0_left = BranchSeparables(out_chs_right, out_chs_right, 5, 1, pad_type)
self.comb_iter_0_right = BranchSeparables(out_chs_left, out_chs_left, 3, 1, pad_type)
self.comb_iter_1_left = BranchSeparables(out_chs_left, out_chs_left, 5, 1, pad_type)
self.comb_iter_1_right = BranchSeparables(out_chs_left, out_chs_left, 3, 1, pad_type)
self.comb_iter_2_left = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type)
self.comb_iter_3_left = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type)
self.comb_iter_3_right = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type)
self.comb_iter_4_left = BranchSeparables(out_chs_right, out_chs_right, 3, 1, pad_type)
def forward(self, x, x_prev):
x_left = self.conv_prev_1x1(x_prev)
x_right = self.conv_1x1(x)
x_comb_iter_0_left = self.comb_iter_0_left(x_right)
x_comb_iter_0_right = self.comb_iter_0_right(x_left)
x_comb_iter_0 = x_comb_iter_0_left + x_comb_iter_0_right
x_comb_iter_1_left = self.comb_iter_1_left(x_left)
x_comb_iter_1_right = self.comb_iter_1_right(x_left)
x_comb_iter_1 = x_comb_iter_1_left + x_comb_iter_1_right
x_comb_iter_2_left = self.comb_iter_2_left(x_right)
x_comb_iter_2 = x_comb_iter_2_left + x_left
x_comb_iter_3_left = self.comb_iter_3_left(x_left)
x_comb_iter_3_right = self.comb_iter_3_right(x_left)
x_comb_iter_3 = x_comb_iter_3_left + x_comb_iter_3_right
x_comb_iter_4_left = self.comb_iter_4_left(x_right)
x_comb_iter_4 = x_comb_iter_4_left + x_right
x_out = torch.cat([x_left, x_comb_iter_0, x_comb_iter_1, x_comb_iter_2, x_comb_iter_3, x_comb_iter_4], 1)
return x_out
class ReductionCell0(nn.Module):
def __init__(self, in_chs_left, out_chs_left, in_chs_right, out_chs_right, pad_type=''):
super(ReductionCell0, self).__init__()
self.conv_prev_1x1 = ActConvBn(in_chs_left, out_chs_left, 1, stride=1, padding=pad_type)
self.conv_1x1 = ActConvBn(in_chs_right, out_chs_right, 1, stride=1, padding=pad_type)
self.comb_iter_0_left = BranchSeparables(out_chs_right, out_chs_right, 5, 2, pad_type)
self.comb_iter_0_right = BranchSeparables(out_chs_right, out_chs_right, 7, 2, pad_type)
self.comb_iter_1_left = create_pool2d('max', 3, 2, padding=pad_type)
self.comb_iter_1_right = BranchSeparables(out_chs_right, out_chs_right, 7, 2, pad_type)
self.comb_iter_2_left = create_pool2d('avg', 3, 2, count_include_pad=False, padding=pad_type)
self.comb_iter_2_right = BranchSeparables(out_chs_right, out_chs_right, 5, 2, pad_type)
self.comb_iter_3_right = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type)
self.comb_iter_4_left = BranchSeparables(out_chs_right, out_chs_right, 3, 1, pad_type)
self.comb_iter_4_right = create_pool2d('max', 3, 2, padding=pad_type)
def forward(self, x, x_prev):
x_left = self.conv_prev_1x1(x_prev)
x_right = self.conv_1x1(x)
x_comb_iter_0_left = self.comb_iter_0_left(x_right)
x_comb_iter_0_right = self.comb_iter_0_right(x_left)
x_comb_iter_0 = x_comb_iter_0_left + x_comb_iter_0_right
x_comb_iter_1_left = self.comb_iter_1_left(x_right)
x_comb_iter_1_right = self.comb_iter_1_right(x_left)
x_comb_iter_1 = x_comb_iter_1_left + x_comb_iter_1_right
x_comb_iter_2_left = self.comb_iter_2_left(x_right)
x_comb_iter_2_right = self.comb_iter_2_right(x_left)
x_comb_iter_2 = x_comb_iter_2_left + x_comb_iter_2_right
x_comb_iter_3_right = self.comb_iter_3_right(x_comb_iter_0)
x_comb_iter_3 = x_comb_iter_3_right + x_comb_iter_1
x_comb_iter_4_left = self.comb_iter_4_left(x_comb_iter_0)
x_comb_iter_4_right = self.comb_iter_4_right(x_right)
x_comb_iter_4 = x_comb_iter_4_left + x_comb_iter_4_right
x_out = torch.cat([x_comb_iter_1, x_comb_iter_2, x_comb_iter_3, x_comb_iter_4], 1)
return x_out
class ReductionCell1(nn.Module):
def __init__(self, in_chs_left, out_chs_left, in_chs_right, out_chs_right, pad_type=''):
super(ReductionCell1, self).__init__()
self.conv_prev_1x1 = ActConvBn(in_chs_left, out_chs_left, 1, stride=1, padding=pad_type)
self.conv_1x1 = ActConvBn(in_chs_right, out_chs_right, 1, stride=1, padding=pad_type)
self.comb_iter_0_left = BranchSeparables(out_chs_right, out_chs_right, 5, 2, pad_type)
self.comb_iter_0_right = BranchSeparables(out_chs_right, out_chs_right, 7, 2, pad_type)
self.comb_iter_1_left = create_pool2d('max', 3, 2, padding=pad_type)
self.comb_iter_1_right = BranchSeparables(out_chs_right, out_chs_right, 7, 2, pad_type)
self.comb_iter_2_left = create_pool2d('avg', 3, 2, count_include_pad=False, padding=pad_type)
self.comb_iter_2_right = BranchSeparables(out_chs_right, out_chs_right, 5, 2, pad_type)
self.comb_iter_3_right = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type)
self.comb_iter_4_left = BranchSeparables(out_chs_right, out_chs_right, 3, 1, pad_type)
self.comb_iter_4_right = create_pool2d('max', 3, 2, padding=pad_type)
def forward(self, x, x_prev):
x_left = self.conv_prev_1x1(x_prev)
x_right = self.conv_1x1(x)
x_comb_iter_0_left = self.comb_iter_0_left(x_right)
x_comb_iter_0_right = self.comb_iter_0_right(x_left)
x_comb_iter_0 = x_comb_iter_0_left + x_comb_iter_0_right
x_comb_iter_1_left = self.comb_iter_1_left(x_right)
x_comb_iter_1_right = self.comb_iter_1_right(x_left)
x_comb_iter_1 = x_comb_iter_1_left + x_comb_iter_1_right
x_comb_iter_2_left = self.comb_iter_2_left(x_right)
x_comb_iter_2_right = self.comb_iter_2_right(x_left)
x_comb_iter_2 = x_comb_iter_2_left + x_comb_iter_2_right
x_comb_iter_3_right = self.comb_iter_3_right(x_comb_iter_0)
x_comb_iter_3 = x_comb_iter_3_right + x_comb_iter_1
x_comb_iter_4_left = self.comb_iter_4_left(x_comb_iter_0)
x_comb_iter_4_right = self.comb_iter_4_right(x_right)
x_comb_iter_4 = x_comb_iter_4_left + x_comb_iter_4_right
x_out = torch.cat([x_comb_iter_1, x_comb_iter_2, x_comb_iter_3, x_comb_iter_4], 1)
return x_out
class NASNetALarge(nn.Module):
"""NASNetALarge (6 @ 4032) """
def __init__(
self,
num_classes=1000,
in_chans=3,
stem_size=96,
channel_multiplier=2,
num_features=4032,
output_stride=32,
drop_rate=0.,
global_pool='avg',
pad_type='same',
):
super(NASNetALarge, self).__init__()
self.num_classes = num_classes
self.stem_size = stem_size
self.num_features = self.head_hidden_size = num_features
self.channel_multiplier = channel_multiplier
assert output_stride == 32
channels = self.num_features // 24
        # 24 is fixed for this architecture, giving 4032 // 24 = 168 base channels at default width
self.conv0 = ConvNormAct(
in_channels=in_chans, out_channels=self.stem_size, kernel_size=3, padding=0, stride=2,
norm_layer=partial(nn.BatchNorm2d, eps=0.001, momentum=0.1), apply_act=False)
self.cell_stem_0 = CellStem0(
self.stem_size, num_channels=channels // (channel_multiplier ** 2), pad_type=pad_type)
self.cell_stem_1 = CellStem1(
self.stem_size, num_channels=channels // channel_multiplier, pad_type=pad_type)
self.cell_0 = FirstCell(
in_chs_left=channels, out_chs_left=channels // 2,
in_chs_right=2 * channels, out_chs_right=channels, pad_type=pad_type)
self.cell_1 = NormalCell(
in_chs_left=2 * channels, out_chs_left=channels,
in_chs_right=6 * channels, out_chs_right=channels, pad_type=pad_type)
self.cell_2 = NormalCell(
in_chs_left=6 * channels, out_chs_left=channels,
in_chs_right=6 * channels, out_chs_right=channels, pad_type=pad_type)
self.cell_3 = NormalCell(
in_chs_left=6 * channels, out_chs_left=channels,
in_chs_right=6 * channels, out_chs_right=channels, pad_type=pad_type)
self.cell_4 = NormalCell(
in_chs_left=6 * channels, out_chs_left=channels,
in_chs_right=6 * channels, out_chs_right=channels, pad_type=pad_type)
self.cell_5 = NormalCell(
in_chs_left=6 * channels, out_chs_left=channels,
in_chs_right=6 * channels, out_chs_right=channels, pad_type=pad_type)
self.reduction_cell_0 = ReductionCell0(
in_chs_left=6 * channels, out_chs_left=2 * channels,
in_chs_right=6 * channels, out_chs_right=2 * channels, pad_type=pad_type)
self.cell_6 = FirstCell(
in_chs_left=6 * channels, out_chs_left=channels,
in_chs_right=8 * channels, out_chs_right=2 * channels, pad_type=pad_type)
self.cell_7 = NormalCell(
in_chs_left=8 * channels, out_chs_left=2 * channels,
in_chs_right=12 * channels, out_chs_right=2 * channels, pad_type=pad_type)
self.cell_8 = NormalCell(
in_chs_left=12 * channels, out_chs_left=2 * channels,
in_chs_right=12 * channels, out_chs_right=2 * channels, pad_type=pad_type)
self.cell_9 = NormalCell(
in_chs_left=12 * channels, out_chs_left=2 * channels,
in_chs_right=12 * channels, out_chs_right=2 * channels, pad_type=pad_type)
self.cell_10 = NormalCell(
in_chs_left=12 * channels, out_chs_left=2 * channels,
in_chs_right=12 * channels, out_chs_right=2 * channels, pad_type=pad_type)
self.cell_11 = NormalCell(
in_chs_left=12 * channels, out_chs_left=2 * channels,
in_chs_right=12 * channels, out_chs_right=2 * channels, pad_type=pad_type)
self.reduction_cell_1 = ReductionCell1(
in_chs_left=12 * channels, out_chs_left=4 * channels,
in_chs_right=12 * channels, out_chs_right=4 * channels, pad_type=pad_type)
self.cell_12 = FirstCell(
in_chs_left=12 * channels, out_chs_left=2 * channels,
in_chs_right=16 * channels, out_chs_right=4 * channels, pad_type=pad_type)
self.cell_13 = NormalCell(
in_chs_left=16 * channels, out_chs_left=4 * channels,
in_chs_right=24 * channels, out_chs_right=4 * channels, pad_type=pad_type)
self.cell_14 = NormalCell(
in_chs_left=24 * channels, out_chs_left=4 * channels,
in_chs_right=24 * channels, out_chs_right=4 * channels, pad_type=pad_type)
self.cell_15 = NormalCell(
in_chs_left=24 * channels, out_chs_left=4 * channels,
in_chs_right=24 * channels, out_chs_right=4 * channels, pad_type=pad_type)
self.cell_16 = NormalCell(
in_chs_left=24 * channels, out_chs_left=4 * channels,
in_chs_right=24 * channels, out_chs_right=4 * channels, pad_type=pad_type)
self.cell_17 = NormalCell(
in_chs_left=24 * channels, out_chs_left=4 * channels,
in_chs_right=24 * channels, out_chs_right=4 * channels, pad_type=pad_type)
self.act = nn.ReLU(inplace=True)
self.feature_info = [
dict(num_chs=96, reduction=2, module='conv0'),
dict(num_chs=168, reduction=4, module='cell_stem_1.conv_1x1.act'),
dict(num_chs=1008, reduction=8, module='reduction_cell_0.conv_1x1.act'),
dict(num_chs=2016, reduction=16, module='reduction_cell_1.conv_1x1.act'),
dict(num_chs=4032, reduction=32, module='act'),
]
self.global_pool, self.head_drop, self.last_linear = create_classifier(
self.num_features, self.num_classes, pool_type=global_pool, drop_rate=drop_rate)
@torch.jit.ignore
def group_matcher(self, coarse=False):
matcher = dict(
stem=r'^conv0|cell_stem_[01]',
blocks=[
(r'^cell_(\d+)', None),
(r'^reduction_cell_0', (6,)),
(r'^reduction_cell_1', (12,)),
]
)
return matcher
@torch.jit.ignore
def set_grad_checkpointing(self, enable=True):
assert not enable, 'gradient checkpointing not supported'
@torch.jit.ignore
def get_classifier(self) -> nn.Module:
return self.last_linear
def reset_classifier(self, num_classes: int, global_pool: str = 'avg'):
self.num_classes = num_classes
self.global_pool, self.last_linear = create_classifier(
self.num_features, self.num_classes, pool_type=global_pool)
def forward_features(self, x):
x_conv0 = self.conv0(x)
x_stem_0 = self.cell_stem_0(x_conv0)
x_stem_1 = self.cell_stem_1(x_conv0, x_stem_0)
x_cell_0 = self.cell_0(x_stem_1, x_stem_0)
x_cell_1 = self.cell_1(x_cell_0, x_stem_1)
x_cell_2 = self.cell_2(x_cell_1, x_cell_0)
x_cell_3 = self.cell_3(x_cell_2, x_cell_1)
x_cell_4 = self.cell_4(x_cell_3, x_cell_2)
x_cell_5 = self.cell_5(x_cell_4, x_cell_3)
x_reduction_cell_0 = self.reduction_cell_0(x_cell_5, x_cell_4)
x_cell_6 = self.cell_6(x_reduction_cell_0, x_cell_4)
x_cell_7 = self.cell_7(x_cell_6, x_reduction_cell_0)
x_cell_8 = self.cell_8(x_cell_7, x_cell_6)
x_cell_9 = self.cell_9(x_cell_8, x_cell_7)
x_cell_10 = self.cell_10(x_cell_9, x_cell_8)
x_cell_11 = self.cell_11(x_cell_10, x_cell_9)
x_reduction_cell_1 = self.reduction_cell_1(x_cell_11, x_cell_10)
x_cell_12 = self.cell_12(x_reduction_cell_1, x_cell_10)
x_cell_13 = self.cell_13(x_cell_12, x_reduction_cell_1)
x_cell_14 = self.cell_14(x_cell_13, x_cell_12)
x_cell_15 = self.cell_15(x_cell_14, x_cell_13)
x_cell_16 = self.cell_16(x_cell_15, x_cell_14)
x_cell_17 = self.cell_17(x_cell_16, x_cell_15)
x = self.act(x_cell_17)
return x
def forward_head(self, x, pre_logits: bool = False):
x = self.global_pool(x)
x = self.head_drop(x)
return x if pre_logits else self.last_linear(x)
def forward(self, x):
x = self.forward_features(x)
x = self.forward_head(x)
return x
def _create_nasnet(variant, pretrained=False, **kwargs):
return build_model_with_cfg(
NASNetALarge,
variant,
pretrained,
feature_cfg=dict(feature_cls='hook', no_rewrite=True), # not possible to re-write this model
**kwargs,
)
default_cfgs = generate_default_cfgs({
'nasnetalarge.tf_in1k': {
'hf_hub_id': 'timm/',
'url': 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/nasnetalarge-dc4a7b8b.pth',
'input_size': (3, 331, 331),
'pool_size': (11, 11),
'crop_pct': 0.911,
'interpolation': 'bicubic',
'mean': (0.5, 0.5, 0.5),
'std': (0.5, 0.5, 0.5),
'num_classes': 1000,
'first_conv': 'conv0.conv',
'classifier': 'last_linear',
},
})
@register_model
def nasnetalarge(pretrained=False, **kwargs) -> NASNetALarge:
"""NASNet-A large model architecture.
"""
model_kwargs = dict(pad_type='same', **kwargs)
return _create_nasnet('nasnetalarge', pretrained, **model_kwargs)
| pytorch-image-models/timm/models/nasnet.py/0 | {
"file_path": "pytorch-image-models/timm/models/nasnet.py",
"repo_id": "pytorch-image-models",
"token_count": 13276
} |
""" ReXNet
A PyTorch impl of `ReXNet: Diminishing Representational Bottleneck on Convolutional Neural Network` -
https://arxiv.org/abs/2007.00992
Adapted from original impl at https://github.com/clovaai/rexnet
Copyright (c) 2020-present NAVER Corp. MIT license
Changes for timm, feature extraction, and rounded channel variant hacked together by Ross Wightman
Copyright 2020 Ross Wightman
"""
from functools import partial
from math import ceil
from typing import Optional
import torch
import torch.nn as nn
from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
from timm.layers import ClassifierHead, create_act_layer, ConvNormAct, DropPath, make_divisible, SEModule
from ._builder import build_model_with_cfg
from ._efficientnet_builder import efficientnet_init_weights
from ._manipulate import checkpoint_seq
from ._registry import generate_default_cfgs, register_model
__all__ = ['RexNet'] # model_registry will add each entrypoint fn to this
SEWithNorm = partial(SEModule, norm_layer=nn.BatchNorm2d)
class LinearBottleneck(nn.Module):
def __init__(
self,
in_chs,
out_chs,
stride,
dilation=(1, 1),
exp_ratio=1.0,
se_ratio=0.,
ch_div=1,
act_layer='swish',
dw_act_layer='relu6',
drop_path=None,
):
super(LinearBottleneck, self).__init__()
self.use_shortcut = stride == 1 and dilation[0] == dilation[1] and in_chs <= out_chs
self.in_channels = in_chs
self.out_channels = out_chs
if exp_ratio != 1.:
dw_chs = make_divisible(round(in_chs * exp_ratio), divisor=ch_div)
self.conv_exp = ConvNormAct(in_chs, dw_chs, act_layer=act_layer)
else:
dw_chs = in_chs
self.conv_exp = None
self.conv_dw = ConvNormAct(
dw_chs,
dw_chs,
kernel_size=3,
stride=stride,
dilation=dilation[0],
groups=dw_chs,
apply_act=False,
)
if se_ratio > 0:
self.se = SEWithNorm(dw_chs, rd_channels=make_divisible(int(dw_chs * se_ratio), ch_div))
else:
self.se = None
self.act_dw = create_act_layer(dw_act_layer)
self.conv_pwl = ConvNormAct(dw_chs, out_chs, 1, apply_act=False)
self.drop_path = drop_path
def feat_channels(self, exp=False):
return self.conv_dw.out_channels if exp else self.out_channels
def forward(self, x):
shortcut = x
if self.conv_exp is not None:
x = self.conv_exp(x)
x = self.conv_dw(x)
if self.se is not None:
x = self.se(x)
x = self.act_dw(x)
x = self.conv_pwl(x)
if self.use_shortcut:
if self.drop_path is not None:
x = self.drop_path(x)
x = torch.cat([x[:, 0:self.in_channels] + shortcut, x[:, self.in_channels:]], dim=1)
return x
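# Illustrative sketch, not part of the original file: the shortcut above is added only to
# the first in_channels of the expanded output; the newly added channels pass through
# untouched. Helper name is hypothetical.
def _demo_partial_residual():
    in_chs, out_chs = 16, 24
    shortcut = torch.randn(2, in_chs, 8, 8)
    x = torch.randn(2, out_chs, 8, 8)
    y = torch.cat([x[:, :in_chs] + shortcut, x[:, in_chs:]], dim=1)
    torch.testing.assert_close(y[:, :in_chs], x[:, :in_chs] + shortcut)
    torch.testing.assert_close(y[:, in_chs:], x[:, in_chs:])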
def _block_cfg(
width_mult=1.0,
depth_mult=1.0,
initial_chs=16,
final_chs=180,
se_ratio=0.,
ch_div=1,
):
layers = [1, 2, 2, 3, 3, 5]
strides = [1, 2, 2, 2, 1, 2]
layers = [ceil(element * depth_mult) for element in layers]
strides = sum([[element] + [1] * (layers[idx] - 1) for idx, element in enumerate(strides)], [])
exp_ratios = [1] * layers[0] + [6] * sum(layers[1:])
depth = sum(layers[:]) * 3
base_chs = initial_chs / width_mult if width_mult < 1.0 else initial_chs
    # The channel schedule below grows widths roughly linearly from initial_chs toward
    # final_chs, so that each bottleneck expands its channel count.
out_chs_list = []
for i in range(depth // 3):
out_chs_list.append(make_divisible(round(base_chs * width_mult), divisor=ch_div))
base_chs += final_chs / (depth // 3 * 1.0)
se_ratios = [0.] * (layers[0] + layers[1]) + [se_ratio] * sum(layers[2:])
return list(zip(out_chs_list, exp_ratios, strides, se_ratios))
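# Illustrative sketch, not part of the original file: the default config yields 16
# bottlenecks whose widths grow linearly toward final_chs. Helper name is hypothetical.
def _demo_block_cfg():
    cfg = _block_cfg(width_mult=1.0, depth_mult=1.0, initial_chs=16, final_chs=180)
    assert len(cfg) == sum([1, 2, 2, 3, 3, 5])  # 16 blocks
    out_chs, exp_ratios, strides, se_ratios = zip(*cfg)
    assert list(out_chs) == sorted(out_chs)  # monotonically growing widths
    assert exp_ratios[0] == 1 and set(exp_ratios[1:]) == {6}  # only the first block is unexpanded
    assert strides.count(2) == 4  # four downsampling blocks after the stem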
def _build_blocks(
block_cfg,
prev_chs,
width_mult,
ch_div=1,
output_stride=32,
act_layer='swish',
dw_act_layer='relu6',
drop_path_rate=0.,
):
feat_chs = [prev_chs]
feature_info = []
curr_stride = 2
dilation = 1
features = []
num_blocks = len(block_cfg)
for block_idx, (chs, exp_ratio, stride, se_ratio) in enumerate(block_cfg):
next_dilation = dilation
if stride > 1:
fname = 'stem' if block_idx == 0 else f'features.{block_idx - 1}'
feature_info += [dict(num_chs=feat_chs[-1], reduction=curr_stride, module=fname)]
if curr_stride >= output_stride:
next_dilation = dilation * stride
stride = 1
block_dpr = drop_path_rate * block_idx / (num_blocks - 1) # stochastic depth linear decay rule
drop_path = DropPath(block_dpr) if block_dpr > 0. else None
features.append(LinearBottleneck(
in_chs=prev_chs,
out_chs=chs,
exp_ratio=exp_ratio,
stride=stride,
dilation=(dilation, next_dilation),
se_ratio=se_ratio,
ch_div=ch_div,
act_layer=act_layer,
dw_act_layer=dw_act_layer,
drop_path=drop_path,
))
curr_stride *= stride
dilation = next_dilation
prev_chs = chs
feat_chs += [features[-1].feat_channels()]
pen_chs = make_divisible(1280 * width_mult, divisor=ch_div)
feature_info += [dict(num_chs=feat_chs[-1], reduction=curr_stride, module=f'features.{len(features) - 1}')]
features.append(ConvNormAct(prev_chs, pen_chs, act_layer=act_layer))
return features, feature_info
class RexNet(nn.Module):
def __init__(
self,
in_chans=3,
num_classes=1000,
global_pool='avg',
output_stride=32,
initial_chs=16,
final_chs=180,
width_mult=1.0,
depth_mult=1.0,
se_ratio=1/12.,
ch_div=1,
act_layer='swish',
dw_act_layer='relu6',
drop_rate=0.2,
drop_path_rate=0.,
):
super(RexNet, self).__init__()
self.num_classes = num_classes
self.drop_rate = drop_rate
self.grad_checkpointing = False
assert output_stride in (32, 16, 8)
stem_base_chs = 32 / width_mult if width_mult < 1.0 else 32
stem_chs = make_divisible(round(stem_base_chs * width_mult), divisor=ch_div)
self.stem = ConvNormAct(in_chans, stem_chs, 3, stride=2, act_layer=act_layer)
block_cfg = _block_cfg(width_mult, depth_mult, initial_chs, final_chs, se_ratio, ch_div)
features, self.feature_info = _build_blocks(
block_cfg,
stem_chs,
width_mult,
ch_div,
output_stride,
act_layer,
dw_act_layer,
drop_path_rate,
)
self.num_features = self.head_hidden_size = features[-1].out_channels
self.features = nn.Sequential(*features)
self.head = ClassifierHead(self.num_features, num_classes, global_pool, drop_rate)
efficientnet_init_weights(self)
@torch.jit.ignore
def group_matcher(self, coarse=False):
matcher = dict(
stem=r'^stem',
blocks=r'^features\.(\d+)',
)
return matcher
@torch.jit.ignore
def set_grad_checkpointing(self, enable=True):
self.grad_checkpointing = enable
@torch.jit.ignore
def get_classifier(self) -> nn.Module:
return self.head.fc
def reset_classifier(self, num_classes: int, global_pool: Optional[str] = None):
self.num_classes = num_classes
self.head.reset(num_classes, global_pool)
def forward_features(self, x):
x = self.stem(x)
if self.grad_checkpointing and not torch.jit.is_scripting():
x = checkpoint_seq(self.features, x, flatten=True)
else:
x = self.features(x)
return x
def forward_head(self, x, pre_logits: bool = False):
return self.head(x, pre_logits=pre_logits) if pre_logits else self.head(x)
def forward(self, x):
x = self.forward_features(x)
x = self.forward_head(x)
return x
def _create_rexnet(variant, pretrained, **kwargs):
feature_cfg = dict(flatten_sequential=True)
return build_model_with_cfg(
RexNet,
variant,
pretrained,
feature_cfg=feature_cfg,
**kwargs,
)
def _cfg(url='', **kwargs):
return {
'url': url, 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7),
'crop_pct': 0.875, 'interpolation': 'bicubic',
'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD,
'first_conv': 'stem.conv', 'classifier': 'head.fc',
'license': 'mit', **kwargs
}
default_cfgs = generate_default_cfgs({
'rexnet_100.nav_in1k': _cfg(hf_hub_id='timm/'),
'rexnet_130.nav_in1k': _cfg(hf_hub_id='timm/'),
'rexnet_150.nav_in1k': _cfg(hf_hub_id='timm/'),
'rexnet_200.nav_in1k': _cfg(hf_hub_id='timm/'),
'rexnet_300.nav_in1k': _cfg(hf_hub_id='timm/'),
'rexnetr_100.untrained': _cfg(),
'rexnetr_130.untrained': _cfg(),
'rexnetr_150.untrained': _cfg(),
'rexnetr_200.sw_in12k_ft_in1k': _cfg(
hf_hub_id='timm/',
crop_pct=0.95, test_crop_pct=1.0, test_input_size=(3, 288, 288), license='apache-2.0'),
'rexnetr_300.sw_in12k_ft_in1k': _cfg(
hf_hub_id='timm/',
crop_pct=0.95, test_crop_pct=1.0, test_input_size=(3, 288, 288), license='apache-2.0'),
'rexnetr_200.sw_in12k': _cfg(
hf_hub_id='timm/',
num_classes=11821,
crop_pct=0.95, test_crop_pct=1.0, test_input_size=(3, 288, 288), license='apache-2.0'),
'rexnetr_300.sw_in12k': _cfg(
hf_hub_id='timm/',
num_classes=11821,
crop_pct=0.95, test_crop_pct=1.0, test_input_size=(3, 288, 288), license='apache-2.0'),
})
@register_model
def rexnet_100(pretrained=False, **kwargs) -> RexNet:
"""ReXNet V1 1.0x"""
return _create_rexnet('rexnet_100', pretrained, **kwargs)
@register_model
def rexnet_130(pretrained=False, **kwargs) -> RexNet:
"""ReXNet V1 1.3x"""
return _create_rexnet('rexnet_130', pretrained, width_mult=1.3, **kwargs)
@register_model
def rexnet_150(pretrained=False, **kwargs) -> RexNet:
"""ReXNet V1 1.5x"""
return _create_rexnet('rexnet_150', pretrained, width_mult=1.5, **kwargs)
@register_model
def rexnet_200(pretrained=False, **kwargs) -> RexNet:
"""ReXNet V1 2.0x"""
return _create_rexnet('rexnet_200', pretrained, width_mult=2.0, **kwargs)
@register_model
def rexnet_300(pretrained=False, **kwargs) -> RexNet:
"""ReXNet V1 3.0x"""
return _create_rexnet('rexnet_300', pretrained, width_mult=3.0, **kwargs)
@register_model
def rexnetr_100(pretrained=False, **kwargs) -> RexNet:
"""ReXNet V1 1.0x w/ rounded (mod 8) channels"""
return _create_rexnet('rexnetr_100', pretrained, ch_div=8, **kwargs)
@register_model
def rexnetr_130(pretrained=False, **kwargs) -> RexNet:
"""ReXNet V1 1.3x w/ rounded (mod 8) channels"""
return _create_rexnet('rexnetr_130', pretrained, width_mult=1.3, ch_div=8, **kwargs)
@register_model
def rexnetr_150(pretrained=False, **kwargs) -> RexNet:
"""ReXNet V1 1.5x w/ rounded (mod 8) channels"""
return _create_rexnet('rexnetr_150', pretrained, width_mult=1.5, ch_div=8, **kwargs)
@register_model
def rexnetr_200(pretrained=False, **kwargs) -> RexNet:
"""ReXNet V1 2.0x w/ rounded (mod 8) channels"""
return _create_rexnet('rexnetr_200', pretrained, width_mult=2.0, ch_div=8, **kwargs)
@register_model
def rexnetr_300(pretrained=False, **kwargs) -> RexNet:
"""ReXNet V1 3.0x w/ rounded (mod 16) channels"""
return _create_rexnet('rexnetr_300', pretrained, width_mult=3.0, ch_div=16, **kwargs)
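# Illustrative usage sketch, not part of the original file: flatten_sequential above lets
# the builder expose intermediate stages, e.g. via features_only. Assumes timm is
# importable; the helper name is hypothetical.
def _demo_rexnet_features():
    import timm
    model = timm.create_model('rexnet_100', pretrained=False, features_only=True)
    feats = model(torch.randn(1, 3, 224, 224))
    assert [f.shape[1] for f in feats] == model.feature_info.channels()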
| pytorch-image-models/timm/models/rexnet.py/0 | {
"file_path": "pytorch-image-models/timm/models/rexnet.py",
"repo_id": "pytorch-image-models",
"token_count": 5803
} |
""" Relative Position Vision Transformer (ViT) in PyTorch
NOTE: these models are experimental / WIP, expect changes
Hacked together by / Copyright 2022, Ross Wightman
"""
import logging
import math
from functools import partial
from typing import List, Optional, Tuple, Type, Union
try:
from typing import Literal
except ImportError:
from typing_extensions import Literal
import torch
import torch.nn as nn
from torch.jit import Final
from timm.data import IMAGENET_INCEPTION_MEAN, IMAGENET_INCEPTION_STD
from timm.layers import PatchEmbed, Mlp, DropPath, RelPosMlp, RelPosBias, use_fused_attn, LayerType
from ._builder import build_model_with_cfg
from ._features import feature_take_indices
from ._manipulate import named_apply, checkpoint
from ._registry import generate_default_cfgs, register_model
from .vision_transformer import get_init_weights_vit
__all__ = ['VisionTransformerRelPos'] # model_registry will add each entrypoint fn to this
_logger = logging.getLogger(__name__)
class RelPosAttention(nn.Module):
fused_attn: Final[bool]
def __init__(
self,
dim,
num_heads=8,
qkv_bias=False,
qk_norm=False,
rel_pos_cls=None,
attn_drop=0.,
proj_drop=0.,
norm_layer=nn.LayerNorm,
):
super().__init__()
assert dim % num_heads == 0, 'dim should be divisible by num_heads'
self.num_heads = num_heads
self.head_dim = dim // num_heads
self.scale = self.head_dim ** -0.5
self.fused_attn = use_fused_attn()
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
self.q_norm = norm_layer(self.head_dim) if qk_norm else nn.Identity()
self.k_norm = norm_layer(self.head_dim) if qk_norm else nn.Identity()
self.rel_pos = rel_pos_cls(num_heads=num_heads) if rel_pos_cls else None
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
def forward(self, x, shared_rel_pos: Optional[torch.Tensor] = None):
B, N, C = x.shape
qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)
q, k, v = qkv.unbind(0)
q = self.q_norm(q)
k = self.k_norm(k)
if self.fused_attn:
if self.rel_pos is not None:
attn_bias = self.rel_pos.get_bias()
elif shared_rel_pos is not None:
attn_bias = shared_rel_pos
else:
attn_bias = None
x = torch.nn.functional.scaled_dot_product_attention(
q, k, v,
attn_mask=attn_bias,
dropout_p=self.attn_drop.p if self.training else 0.,
)
else:
q = q * self.scale
attn = q @ k.transpose(-2, -1)
if self.rel_pos is not None:
attn = self.rel_pos(attn, shared_rel_pos=shared_rel_pos)
elif shared_rel_pos is not None:
attn = attn + shared_rel_pos
attn = attn.softmax(dim=-1)
attn = self.attn_drop(attn)
x = attn @ v
x = x.transpose(1, 2).reshape(B, N, C)
x = self.proj(x)
x = self.proj_drop(x)
return x
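# NOTE: for reference, the unfused branch in RelPosAttention.forward above computes,
# per head,
#   attn = softmax((q * scale) @ k^T + bias)
#   out  = attn @ v
# which matches the fused branch: a floating-point attn_mask passed to
# torch.nn.functional.scaled_dot_product_attention is added to the pre-softmax
# attention logits.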
class LayerScale(nn.Module):
def __init__(self, dim, init_values=1e-5, inplace=False):
super().__init__()
self.inplace = inplace
self.gamma = nn.Parameter(init_values * torch.ones(dim))
def forward(self, x):
return x.mul_(self.gamma) if self.inplace else x * self.gamma
class RelPosBlock(nn.Module):
def __init__(
self,
dim,
num_heads,
mlp_ratio=4.,
qkv_bias=False,
qk_norm=False,
rel_pos_cls=None,
init_values=None,
proj_drop=0.,
attn_drop=0.,
drop_path=0.,
act_layer=nn.GELU,
norm_layer=nn.LayerNorm,
):
super().__init__()
self.norm1 = norm_layer(dim)
self.attn = RelPosAttention(
dim,
num_heads,
qkv_bias=qkv_bias,
qk_norm=qk_norm,
rel_pos_cls=rel_pos_cls,
attn_drop=attn_drop,
proj_drop=proj_drop,
)
self.ls1 = LayerScale(dim, init_values=init_values) if init_values else nn.Identity()
# NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.norm2 = norm_layer(dim)
self.mlp = Mlp(
in_features=dim,
hidden_features=int(dim * mlp_ratio),
act_layer=act_layer,
drop=proj_drop,
)
self.ls2 = LayerScale(dim, init_values=init_values) if init_values else nn.Identity()
self.drop_path2 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
def forward(self, x, shared_rel_pos: Optional[torch.Tensor] = None):
x = x + self.drop_path1(self.ls1(self.attn(self.norm1(x), shared_rel_pos=shared_rel_pos)))
x = x + self.drop_path2(self.ls2(self.mlp(self.norm2(x))))
return x
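# NOTE: RelPosBlock.forward above is the standard pre-norm transformer residual form:
#   x = x + DropPath(LayerScale(Attn(LN(x))))
#   x = x + DropPath(LayerScale(MLP(LN(x))))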
class ResPostRelPosBlock(nn.Module):
def __init__(
self,
dim,
num_heads,
mlp_ratio=4.,
qkv_bias=False,
qk_norm=False,
rel_pos_cls=None,
init_values=None,
proj_drop=0.,
attn_drop=0.,
drop_path=0.,
act_layer=nn.GELU,
norm_layer=nn.LayerNorm,
):
super().__init__()
self.init_values = init_values
self.attn = RelPosAttention(
dim,
num_heads,
qkv_bias=qkv_bias,
qk_norm=qk_norm,
rel_pos_cls=rel_pos_cls,
attn_drop=attn_drop,
proj_drop=proj_drop,
)
self.norm1 = norm_layer(dim)
self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.mlp = Mlp(
in_features=dim,
hidden_features=int(dim * mlp_ratio),
act_layer=act_layer,
drop=proj_drop,
)
self.norm2 = norm_layer(dim)
self.drop_path2 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.init_weights()
def init_weights(self):
# NOTE this init overrides that base model init with specific changes for the block type
if self.init_values is not None:
nn.init.constant_(self.norm1.weight, self.init_values)
nn.init.constant_(self.norm2.weight, self.init_values)
def forward(self, x, shared_rel_pos: Optional[torch.Tensor] = None):
x = x + self.drop_path1(self.norm1(self.attn(x, shared_rel_pos=shared_rel_pos)))
x = x + self.drop_path2(self.norm2(self.mlp(x)))
return x
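# NOTE: in contrast to RelPosBlock, ResPostRelPosBlock applies the norm on the
# residual branch *after* attn/mlp (x = x + DropPath(LN(Attn(x)))), with init_values
# used to scale the norm weights instead of a separate LayerScale layer.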
class VisionTransformerRelPos(nn.Module):
""" Vision Transformer w/ Relative Position Bias
Differing from classic vit, this impl
* uses relative position index (swin v1 / beit) or relative log coord + mlp (swin v2) pos embed
* defaults to no class token (can be enabled)
* defaults to global avg pool for head (can be changed)
* layer-scale (residual branch gain) enabled
"""
def __init__(
self,
img_size: Union[int, Tuple[int, int]] = 224,
patch_size: Union[int, Tuple[int, int]] = 16,
in_chans: int = 3,
num_classes: int = 1000,
            global_pool: Literal['', 'avg', 'token'] = 'avg',
embed_dim: int = 768,
depth: int = 12,
num_heads: int = 12,
mlp_ratio: float = 4.,
qkv_bias: bool = True,
qk_norm: bool = False,
init_values: Optional[float] = 1e-6,
class_token: bool = False,
fc_norm: bool = False,
rel_pos_type: str = 'mlp',
rel_pos_dim: Optional[int] = None,
shared_rel_pos: bool = False,
drop_rate: float = 0.,
proj_drop_rate: float = 0.,
attn_drop_rate: float = 0.,
drop_path_rate: float = 0.,
weight_init: Literal['skip', 'jax', 'moco', ''] = 'skip',
fix_init: bool = False,
embed_layer: Type[nn.Module] = PatchEmbed,
norm_layer: Optional[LayerType] = None,
act_layer: Optional[LayerType] = None,
block_fn: Type[nn.Module] = RelPosBlock
):
"""
Args:
img_size: input image size
patch_size: patch size
in_chans: number of input channels
num_classes: number of classes for classification head
global_pool: type of global pooling for final sequence (default: 'avg')
embed_dim: embedding dimension
depth: depth of transformer
num_heads: number of attention heads
mlp_ratio: ratio of mlp hidden dim to embedding dim
qkv_bias: enable bias for qkv if True
qk_norm: Enable normalization of query and key in attention
init_values: layer-scale init values
class_token: use class token (default: False)
fc_norm: use pre classifier norm instead of pre-pool
            rel_pos_type: type of relative position embedding; 'mlp' variants use RelPosMlp, otherwise RelPosBias
            rel_pos_dim: hidden dim override for the relative position mlp
            shared_rel_pos: share relative pos across all blocks
drop_rate: dropout rate
proj_drop_rate: projection dropout rate
attn_drop_rate: attention dropout rate
drop_path_rate: stochastic depth rate
weight_init: weight init scheme
fix_init: apply weight initialization fix (scaling w/ layer index)
embed_layer: patch embedding layer
norm_layer: normalization layer
            act_layer: MLP activation layer
            block_fn: transformer block layer (class)
"""
super().__init__()
assert global_pool in ('', 'avg', 'token')
assert class_token or global_pool != 'token'
norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
act_layer = act_layer or nn.GELU
self.num_classes = num_classes
self.global_pool = global_pool
self.num_features = self.head_hidden_size = self.embed_dim = embed_dim # for consistency with other models
self.num_prefix_tokens = 1 if class_token else 0
self.grad_checkpointing = False
self.patch_embed = embed_layer(
img_size=img_size,
patch_size=patch_size,
in_chans=in_chans,
embed_dim=embed_dim,
)
feat_size = self.patch_embed.grid_size
r = self.patch_embed.feat_ratio() if hasattr(self.patch_embed, 'feat_ratio') else patch_size
rel_pos_args = dict(window_size=feat_size, prefix_tokens=self.num_prefix_tokens)
if rel_pos_type.startswith('mlp'):
if rel_pos_dim:
rel_pos_args['hidden_dim'] = rel_pos_dim
if 'swin' in rel_pos_type:
rel_pos_args['mode'] = 'swin'
rel_pos_cls = partial(RelPosMlp, **rel_pos_args)
else:
rel_pos_cls = partial(RelPosBias, **rel_pos_args)
self.shared_rel_pos = None
if shared_rel_pos:
self.shared_rel_pos = rel_pos_cls(num_heads=num_heads)
# NOTE shared rel pos currently mutually exclusive w/ per-block, but could support both...
rel_pos_cls = None
self.cls_token = nn.Parameter(torch.zeros(1, self.num_prefix_tokens, embed_dim)) if class_token else None
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
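        # e.g. depth=12, drop_path_rate=0.1 -> dpr ramps linearly from 0.0 (first block) to 0.1 (last block)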
self.blocks = nn.ModuleList([
block_fn(
dim=embed_dim,
num_heads=num_heads,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
qk_norm=qk_norm,
rel_pos_cls=rel_pos_cls,
init_values=init_values,
proj_drop=proj_drop_rate,
attn_drop=attn_drop_rate,
drop_path=dpr[i],
norm_layer=norm_layer,
act_layer=act_layer,
)
for i in range(depth)])
self.feature_info = [
dict(module=f'blocks.{i}', num_chs=embed_dim, reduction=r) for i in range(depth)]
self.norm = norm_layer(embed_dim) if not fc_norm else nn.Identity()
# Classifier Head
self.fc_norm = norm_layer(embed_dim) if fc_norm else nn.Identity()
self.head_drop = nn.Dropout(drop_rate)
self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity()
if weight_init != 'skip':
self.init_weights(weight_init)
if fix_init:
self.fix_init_weight()
def init_weights(self, mode=''):
assert mode in ('jax', 'moco', '')
if self.cls_token is not None:
nn.init.normal_(self.cls_token, std=1e-6)
named_apply(get_init_weights_vit(mode), self)
def fix_init_weight(self):
def rescale(param, _layer_id):
param.div_(math.sqrt(2.0 * _layer_id))
for layer_id, layer in enumerate(self.blocks):
rescale(layer.attn.proj.weight.data, layer_id + 1)
rescale(layer.mlp.fc2.weight.data, layer_id + 1)
@torch.jit.ignore
def no_weight_decay(self):
return {'cls_token'}
@torch.jit.ignore
def group_matcher(self, coarse=False):
return dict(
stem=r'^cls_token|patch_embed', # stem and embed
blocks=[(r'^blocks\.(\d+)', None), (r'^norm', (99999,))]
)
@torch.jit.ignore
def set_grad_checkpointing(self, enable=True):
self.grad_checkpointing = enable
@torch.jit.ignore
def get_classifier(self) -> nn.Module:
return self.head
def reset_classifier(self, num_classes: int, global_pool: Optional[str] = None):
self.num_classes = num_classes
if global_pool is not None:
assert global_pool in ('', 'avg', 'token')
self.global_pool = global_pool
self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity()
def forward_intermediates(
self,
x: torch.Tensor,
indices: Optional[Union[int, List[int]]] = None,
return_prefix_tokens: bool = False,
norm: bool = False,
stop_early: bool = False,
output_fmt: str = 'NCHW',
intermediates_only: bool = False,
) -> Union[List[torch.Tensor], Tuple[torch.Tensor, List[torch.Tensor]]]:
""" Forward features that returns intermediates.
Args:
x: Input image tensor
indices: Take last n blocks if int, all if None, select matching indices if sequence
return_prefix_tokens: Return both prefix and spatial intermediate tokens
norm: Apply norm layer to all intermediates
stop_early: Stop iterating over blocks when last desired intermediate hit
output_fmt: Shape of intermediate feature outputs
intermediates_only: Only return intermediate features
        Returns:
            List of intermediate features if `intermediates_only` is True, otherwise a
            tuple of (final features, list of intermediate features).
        """
assert output_fmt in ('NCHW', 'NLC'), 'Output format must be one of NCHW or NLC.'
reshape = output_fmt == 'NCHW'
intermediates = []
take_indices, max_index = feature_take_indices(len(self.blocks), indices)
# forward pass
B, _, height, width = x.shape
x = self.patch_embed(x)
if self.cls_token is not None:
x = torch.cat((self.cls_token.expand(x.shape[0], -1, -1), x), dim=1)
shared_rel_pos = self.shared_rel_pos.get_bias() if self.shared_rel_pos is not None else None
if torch.jit.is_scripting() or not stop_early: # can't slice blocks in torchscript
blocks = self.blocks
else:
blocks = self.blocks[:max_index + 1]
for i, blk in enumerate(blocks):
x = blk(x, shared_rel_pos=shared_rel_pos)
if i in take_indices:
# normalize intermediates with final norm layer if enabled
intermediates.append(self.norm(x) if norm else x)
# process intermediates
if self.num_prefix_tokens:
# split prefix (e.g. class, distill) and spatial feature tokens
prefix_tokens = [y[:, 0:self.num_prefix_tokens] for y in intermediates]
intermediates = [y[:, self.num_prefix_tokens:] for y in intermediates]
if reshape:
# reshape to BCHW output format
H, W = self.patch_embed.dynamic_feat_size((height, width))
intermediates = [y.reshape(B, H, W, -1).permute(0, 3, 1, 2).contiguous() for y in intermediates]
if not torch.jit.is_scripting() and return_prefix_tokens:
            # return_prefix_tokens is not supported in torchscript due to poor type handling
intermediates = list(zip(intermediates, prefix_tokens))
if intermediates_only:
return intermediates
x = self.norm(x)
return x, intermediates
def prune_intermediate_layers(
self,
indices: Union[int, List[int]] = 1,
prune_norm: bool = False,
prune_head: bool = True,
):
""" Prune layers not required for specified intermediates.
"""
take_indices, max_index = feature_take_indices(len(self.blocks), indices)
self.blocks = self.blocks[:max_index + 1] # truncate blocks
if prune_norm:
self.norm = nn.Identity()
if prune_head:
self.fc_norm = nn.Identity()
self.reset_classifier(0, '')
return take_indices
def forward_features(self, x):
x = self.patch_embed(x)
if self.cls_token is not None:
x = torch.cat((self.cls_token.expand(x.shape[0], -1, -1), x), dim=1)
shared_rel_pos = self.shared_rel_pos.get_bias() if self.shared_rel_pos is not None else None
for blk in self.blocks:
if self.grad_checkpointing and not torch.jit.is_scripting():
x = checkpoint(blk, x, shared_rel_pos=shared_rel_pos)
else:
x = blk(x, shared_rel_pos=shared_rel_pos)
x = self.norm(x)
return x
def forward_head(self, x, pre_logits: bool = False):
if self.global_pool:
x = x[:, self.num_prefix_tokens:].mean(dim=1) if self.global_pool == 'avg' else x[:, 0]
x = self.fc_norm(x)
x = self.head_drop(x)
return x if pre_logits else self.head(x)
def forward(self, x):
x = self.forward_features(x)
x = self.forward_head(x)
return x
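# Hedged usage sketch (illustrative only; the model name and block indices below are
# assumptions for demonstration):
#
#   import torch
#   import timm
#
#   m = timm.create_model('vit_relpos_small_patch16_224', pretrained=False).eval()
#   with torch.no_grad():
#       feats = m.forward_intermediates(
#           torch.randn(1, 3, 224, 224), indices=[2, 5, 8, 11], intermediates_only=True)
#   # default 'NCHW' output format: each element has shape (1, 384, 14, 14)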
def _create_vision_transformer_relpos(variant, pretrained=False, **kwargs):
out_indices = kwargs.pop('out_indices', 3)
model = build_model_with_cfg(
VisionTransformerRelPos, variant, pretrained,
feature_cfg=dict(out_indices=out_indices, feature_cls='getter'),
**kwargs,
)
return model
def _cfg(url='', **kwargs):
return {
'url': url,
'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None,
'crop_pct': .9, 'interpolation': 'bicubic', 'fixed_input_size': True,
'mean': IMAGENET_INCEPTION_MEAN, 'std': IMAGENET_INCEPTION_STD,
'first_conv': 'patch_embed.proj', 'classifier': 'head',
**kwargs
}
default_cfgs = generate_default_cfgs({
'vit_relpos_base_patch32_plus_rpn_256.sw_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/vit_replos_base_patch32_plus_rpn_256-sw-dd486f51.pth',
hf_hub_id='timm/',
input_size=(3, 256, 256)),
'vit_relpos_base_patch16_plus_240.untrained': _cfg(url='', input_size=(3, 240, 240)),
'vit_relpos_small_patch16_224.sw_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/vit_relpos_small_patch16_224-sw-ec2778b4.pth',
hf_hub_id='timm/'),
'vit_relpos_medium_patch16_224.sw_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/vit_relpos_medium_patch16_224-sw-11c174af.pth',
hf_hub_id='timm/'),
'vit_relpos_base_patch16_224.sw_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/vit_relpos_base_patch16_224-sw-49049aed.pth',
hf_hub_id='timm/'),
'vit_srelpos_small_patch16_224.sw_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/vit_srelpos_small_patch16_224-sw-6cdb8849.pth',
hf_hub_id='timm/'),
'vit_srelpos_medium_patch16_224.sw_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/vit_srelpos_medium_patch16_224-sw-ad702b8c.pth',
hf_hub_id='timm/'),
'vit_relpos_medium_patch16_cls_224.sw_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/vit_relpos_medium_patch16_cls_224-sw-cfe8e259.pth',
hf_hub_id='timm/'),
'vit_relpos_base_patch16_cls_224.untrained': _cfg(),
'vit_relpos_base_patch16_clsgap_224.sw_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/vit_relpos_base_patch16_gapcls_224-sw-1a341d6c.pth',
hf_hub_id='timm/'),
'vit_relpos_small_patch16_rpn_224.untrained': _cfg(),
'vit_relpos_medium_patch16_rpn_224.sw_in1k': _cfg(
url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tpu-weights/vit_relpos_medium_patch16_rpn_224-sw-5d2befd8.pth',
hf_hub_id='timm/'),
'vit_relpos_base_patch16_rpn_224.untrained': _cfg(),
})
@register_model
def vit_relpos_base_patch32_plus_rpn_256(pretrained=False, **kwargs) -> VisionTransformerRelPos:
""" ViT-Base (ViT-B/32+) w/ relative log-coord position and residual post-norm, no class token
"""
model_args = dict(patch_size=32, embed_dim=896, depth=12, num_heads=14, block_fn=ResPostRelPosBlock)
model = _create_vision_transformer_relpos(
'vit_relpos_base_patch32_plus_rpn_256', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def vit_relpos_base_patch16_plus_240(pretrained=False, **kwargs) -> VisionTransformerRelPos:
""" ViT-Base (ViT-B/16+) w/ relative log-coord position, no class token
"""
model_args = dict(patch_size=16, embed_dim=896, depth=12, num_heads=14)
model = _create_vision_transformer_relpos(
'vit_relpos_base_patch16_plus_240', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def vit_relpos_small_patch16_224(pretrained=False, **kwargs) -> VisionTransformerRelPos:
""" ViT-Base (ViT-B/16) w/ relative log-coord position, no class token
"""
model_args = dict(patch_size=16, embed_dim=384, depth=12, num_heads=6, qkv_bias=False, fc_norm=True)
model = _create_vision_transformer_relpos(
'vit_relpos_small_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def vit_relpos_medium_patch16_224(pretrained=False, **kwargs) -> VisionTransformerRelPos:
""" ViT-Base (ViT-B/16) w/ relative log-coord position, no class token
"""
model_args = dict(
patch_size=16, embed_dim=512, depth=12, num_heads=8, qkv_bias=False, fc_norm=True)
model = _create_vision_transformer_relpos(
'vit_relpos_medium_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def vit_relpos_base_patch16_224(pretrained=False, **kwargs) -> VisionTransformerRelPos:
""" ViT-Base (ViT-B/16) w/ relative log-coord position, no class token
"""
model_args = dict(
patch_size=16, embed_dim=768, depth=12, num_heads=12, qkv_bias=False, fc_norm=True)
model = _create_vision_transformer_relpos(
'vit_relpos_base_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def vit_srelpos_small_patch16_224(pretrained=False, **kwargs) -> VisionTransformerRelPos:
""" ViT-Base (ViT-B/16) w/ shared relative log-coord position, no class token
"""
model_args = dict(
patch_size=16, embed_dim=384, depth=12, num_heads=6, qkv_bias=False, fc_norm=False,
rel_pos_dim=384, shared_rel_pos=True)
model = _create_vision_transformer_relpos(
'vit_srelpos_small_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def vit_srelpos_medium_patch16_224(pretrained=False, **kwargs) -> VisionTransformerRelPos:
""" ViT-Base (ViT-B/16) w/ shared relative log-coord position, no class token
"""
model_args = dict(
patch_size=16, embed_dim=512, depth=12, num_heads=8, qkv_bias=False, fc_norm=False,
rel_pos_dim=512, shared_rel_pos=True)
model = _create_vision_transformer_relpos(
'vit_srelpos_medium_patch16_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def vit_relpos_medium_patch16_cls_224(pretrained=False, **kwargs) -> VisionTransformerRelPos:
""" ViT-Base (ViT-M/16) w/ relative log-coord position, class token present
"""
model_args = dict(
patch_size=16, embed_dim=512, depth=12, num_heads=8, qkv_bias=False, fc_norm=False,
rel_pos_dim=256, class_token=True, global_pool='token')
model = _create_vision_transformer_relpos(
'vit_relpos_medium_patch16_cls_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def vit_relpos_base_patch16_cls_224(pretrained=False, **kwargs) -> VisionTransformerRelPos:
""" ViT-Base (ViT-B/16) w/ relative log-coord position, class token present
"""
model_args = dict(
patch_size=16, embed_dim=768, depth=12, num_heads=12, qkv_bias=False, class_token=True, global_pool='token')
model = _create_vision_transformer_relpos(
'vit_relpos_base_patch16_cls_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def vit_relpos_base_patch16_clsgap_224(pretrained=False, **kwargs) -> VisionTransformerRelPos:
""" ViT-Base (ViT-B/16) w/ relative log-coord position, class token present
NOTE this config is a bit of a mistake, class token was enabled but global avg-pool w/ fc-norm was not disabled
Leaving here for comparisons w/ a future re-train as it performs quite well.
"""
model_args = dict(
patch_size=16, embed_dim=768, depth=12, num_heads=12, qkv_bias=False, fc_norm=True, class_token=True)
model = _create_vision_transformer_relpos(
'vit_relpos_base_patch16_clsgap_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def vit_relpos_small_patch16_rpn_224(pretrained=False, **kwargs) -> VisionTransformerRelPos:
""" ViT-Base (ViT-B/16) w/ relative log-coord position and residual post-norm, no class token
"""
model_args = dict(
patch_size=16, embed_dim=384, depth=12, num_heads=6, qkv_bias=False, block_fn=ResPostRelPosBlock)
model = _create_vision_transformer_relpos(
'vit_relpos_small_patch16_rpn_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def vit_relpos_medium_patch16_rpn_224(pretrained=False, **kwargs) -> VisionTransformerRelPos:
""" ViT-Base (ViT-B/16) w/ relative log-coord position and residual post-norm, no class token
"""
model_args = dict(
patch_size=16, embed_dim=512, depth=12, num_heads=8, qkv_bias=False, block_fn=ResPostRelPosBlock)
model = _create_vision_transformer_relpos(
'vit_relpos_medium_patch16_rpn_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
@register_model
def vit_relpos_base_patch16_rpn_224(pretrained=False, **kwargs) -> VisionTransformerRelPos:
""" ViT-Base (ViT-B/16) w/ relative log-coord position and residual post-norm, no class token
"""
model_args = dict(
patch_size=16, embed_dim=768, depth=12, num_heads=12, qkv_bias=False, block_fn=ResPostRelPosBlock)
model = _create_vision_transformer_relpos(
'vit_relpos_base_patch16_rpn_224', pretrained=pretrained, **dict(model_args, **kwargs))
return model
# ---- end of file: pytorch-image-models/timm/models/vision_transformer_relpos.py ----
"""
AdamP Optimizer Implementation copied from https://github.com/clovaai/AdamP/blob/master/adamp/adamp.py
Paper: `Slowing Down the Weight Norm Increase in Momentum-based Optimizers` - https://arxiv.org/abs/2006.08217
Code: https://github.com/clovaai/AdamP
Copyright (c) 2020-present NAVER Corp.
MIT license
"""
import torch
import torch.nn.functional as F
from torch.optim.optimizer import Optimizer
import math
def _channel_view(x) -> torch.Tensor:
return x.reshape(x.size(0), -1)
def _layer_view(x) -> torch.Tensor:
return x.reshape(1, -1)
def projection(p, grad, perturb, delta: float, wd_ratio: float, eps: float):
wd = 1.
expand_size = (-1,) + (1,) * (len(p.shape) - 1)
for view_func in [_channel_view, _layer_view]:
param_view = view_func(p)
grad_view = view_func(grad)
cosine_sim = F.cosine_similarity(grad_view, param_view, dim=1, eps=eps).abs_()
# FIXME this is a problem for PyTorch XLA
if cosine_sim.max() < delta / math.sqrt(param_view.size(1)):
p_n = p / param_view.norm(p=2, dim=1).add_(eps).reshape(expand_size)
perturb -= p_n * view_func(p_n * perturb).sum(dim=1).reshape(expand_size)
wd = wd_ratio
return perturb, wd
return perturb, wd
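# Worked example of the scale-invariance test above (numbers are assumptions): for a
# conv weight of shape (64, 3, 3, 3), _channel_view gives a (64, 27) view, so with the
# default delta=0.1 the threshold is delta / sqrt(27) ~= 0.1 / 5.196 ~= 0.019. If the
# max per-channel |cos(grad, p)| falls below it, the update is projected onto the
# tangent space of the per-channel weight norm and weight decay is damped by wd_ratio.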
class AdamP(Optimizer):
def __init__(
self,
params,
lr=1e-3,
betas=(0.9, 0.999),
eps=1e-8,
weight_decay=0,
delta=0.1,
wd_ratio=0.1,
nesterov=False,
):
defaults = dict(
lr=lr,
betas=betas,
eps=eps,
weight_decay=weight_decay,
delta=delta,
wd_ratio=wd_ratio,
nesterov=nesterov,
)
super(AdamP, self).__init__(params, defaults)
@torch.no_grad()
def step(self, closure=None):
loss = None
if closure is not None:
with torch.enable_grad():
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad
beta1, beta2 = group['betas']
nesterov = group['nesterov']
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
state['exp_avg'] = torch.zeros_like(p)
state['exp_avg_sq'] = torch.zeros_like(p)
# Adam
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
state['step'] += 1
bias_correction1 = 1 - beta1 ** state['step']
bias_correction2 = 1 - beta2 ** state['step']
exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps'])
step_size = group['lr'] / bias_correction1
if nesterov:
perturb = (beta1 * exp_avg + (1 - beta1) * grad) / denom
else:
perturb = exp_avg / denom
# Projection
wd_ratio = 1.
if len(p.shape) > 1:
perturb, wd_ratio = projection(p, grad, perturb, group['delta'], group['wd_ratio'], group['eps'])
# Weight decay
if group['weight_decay'] > 0:
p.mul_(1. - group['lr'] * group['weight_decay'] * wd_ratio)
# Step
p.add_(perturb, alpha=-step_size)
return loss
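# Hedged usage sketch (illustrative only; assumes this class is importable as
# timm.optim.AdamP):
#
#   import torch
#   from timm.optim import AdamP
#
#   model = torch.nn.Linear(10, 2)
#   opt = AdamP(model.parameters(), lr=1e-3, weight_decay=1e-2, nesterov=True)
#   loss = model(torch.randn(4, 10)).sum()
#   loss.backward()
#   opt.step()
#   opt.zero_grad()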
# ---- end of file: pytorch-image-models/timm/optim/adamp.py ----
"""RAdam Optimizer.
Implementation lifted from: https://github.com/LiyuanLucasLiu/RAdam
Paper: `On the Variance of the Adaptive Learning Rate and Beyond` - https://arxiv.org/abs/1908.03265
NOTE: This impl has been deprecated in favour of torch.optim.RAdam and remains as a reference
"""
import math
import torch
from torch.optim.optimizer import Optimizer
class RAdamLegacy(Optimizer):
""" PyTorch RAdam optimizer
    NOTE: This impl has been deprecated in favour of torch.optim.RAdam and remains as a reference
"""
def __init__(
self,
params,
lr=1e-3,
betas=(0.9, 0.999),
eps=1e-8,
weight_decay=0,
):
defaults = dict(
lr=lr,
betas=betas,
eps=eps,
weight_decay=weight_decay,
buffer=[[None, None, None] for _ in range(10)]
)
super(RAdamLegacy, self).__init__(params, defaults)
def __setstate__(self, state):
super(RAdamLegacy, self).__setstate__(state)
@torch.no_grad()
def step(self, closure=None):
loss = None
if closure is not None:
with torch.enable_grad():
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.float()
if grad.is_sparse:
raise RuntimeError('RAdam does not support sparse gradients')
p_fp32 = p.float()
state = self.state[p]
if len(state) == 0:
state['step'] = 0
state['exp_avg'] = torch.zeros_like(p_fp32)
state['exp_avg_sq'] = torch.zeros_like(p_fp32)
else:
state['exp_avg'] = state['exp_avg'].type_as(p_fp32)
state['exp_avg_sq'] = state['exp_avg_sq'].type_as(p_fp32)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
beta1, beta2 = group['betas']
exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
state['step'] += 1
buffered = group['buffer'][int(state['step'] % 10)]
if state['step'] == buffered[0]:
num_sma, step_size = buffered[1], buffered[2]
else:
buffered[0] = state['step']
beta2_t = beta2 ** state['step']
num_sma_max = 2 / (1 - beta2) - 1
num_sma = num_sma_max - 2 * state['step'] * beta2_t / (1 - beta2_t)
buffered[1] = num_sma
# more conservative since it's an approximated value
if num_sma >= 5:
step_size = group['lr'] * math.sqrt(
(1 - beta2_t) *
(num_sma - 4) / (num_sma_max - 4) *
(num_sma - 2) / num_sma *
num_sma_max / (num_sma_max - 2)) / (1 - beta1 ** state['step'])
else:
step_size = group['lr'] / (1 - beta1 ** state['step'])
buffered[2] = step_size
if group['weight_decay'] != 0:
p_fp32.add_(p_fp32, alpha=-group['weight_decay'] * group['lr'])
# more conservative since it's an approximated value
if num_sma >= 5:
denom = exp_avg_sq.sqrt().add_(group['eps'])
p_fp32.addcdiv_(exp_avg, denom, value=-step_size)
else:
p_fp32.add_(exp_avg, alpha=-step_size)
p.copy_(p_fp32)
return loss
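# Migration sketch (hedged): new code should prefer the upstream optimizer, e.g.
#
#   import torch
#   opt = torch.optim.RAdam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
#
# RAdamLegacy above remains only as a reference implementation.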
# ---- end of file: pytorch-image-models/timm/optim/radam.py ----
import fnmatch
import re
from collections import OrderedDict
from typing import Union, Optional, List
import torch
class AttentionExtract(torch.nn.Module):
# defaults should cover a significant number of timm models with attention maps.
default_node_names = ['*attn.softmax']
default_module_names = ['*attn_drop']
def __init__(
self,
            model: torch.nn.Module,
names: Optional[List[str]] = None,
mode: str = 'eval',
method: str = 'fx',
hook_type: str = 'forward',
use_regex: bool = False,
):
""" Extract attention maps (or other activations) from a model by name.
Args:
model: Instantiated model to extract from.
names: List of concrete or wildcard names to extract. Names are nodes for fx and modules for hooks.
mode: 'train' or 'eval' model mode.
method: 'fx' or 'hook' extraction method.
hook_type: 'forward' or 'forward_pre' hooks used.
use_regex: Use regex instead of fnmatch
"""
super().__init__()
assert mode in ('train', 'eval')
if mode == 'train':
model = model.train()
else:
model = model.eval()
assert method in ('fx', 'hook')
if method == 'fx':
# names are activation node names
from timm.models._features_fx import get_graph_node_names, GraphExtractNet
node_names = get_graph_node_names(model)[0 if mode == 'train' else 1]
names = names or self.default_node_names
if use_regex:
regexes = [re.compile(r) for r in names]
matched = [g for g in node_names if any([r.match(g) for r in regexes])]
else:
matched = [g for g in node_names if any([fnmatch.fnmatch(g, n) for n in names])]
if not matched:
raise RuntimeError(f'No node names found matching {names}.')
self.model = GraphExtractNet(model, matched, return_dict=True)
self.hooks = None
else:
# names are module names
assert hook_type in ('forward', 'forward_pre')
from timm.models._features import FeatureHooks
module_names = [n for n, m in model.named_modules()]
names = names or self.default_module_names
if use_regex:
regexes = [re.compile(r) for r in names]
matched = [m for m in module_names if any([r.match(m) for r in regexes])]
else:
matched = [m for m in module_names if any([fnmatch.fnmatch(m, n) for n in names])]
if not matched:
raise RuntimeError(f'No module names found matching {names}.')
self.model = model
self.hooks = FeatureHooks(matched, model.named_modules(), default_hook_type=hook_type)
self.names = matched
self.mode = mode
self.method = method
def forward(self, x):
if self.hooks is not None:
self.model(x)
output = self.hooks.get_output(device=x.device)
else:
output = self.model(x)
return output
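# Hedged usage sketch (illustrative only; the model name is an assumption, and the
# default '*attn.softmax' / '*attn_drop' targets only exist when fused attention is
# disabled, e.g. via the TIMM_FUSED_ATTN=0 environment variable before importing timm):
#
#   import torch
#   import timm
#
#   model = timm.create_model('vit_small_patch16_224', pretrained=False)
#   extractor = AttentionExtract(model, method='fx')
#   out = extractor(torch.randn(1, 3, 224, 224))
#   # out: dict of matched node name -> tensor, e.g. shape (1, num_heads, tokens, tokens)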
# ---- end of file: pytorch-image-models/timm/utils/attention_extract.py ----
#!/usr/bin/env python3
""" ImageNet Training Script
This is intended to be a lean and easily modifiable ImageNet training script that reproduces ImageNet
training results with some of the latest networks and training techniques. It favours canonical PyTorch
and standard Python style over trying to be able to 'do it all.' That said, it offers quite a few speed
and training result improvements over the usual PyTorch example scripts. Repurpose as you see fit.
This script was started from an early version of the PyTorch ImageNet example
(https://github.com/pytorch/examples/tree/master/imagenet)
NVIDIA CUDA specific speedups adopted from NVIDIA Apex examples
(https://github.com/NVIDIA/apex/tree/master/examples/imagenet)
Hacked together by / Copyright 2020 Ross Wightman (https://github.com/rwightman)
"""
import argparse
import copy
import importlib
import json
import logging
import os
import time
from collections import OrderedDict
from contextlib import suppress
from datetime import datetime
from functools import partial
import torch
import torch.nn as nn
import torchvision.utils
import yaml
from torch.nn.parallel import DistributedDataParallel as NativeDDP
from timm import utils
from timm.data import create_dataset, create_loader, resolve_data_config, Mixup, FastCollateMixup, AugMixDataset
from timm.layers import convert_splitbn_model, convert_sync_batchnorm, set_fast_norm
from timm.loss import JsdCrossEntropy, SoftTargetCrossEntropy, BinaryCrossEntropy, LabelSmoothingCrossEntropy
from timm.models import create_model, safe_model_name, resume_checkpoint, load_checkpoint, model_parameters
from timm.optim import create_optimizer_v2, optimizer_kwargs
from timm.scheduler import create_scheduler_v2, scheduler_kwargs
from timm.utils import ApexScaler, NativeScaler
try:
from apex import amp
from apex.parallel import DistributedDataParallel as ApexDDP
from apex.parallel import convert_syncbn_model
has_apex = True
except ImportError:
has_apex = False
try:
import wandb
has_wandb = True
except ImportError:
has_wandb = False
try:
from functorch.compile import memory_efficient_fusion
has_functorch = True
except ImportError as e:
has_functorch = False
has_compile = hasattr(torch, 'compile')
_logger = logging.getLogger('train')
# The first arg parser parses out only the --config argument, this argument is used to
# load a yaml file containing key-values that override the defaults for the main parser below
config_parser = parser = argparse.ArgumentParser(description='Training Config', add_help=False)
parser.add_argument('-c', '--config', default='', type=str, metavar='FILE',
help='YAML config file specifying default arguments')
parser = argparse.ArgumentParser(description='PyTorch ImageNet Training')
# Dataset parameters
group = parser.add_argument_group('Dataset parameters')
# Keep this argument outside the dataset group because it is positional.
parser.add_argument('data', nargs='?', metavar='DIR', const=None,
help='path to dataset (positional is *deprecated*, use --data-dir)')
group.add_argument('--data-dir', metavar='DIR',
help='path to dataset (root dir)')
group.add_argument('--dataset', metavar='NAME', default='',
help='dataset type + name ("<type>/<name>") (default: ImageFolder or ImageTar if empty)')
group.add_argument('--train-split', metavar='NAME', default='train',
help='dataset train split (default: train)')
group.add_argument('--val-split', metavar='NAME', default='validation',
help='dataset validation split (default: validation)')
group.add_argument('--train-num-samples', default=None, type=int,
metavar='N', help='Manually specify num samples in train split, for IterableDatasets.')
group.add_argument('--val-num-samples', default=None, type=int,
metavar='N', help='Manually specify num samples in validation split, for IterableDatasets.')
group.add_argument('--dataset-download', action='store_true', default=False,
help='Allow download of dataset for torch/ and tfds/ datasets that support it.')
group.add_argument('--class-map', default='', type=str, metavar='FILENAME',
help='path to class to idx mapping file (default: "")')
group.add_argument('--input-img-mode', default=None, type=str,
help='Dataset image conversion mode for input images.')
group.add_argument('--input-key', default=None, type=str,
help='Dataset key for input images.')
group.add_argument('--target-key', default=None, type=str,
help='Dataset key for target labels.')
group.add_argument('--dataset-trust-remote-code', action='store_true', default=False,
help='Allow huggingface dataset import to execute code downloaded from the dataset\'s repo.')
# Model parameters
group = parser.add_argument_group('Model parameters')
group.add_argument('--model', default='resnet50', type=str, metavar='MODEL',
help='Name of model to train (default: "resnet50")')
group.add_argument('--pretrained', action='store_true', default=False,
help='Start with pretrained version of specified network (if avail)')
group.add_argument('--pretrained-path', default=None, type=str,
help='Load this checkpoint as if they were the pretrained weights (with adaptation).')
group.add_argument('--initial-checkpoint', default='', type=str, metavar='PATH',
help='Load this checkpoint into model after initialization (default: none)')
group.add_argument('--resume', default='', type=str, metavar='PATH',
help='Resume full model and optimizer state from checkpoint (default: none)')
group.add_argument('--no-resume-opt', action='store_true', default=False,
help='prevent resume of optimizer state when resuming model')
group.add_argument('--num-classes', type=int, default=None, metavar='N',
help='number of label classes (Model default if None)')
group.add_argument('--gp', default=None, type=str, metavar='POOL',
help='Global pool type, one of (fast, avg, max, avgmax, avgmaxc). Model default if None.')
group.add_argument('--img-size', type=int, default=None, metavar='N',
help='Image size (default: None => model default)')
group.add_argument('--in-chans', type=int, default=None, metavar='N',
help='Image input channels (default: None => 3)')
group.add_argument('--input-size', default=None, nargs=3, type=int, metavar='N',
help='Input all image dimensions (d h w, e.g. --input-size 3 224 224), uses model default if empty')
group.add_argument('--crop-pct', default=None, type=float,
metavar='N', help='Input image center crop percent (for validation only)')
group.add_argument('--mean', type=float, nargs='+', default=None, metavar='MEAN',
help='Override mean pixel value of dataset')
group.add_argument('--std', type=float, nargs='+', default=None, metavar='STD',
help='Override std deviation of dataset')
group.add_argument('--interpolation', default='', type=str, metavar='NAME',
help='Image resize interpolation type (overrides model)')
group.add_argument('-b', '--batch-size', type=int, default=128, metavar='N',
help='Input batch size for training (default: 128)')
group.add_argument('-vb', '--validation-batch-size', type=int, default=None, metavar='N',
help='Validation batch size override (default: None)')
group.add_argument('--channels-last', action='store_true', default=False,
help='Use channels_last memory layout')
group.add_argument('--fuser', default='', type=str,
help="Select jit fuser. One of ('', 'te', 'old', 'nvfuser')")
group.add_argument('--grad-accum-steps', type=int, default=1, metavar='N',
help='The number of steps to accumulate gradients (default: 1)')
group.add_argument('--grad-checkpointing', action='store_true', default=False,
help='Enable gradient checkpointing through model blocks/stages')
group.add_argument('--fast-norm', default=False, action='store_true',
help='enable experimental fast-norm')
group.add_argument('--model-kwargs', nargs='*', default={}, action=utils.ParseKwargs)
group.add_argument('--head-init-scale', default=None, type=float,
help='Head initialization scale')
group.add_argument('--head-init-bias', default=None, type=float,
help='Head initialization bias value')
group.add_argument('--torchcompile-mode', type=str, default=None,
help="torch.compile mode (default: None).")
# scripting / codegen
scripting_group = group.add_mutually_exclusive_group()
scripting_group.add_argument('--torchscript', dest='torchscript', action='store_true',
help='torch.jit.script the full model')
scripting_group.add_argument('--torchcompile', nargs='?', type=str, default=None, const='inductor',
help="Enable compilation w/ specified backend (default: inductor).")
# Device & distributed
group = parser.add_argument_group('Device parameters')
group.add_argument('--device', default='cuda', type=str,
help="Device (accelerator) to use.")
group.add_argument('--amp', action='store_true', default=False,
help='use NVIDIA Apex AMP or Native AMP for mixed precision training')
group.add_argument('--amp-dtype', default='float16', type=str,
help='lower precision AMP dtype (default: float16)')
group.add_argument('--amp-impl', default='native', type=str,
help='AMP impl to use, "native" or "apex" (default: native)')
group.add_argument('--model-dtype', default=None, type=str,
help='Model dtype override (non-AMP) (default: float32)')
group.add_argument('--no-ddp-bb', action='store_true', default=False,
help='Force broadcast buffers for native DDP to off.')
group.add_argument('--synchronize-step', action='store_true', default=False,
help='torch.cuda.synchronize() end of each step')
group.add_argument("--local_rank", default=0, type=int)
group.add_argument('--device-modules', default=None, type=str, nargs='+',
help="Python imports for device backend modules.")
# Optimizer parameters
group = parser.add_argument_group('Optimizer parameters')
group.add_argument('--opt', default='sgd', type=str, metavar='OPTIMIZER',
help='Optimizer (default: "sgd")')
group.add_argument('--opt-eps', default=None, type=float, metavar='EPSILON',
help='Optimizer Epsilon (default: None, use opt default)')
group.add_argument('--opt-betas', default=None, type=float, nargs='+', metavar='BETA',
help='Optimizer Betas (default: None, use opt default)')
group.add_argument('--momentum', type=float, default=0.9, metavar='M',
help='Optimizer momentum (default: 0.9)')
group.add_argument('--weight-decay', type=float, default=2e-5,
help='weight decay (default: 2e-5)')
group.add_argument('--clip-grad', type=float, default=None, metavar='NORM',
help='Clip gradient norm (default: None, no clipping)')
group.add_argument('--clip-mode', type=str, default='norm',
help='Gradient clipping mode. One of ("norm", "value", "agc")')
group.add_argument('--layer-decay', type=float, default=None,
help='layer-wise learning rate decay (default: None)')
group.add_argument('--opt-kwargs', nargs='*', default={}, action=utils.ParseKwargs)
# Learning rate schedule parameters
group = parser.add_argument_group('Learning rate schedule parameters')
group.add_argument('--sched', type=str, default='cosine', metavar='SCHEDULER',
help='LR scheduler (default: "cosine"')
group.add_argument('--sched-on-updates', action='store_true', default=False,
help='Apply LR scheduler step on update instead of epoch end.')
group.add_argument('--lr', type=float, default=None, metavar='LR',
help='learning rate, overrides lr-base if set (default: None)')
group.add_argument('--lr-base', type=float, default=0.1, metavar='LR',
help='base learning rate: lr = lr_base * global_batch_size / base_size')
group.add_argument('--lr-base-size', type=int, default=256, metavar='DIV',
help='base learning rate batch size (divisor, default: 256).')
group.add_argument('--lr-base-scale', type=str, default='', metavar='SCALE',
help='base learning rate vs batch_size scaling ("linear", "sqrt", based on opt if empty)')
group.add_argument('--lr-noise', type=float, nargs='+', default=None, metavar='pct, pct',
help='learning rate noise on/off epoch percentages')
group.add_argument('--lr-noise-pct', type=float, default=0.67, metavar='PERCENT',
help='learning rate noise limit percent (default: 0.67)')
group.add_argument('--lr-noise-std', type=float, default=1.0, metavar='STDDEV',
help='learning rate noise std-dev (default: 1.0)')
group.add_argument('--lr-cycle-mul', type=float, default=1.0, metavar='MULT',
help='learning rate cycle len multiplier (default: 1.0)')
group.add_argument('--lr-cycle-decay', type=float, default=0.5, metavar='MULT',
help='amount to decay each learning rate cycle (default: 0.5)')
group.add_argument('--lr-cycle-limit', type=int, default=1, metavar='N',
help='learning rate cycle limit, cycles enabled if > 1')
group.add_argument('--lr-k-decay', type=float, default=1.0,
help='learning rate k-decay for cosine/poly (default: 1.0)')
group.add_argument('--warmup-lr', type=float, default=1e-5, metavar='LR',
help='warmup learning rate (default: 1e-5)')
group.add_argument('--min-lr', type=float, default=0, metavar='LR',
help='lower lr bound for cyclic schedulers that hit 0 (default: 0)')
group.add_argument('--epochs', type=int, default=300, metavar='N',
help='number of epochs to train (default: 300)')
group.add_argument('--epoch-repeats', type=float, default=0., metavar='N',
help='epoch repeat multiplier (number of times to repeat dataset epoch per train epoch).')
group.add_argument('--start-epoch', default=None, type=int, metavar='N',
help='manual epoch number (useful on restarts)')
group.add_argument('--decay-milestones', default=[90, 180, 270], type=int, nargs='+', metavar="MILESTONES",
help='list of decay epoch indices for multistep lr. must be increasing')
group.add_argument('--decay-epochs', type=float, default=90, metavar='N',
help='epoch interval to decay LR')
group.add_argument('--warmup-epochs', type=int, default=5, metavar='N',
help='epochs to warmup LR, if scheduler supports')
group.add_argument('--warmup-prefix', action='store_true', default=False,
                   help='Exclude warmup period from decay schedule.')
group.add_argument('--cooldown-epochs', type=int, default=0, metavar='N',
help='epochs to cooldown LR at min_lr, after cyclic schedule ends')
group.add_argument('--patience-epochs', type=int, default=10, metavar='N',
help='patience epochs for Plateau LR scheduler (default: 10)')
group.add_argument('--decay-rate', '--dr', type=float, default=0.1, metavar='RATE',
help='LR decay rate (default: 0.1)')
# Augmentation & regularization parameters
group = parser.add_argument_group('Augmentation and regularization parameters')
group.add_argument('--no-aug', action='store_true', default=False,
help='Disable all training augmentation, override other train aug args')
group.add_argument('--train-crop-mode', type=str, default=None,
                   help='Crop mode used for training augmentation (data config default if None)')
group.add_argument('--scale', type=float, nargs='+', default=[0.08, 1.0], metavar='PCT',
help='Random resize scale (default: 0.08 1.0)')
group.add_argument('--ratio', type=float, nargs='+', default=[3. / 4., 4. / 3.], metavar='RATIO',
help='Random resize aspect ratio (default: 0.75 1.33)')
group.add_argument('--hflip', type=float, default=0.5,
help='Horizontal flip training aug probability')
group.add_argument('--vflip', type=float, default=0.,
help='Vertical flip training aug probability')
group.add_argument('--color-jitter', type=float, default=0.4, metavar='PCT',
help='Color jitter factor (default: 0.4)')
group.add_argument('--color-jitter-prob', type=float, default=None, metavar='PCT',
help='Probability of applying any color jitter.')
group.add_argument('--grayscale-prob', type=float, default=None, metavar='PCT',
help='Probability of applying random grayscale conversion.')
group.add_argument('--gaussian-blur-prob', type=float, default=None, metavar='PCT',
help='Probability of applying gaussian blur.')
group.add_argument('--aa', type=str, default=None, metavar='NAME',
                   help='Use AutoAugment policy. "v0" or "original". (default: None)')
group.add_argument('--aug-repeats', type=float, default=0,
help='Number of augmentation repetitions (distributed training only) (default: 0)')
group.add_argument('--aug-splits', type=int, default=0,
help='Number of augmentation splits (default: 0, valid: 0 or >=2)')
group.add_argument('--jsd-loss', action='store_true', default=False,
help='Enable Jensen-Shannon Divergence + CE loss. Use with `--aug-splits`.')
group.add_argument('--bce-loss', action='store_true', default=False,
help='Enable BCE loss w/ Mixup/CutMix use.')
group.add_argument('--bce-sum', action='store_true', default=False,
help='Sum over classes when using BCE loss.')
group.add_argument('--bce-target-thresh', type=float, default=None,
help='Threshold for binarizing softened BCE targets (default: None, disabled).')
group.add_argument('--bce-pos-weight', type=float, default=None,
help='Positive weighting for BCE loss.')
group.add_argument('--reprob', type=float, default=0., metavar='PCT',
help='Random erase prob (default: 0.)')
group.add_argument('--remode', type=str, default='pixel',
help='Random erase mode (default: "pixel")')
group.add_argument('--recount', type=int, default=1,
help='Random erase count (default: 1)')
group.add_argument('--resplit', action='store_true', default=False,
help='Do not random erase first (clean) augmentation split')
group.add_argument('--mixup', type=float, default=0.0,
help='mixup alpha, mixup enabled if > 0. (default: 0.)')
group.add_argument('--cutmix', type=float, default=0.0,
help='cutmix alpha, cutmix enabled if > 0. (default: 0.)')
group.add_argument('--cutmix-minmax', type=float, nargs='+', default=None,
help='cutmix min/max ratio, overrides alpha and enables cutmix if set (default: None)')
group.add_argument('--mixup-prob', type=float, default=1.0,
help='Probability of performing mixup or cutmix when either/both is enabled')
group.add_argument('--mixup-switch-prob', type=float, default=0.5,
help='Probability of switching to cutmix when both mixup and cutmix enabled')
group.add_argument('--mixup-mode', type=str, default='batch',
help='How to apply mixup/cutmix params. Per "batch", "pair", or "elem"')
group.add_argument('--mixup-off-epoch', default=0, type=int, metavar='N',
help='Turn off mixup after this epoch, disabled if 0 (default: 0)')
group.add_argument('--smoothing', type=float, default=0.1,
help='Label smoothing (default: 0.1)')
group.add_argument('--train-interpolation', type=str, default='random',
help='Training interpolation (random, bilinear, bicubic default: "random")')
group.add_argument('--drop', type=float, default=0.0, metavar='PCT',
help='Dropout rate (default: 0.)')
group.add_argument('--drop-connect', type=float, default=None, metavar='PCT',
help='Drop connect rate, DEPRECATED, use drop-path (default: None)')
group.add_argument('--drop-path', type=float, default=None, metavar='PCT',
help='Drop path rate (default: None)')
group.add_argument('--drop-block', type=float, default=None, metavar='PCT',
help='Drop block rate (default: None)')
# Batch norm parameters (only works with gen_efficientnet based models currently)
group = parser.add_argument_group('Batch norm parameters', 'Only works with gen_efficientnet based models currently.')
group.add_argument('--bn-momentum', type=float, default=None,
help='BatchNorm momentum override (if not None)')
group.add_argument('--bn-eps', type=float, default=None,
help='BatchNorm epsilon override (if not None)')
group.add_argument('--sync-bn', action='store_true',
help='Enable NVIDIA Apex or Torch synchronized BatchNorm.')
group.add_argument('--dist-bn', type=str, default='reduce',
help='Distribute BatchNorm stats between nodes after each epoch ("broadcast", "reduce", or "")')
group.add_argument('--split-bn', action='store_true',
help='Enable separate BN layers per augmentation split.')
# Model Exponential Moving Average
group = parser.add_argument_group('Model exponential moving average parameters')
group.add_argument('--model-ema', action='store_true', default=False,
help='Enable tracking moving average of model weights.')
group.add_argument('--model-ema-force-cpu', action='store_true', default=False,
help='Force ema to be tracked on CPU, rank=0 node only. Disables EMA validation.')
group.add_argument('--model-ema-decay', type=float, default=0.9998,
help='Decay factor for model weights moving average (default: 0.9998)')
group.add_argument('--model-ema-warmup', action='store_true',
help='Enable warmup for model EMA decay.')
# Misc
group = parser.add_argument_group('Miscellaneous parameters')
group.add_argument('--seed', type=int, default=42, metavar='S',
help='random seed (default: 42)')
group.add_argument('--worker-seeding', type=str, default='all',
help='worker seed mode (default: all)')
group.add_argument('--log-interval', type=int, default=50, metavar='N',
help='how many batches to wait before logging training status')
group.add_argument('--recovery-interval', type=int, default=0, metavar='N',
help='how many batches to wait before writing recovery checkpoint')
group.add_argument('--checkpoint-hist', type=int, default=10, metavar='N',
help='number of checkpoints to keep (default: 10)')
group.add_argument('-j', '--workers', type=int, default=4, metavar='N',
help='how many training processes to use (default: 4)')
group.add_argument('--save-images', action='store_true', default=False,
help='save images of input batches every log interval for debugging')
group.add_argument('--pin-mem', action='store_true', default=False,
help='Pin CPU memory in DataLoader for more efficient (sometimes) transfer to GPU.')
group.add_argument('--no-prefetcher', action='store_true', default=False,
help='disable fast prefetcher')
group.add_argument('--output', default='', type=str, metavar='PATH',
help='path to output folder (default: none, current dir)')
group.add_argument('--experiment', default='', type=str, metavar='NAME',
help='name of train experiment, name of sub-folder for output')
group.add_argument('--eval-metric', default='top1', type=str, metavar='EVAL_METRIC',
help='Best metric (default: "top1"')
group.add_argument('--tta', type=int, default=0, metavar='N',
help='Test/inference time augmentation (oversampling) factor. 0=None (default: 0)')
group.add_argument('--use-multi-epochs-loader', action='store_true', default=False,
help='use the multi-epochs-loader to save time at the beginning of every epoch')
group.add_argument('--log-wandb', action='store_true', default=False,
help='log training and validation metrics to wandb')
group.add_argument('--wandb-project', default=None, type=str,
help='wandb project name')
group.add_argument('--wandb-tags', default=[], type=str, nargs='+',
help='wandb tags')
group.add_argument('--wandb-resume-id', default='', type=str, metavar='ID',
help='If resuming a run, the id of the run in wandb')
def _parse_args():
# Do we have a config file to parse?
args_config, remaining = config_parser.parse_known_args()
if args_config.config:
with open(args_config.config, 'r') as f:
cfg = yaml.safe_load(f)
parser.set_defaults(**cfg)
# The main arg parser parses the rest of the args, the usual
# defaults will have been overridden if config file specified.
args = parser.parse_args(remaining)
# Cache the args as a text string to save them in the output dir later
args_text = yaml.safe_dump(args.__dict__, default_flow_style=False)
return args, args_text
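# Example (hedged) YAML consumed via `-c cfg.yaml`; keys are the argparse dest names
# defined above and override the parser defaults (values below are assumptions):
#
#   model: resnet50
#   batch_size: 256
#   opt: adamw
#   lr: 1.0e-3
#   sched: cosine
#   epochs: 100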
def main():
utils.setup_default_logging()
args, args_text = _parse_args()
if args.device_modules:
for module in args.device_modules:
importlib.import_module(module)
if torch.cuda.is_available():
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = True
args.prefetcher = not args.no_prefetcher
args.grad_accum_steps = max(1, args.grad_accum_steps)
device = utils.init_distributed_device(args)
if args.distributed:
        _logger.info(
            'Training in distributed mode with multiple processes, 1 device per process. '
            f'Process {args.rank}, total {args.world_size}, device {args.device}.')
else:
_logger.info(f'Training with a single process on 1 device ({args.device}).')
assert args.rank >= 0
model_dtype = None
if args.model_dtype:
assert args.model_dtype in ('float32', 'float16', 'bfloat16')
model_dtype = getattr(torch, args.model_dtype)
if model_dtype == torch.float16:
            _logger.warning('float16 is not recommended for training; bfloat16 is the recommended half-precision dtype.')
# resolve AMP arguments based on PyTorch / Apex availability
use_amp = None
amp_dtype = torch.float16
if args.amp:
assert model_dtype is None or model_dtype == torch.float32, 'float32 model dtype must be used with AMP'
if args.amp_impl == 'apex':
assert has_apex, 'AMP impl specified as APEX but APEX is not installed.'
use_amp = 'apex'
assert args.amp_dtype == 'float16'
else:
use_amp = 'native'
assert args.amp_dtype in ('float16', 'bfloat16')
if args.amp_dtype == 'bfloat16':
amp_dtype = torch.bfloat16
utils.random_seed(args.seed, args.rank)
if args.fuser:
utils.set_jit_fuser(args.fuser)
if args.fast_norm:
set_fast_norm()
in_chans = 3
if args.in_chans is not None:
in_chans = args.in_chans
elif args.input_size is not None:
in_chans = args.input_size[0]
factory_kwargs = {}
if args.pretrained_path:
# merge with pretrained_cfg of model, 'file' has priority over 'url' and 'hf_hub'.
factory_kwargs['pretrained_cfg_overlay'] = dict(
file=args.pretrained_path,
num_classes=-1, # force head adaptation
)
model = create_model(
args.model,
pretrained=args.pretrained,
in_chans=in_chans,
num_classes=args.num_classes,
drop_rate=args.drop,
drop_path_rate=args.drop_path,
drop_block_rate=args.drop_block,
global_pool=args.gp,
bn_momentum=args.bn_momentum,
bn_eps=args.bn_eps,
scriptable=args.torchscript,
checkpoint_path=args.initial_checkpoint,
**factory_kwargs,
**args.model_kwargs,
)
if args.head_init_scale is not None:
with torch.no_grad():
model.get_classifier().weight.mul_(args.head_init_scale)
model.get_classifier().bias.mul_(args.head_init_scale)
if args.head_init_bias is not None:
nn.init.constant_(model.get_classifier().bias, args.head_init_bias)
if args.num_classes is None:
assert hasattr(model, 'num_classes'), 'Model must have `num_classes` attr if not set on cmd line/config.'
args.num_classes = model.num_classes # FIXME handle model default vs config num_classes more elegantly
if args.grad_checkpointing:
model.set_grad_checkpointing(enable=True)
if utils.is_primary(args):
_logger.info(
f'Model {safe_model_name(args.model)} created, param count:{sum([m.numel() for m in model.parameters()])}')
data_config = resolve_data_config(vars(args), model=model, verbose=utils.is_primary(args))
# setup augmentation batch splits for contrastive loss or split bn
num_aug_splits = 0
if args.aug_splits > 0:
assert args.aug_splits > 1, 'A split of 1 makes no sense'
num_aug_splits = args.aug_splits
# enable split bn (separate bn stats per batch-portion)
if args.split_bn:
assert num_aug_splits > 1 or args.resplit
model = convert_splitbn_model(model, max(num_aug_splits, 2))
# move model to GPU, enable channels last layout if set
model.to(device=device, dtype=model_dtype) # FIXME move model device & dtype into create_model
if args.channels_last:
model.to(memory_format=torch.channels_last)
# setup synchronized BatchNorm for distributed training
if args.distributed and args.sync_bn:
args.dist_bn = '' # disable dist_bn when sync BN active
assert not args.split_bn
if has_apex and use_amp == 'apex':
# Apex SyncBN used with Apex AMP
# WARNING this won't currently work with models using BatchNormAct2d
model = convert_syncbn_model(model)
else:
model = convert_sync_batchnorm(model)
if utils.is_primary(args):
_logger.info(
'Converted model to use Synchronized BatchNorm. WARNING: You may have issues if using '
'zero initialized BN layers (enabled by default for ResNets) while sync-bn enabled.')
if args.torchscript:
assert not args.torchcompile
assert not use_amp == 'apex', 'Cannot use APEX AMP with torchscripted model'
assert not args.sync_bn, 'Cannot use SyncBatchNorm with torchscripted model'
model = torch.jit.script(model)
if not args.lr:
global_batch_size = args.batch_size * args.world_size * args.grad_accum_steps
batch_ratio = global_batch_size / args.lr_base_size
if not args.lr_base_scale:
on = args.opt.lower()
args.lr_base_scale = 'sqrt' if any([o in on for o in ('ada', 'lamb')]) else 'linear'
if args.lr_base_scale == 'sqrt':
batch_ratio = batch_ratio ** 0.5
args.lr = args.lr_base * batch_ratio
if utils.is_primary(args):
_logger.info(
f'Learning rate ({args.lr}) calculated from base learning rate ({args.lr_base}) '
f'and effective global batch size ({global_batch_size}) with {args.lr_base_scale} scaling.')
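        # Worked example (illustration only): batch_size=128 on world_size=8 with
        # grad_accum_steps=1 gives a global batch of 1024; with lr_base=0.1 and
        # lr_base_size=256 the batch ratio is 4, so 'linear' scaling yields
        # lr = 0.1 * 4 = 0.4 and 'sqrt' scaling yields lr = 0.1 * sqrt(4) = 0.2.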
optimizer = create_optimizer_v2(
model,
**optimizer_kwargs(cfg=args),
**args.opt_kwargs,
)
if utils.is_primary(args):
defaults = copy.deepcopy(optimizer.defaults)
defaults['weight_decay'] = args.weight_decay # this isn't stored in optimizer.defaults
defaults = ', '.join([f'{k}: {v}' for k, v in defaults.items()])
logging.info(
f'Created {type(optimizer).__name__} ({args.opt}) optimizer: {defaults}'
)
# setup automatic mixed-precision (AMP) loss scaling and op casting
amp_autocast = suppress # do nothing
loss_scaler = None
if use_amp == 'apex':
assert device.type == 'cuda'
model, optimizer = amp.initialize(model, optimizer, opt_level='O1')
loss_scaler = ApexScaler()
if utils.is_primary(args):
_logger.info('Using NVIDIA APEX AMP. Training in mixed precision.')
elif use_amp == 'native':
amp_autocast = partial(torch.autocast, device_type=device.type, dtype=amp_dtype)
if device.type in ('cuda',) and amp_dtype == torch.float16:
# loss scaler only used for float16 (half) dtype, bfloat16 does not need it
loss_scaler = NativeScaler(device=device.type)
if utils.is_primary(args):
_logger.info('Using native Torch AMP. Training in mixed precision.')
else:
if utils.is_primary(args):
_logger.info(f'AMP not enabled. Training in {model_dtype or torch.float32}.')
# optionally resume from a checkpoint
resume_epoch = None
if args.resume:
resume_epoch = resume_checkpoint(
model,
args.resume,
optimizer=None if args.no_resume_opt else optimizer,
loss_scaler=None if args.no_resume_opt else loss_scaler,
log_info=utils.is_primary(args),
)
# setup exponential moving average of model weights, SWA could be used here too
model_ema = None
if args.model_ema:
# Important to create EMA model after cuda(), DP wrapper, and AMP but before DDP wrapper
model_ema = utils.ModelEmaV3(
model,
decay=args.model_ema_decay,
use_warmup=args.model_ema_warmup,
device='cpu' if args.model_ema_force_cpu else None,
)
if args.resume:
load_checkpoint(model_ema.module, args.resume, use_ema=True)
if args.torchcompile:
model_ema = torch.compile(model_ema, backend=args.torchcompile)
# setup distributed training
if args.distributed:
if has_apex and use_amp == 'apex':
# Apex DDP preferred unless native amp is activated
if utils.is_primary(args):
_logger.info("Using NVIDIA APEX DistributedDataParallel.")
model = ApexDDP(model, delay_allreduce=True)
else:
if utils.is_primary(args):
_logger.info("Using native Torch DistributedDataParallel.")
model = NativeDDP(model, device_ids=[device], broadcast_buffers=not args.no_ddp_bb)
# NOTE: EMA model does not need to be wrapped by DDP
if args.torchcompile:
# torch compile should be done after DDP
assert has_compile, 'A version of torch w/ torch.compile() is required for --compile, possibly a nightly.'
model = torch.compile(model, backend=args.torchcompile, mode=args.torchcompile_mode)
# create the train and eval datasets
if args.data and not args.data_dir:
args.data_dir = args.data
if args.input_img_mode is None:
input_img_mode = 'RGB' if data_config['input_size'][0] == 3 else 'L'
else:
input_img_mode = args.input_img_mode
dataset_train = create_dataset(
args.dataset,
root=args.data_dir,
split=args.train_split,
is_training=True,
class_map=args.class_map,
download=args.dataset_download,
batch_size=args.batch_size,
seed=args.seed,
repeats=args.epoch_repeats,
input_img_mode=input_img_mode,
input_key=args.input_key,
target_key=args.target_key,
num_samples=args.train_num_samples,
trust_remote_code=args.dataset_trust_remote_code,
)
if args.val_split:
dataset_eval = create_dataset(
args.dataset,
root=args.data_dir,
split=args.val_split,
is_training=False,
class_map=args.class_map,
download=args.dataset_download,
batch_size=args.batch_size,
input_img_mode=input_img_mode,
input_key=args.input_key,
target_key=args.target_key,
num_samples=args.val_num_samples,
trust_remote_code=args.dataset_trust_remote_code,
)
# setup mixup / cutmix
collate_fn = None
mixup_fn = None
mixup_active = args.mixup > 0 or args.cutmix > 0. or args.cutmix_minmax is not None
if mixup_active:
mixup_args = dict(
mixup_alpha=args.mixup,
cutmix_alpha=args.cutmix,
cutmix_minmax=args.cutmix_minmax,
prob=args.mixup_prob,
switch_prob=args.mixup_switch_prob,
mode=args.mixup_mode,
label_smoothing=args.smoothing,
num_classes=args.num_classes
)
if args.prefetcher:
assert not num_aug_splits # collate conflict (need to support de-interleaving in collate mixup)
collate_fn = FastCollateMixup(**mixup_args)
else:
mixup_fn = Mixup(**mixup_args)
# wrap dataset in AugMix helper
if num_aug_splits > 1:
dataset_train = AugMixDataset(dataset_train, num_splits=num_aug_splits)
# create data loaders w/ augmentation pipeline
train_interpolation = args.train_interpolation
if args.no_aug or not train_interpolation:
train_interpolation = data_config['interpolation']
loader_train = create_loader(
dataset_train,
input_size=data_config['input_size'],
batch_size=args.batch_size,
is_training=True,
no_aug=args.no_aug,
re_prob=args.reprob,
re_mode=args.remode,
re_count=args.recount,
re_split=args.resplit,
train_crop_mode=args.train_crop_mode,
scale=args.scale,
ratio=args.ratio,
hflip=args.hflip,
vflip=args.vflip,
color_jitter=args.color_jitter,
color_jitter_prob=args.color_jitter_prob,
grayscale_prob=args.grayscale_prob,
gaussian_blur_prob=args.gaussian_blur_prob,
auto_augment=args.aa,
num_aug_repeats=args.aug_repeats,
num_aug_splits=num_aug_splits,
interpolation=train_interpolation,
mean=data_config['mean'],
std=data_config['std'],
num_workers=args.workers,
distributed=args.distributed,
collate_fn=collate_fn,
pin_memory=args.pin_mem,
img_dtype=model_dtype or torch.float32,
device=device,
use_prefetcher=args.prefetcher,
use_multi_epochs_loader=args.use_multi_epochs_loader,
worker_seeding=args.worker_seeding,
)
loader_eval = None
if args.val_split:
eval_workers = args.workers
if args.distributed and ('tfds' in args.dataset or 'wds' in args.dataset):
# FIXME reduces validation padding issues when using TFDS, WDS w/ workers and distributed training
eval_workers = min(2, args.workers)
loader_eval = create_loader(
dataset_eval,
input_size=data_config['input_size'],
batch_size=args.validation_batch_size or args.batch_size,
is_training=False,
interpolation=data_config['interpolation'],
mean=data_config['mean'],
std=data_config['std'],
num_workers=eval_workers,
distributed=args.distributed,
crop_pct=data_config['crop_pct'],
pin_memory=args.pin_mem,
img_dtype=model_dtype or torch.float32,
device=device,
use_prefetcher=args.prefetcher,
)
# setup loss function
if args.jsd_loss:
assert num_aug_splits > 1 # JSD only valid with aug splits set
train_loss_fn = JsdCrossEntropy(num_splits=num_aug_splits, smoothing=args.smoothing)
elif mixup_active:
# smoothing is handled with mixup target transform which outputs sparse, soft targets
if args.bce_loss:
train_loss_fn = BinaryCrossEntropy(
target_threshold=args.bce_target_thresh,
sum_classes=args.bce_sum,
pos_weight=args.bce_pos_weight,
)
else:
train_loss_fn = SoftTargetCrossEntropy()
elif args.smoothing:
if args.bce_loss:
train_loss_fn = BinaryCrossEntropy(
smoothing=args.smoothing,
target_threshold=args.bce_target_thresh,
sum_classes=args.bce_sum,
pos_weight=args.bce_pos_weight,
)
else:
train_loss_fn = LabelSmoothingCrossEntropy(smoothing=args.smoothing)
else:
train_loss_fn = nn.CrossEntropyLoss()
train_loss_fn = train_loss_fn.to(device=device)
validate_loss_fn = nn.CrossEntropyLoss().to(device=device)
# setup checkpoint saver and eval metric tracking
eval_metric = args.eval_metric if loader_eval is not None else 'loss'
decreasing_metric = eval_metric == 'loss'
best_metric = None
best_epoch = None
saver = None
output_dir = None
if utils.is_primary(args):
if args.experiment:
exp_name = args.experiment
else:
exp_name = '-'.join([
datetime.now().strftime("%Y%m%d-%H%M%S"),
safe_model_name(args.model),
str(data_config['input_size'][-1])
])
output_dir = utils.get_outdir(args.output if args.output else './output/train', exp_name)
saver = utils.CheckpointSaver(
model=model,
optimizer=optimizer,
args=args,
model_ema=model_ema,
amp_scaler=loss_scaler,
checkpoint_dir=output_dir,
recovery_dir=output_dir,
decreasing=decreasing_metric,
max_history=args.checkpoint_hist
)
with open(os.path.join(output_dir, 'args.yaml'), 'w') as f:
f.write(args_text)
if args.log_wandb:
if has_wandb:
assert not args.wandb_resume_id or args.resume
wandb.init(
project=args.wandb_project,
name=exp_name,
config=args,
tags=args.wandb_tags,
resume="must" if args.wandb_resume_id else None,
id=args.wandb_resume_id if args.wandb_resume_id else None,
)
else:
_logger.warning(
"You've requested to log metrics to wandb but package not found. "
"Metrics not being logged to wandb, try `pip install wandb`")
# setup learning rate schedule and starting epoch
updates_per_epoch = (len(loader_train) + args.grad_accum_steps - 1) // args.grad_accum_steps
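    # Ceil division, e.g. 1003 batches with grad_accum_steps=4 -> (1003 + 3) // 4 = 251 updates per epoch.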
lr_scheduler, num_epochs = create_scheduler_v2(
optimizer,
**scheduler_kwargs(args, decreasing_metric=decreasing_metric),
updates_per_epoch=updates_per_epoch,
)
start_epoch = 0
if args.start_epoch is not None:
# a specified start_epoch will always override the resume epoch
start_epoch = args.start_epoch
elif resume_epoch is not None:
start_epoch = resume_epoch
if lr_scheduler is not None and start_epoch > 0:
if args.sched_on_updates:
lr_scheduler.step_update(start_epoch * updates_per_epoch)
else:
lr_scheduler.step(start_epoch)
if utils.is_primary(args):
if args.warmup_prefix:
sched_explain = '(warmup_epochs + epochs + cooldown_epochs). Warmup added to total when warmup_prefix=True'
else:
sched_explain = '(epochs + cooldown_epochs). Warmup within epochs when warmup_prefix=False'
_logger.info(
f'Scheduled epochs: {num_epochs} {sched_explain}. '
f'LR stepped per {"epoch" if lr_scheduler.t_in_epochs else "update"}.')
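        # Illustration (assuming the formulas logged above): epochs=300, warmup_epochs=5,
        # cooldown_epochs=10 -> 315 scheduled epochs with warmup_prefix=True, 310 otherwise.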
results = []
try:
for epoch in range(start_epoch, num_epochs):
if hasattr(dataset_train, 'set_epoch'):
dataset_train.set_epoch(epoch)
elif args.distributed and hasattr(loader_train.sampler, 'set_epoch'):
loader_train.sampler.set_epoch(epoch)
train_metrics = train_one_epoch(
epoch,
model,
loader_train,
optimizer,
train_loss_fn,
                args,
                device=device,
                lr_scheduler=lr_scheduler,
saver=saver,
output_dir=output_dir,
amp_autocast=amp_autocast,
loss_scaler=loss_scaler,
model_dtype=model_dtype,
model_ema=model_ema,
mixup_fn=mixup_fn,
num_updates_total=num_epochs * updates_per_epoch,
)
if args.distributed and args.dist_bn in ('broadcast', 'reduce'):
if utils.is_primary(args):
_logger.info("Distributing BatchNorm running means and vars")
utils.distribute_bn(model, args.world_size, args.dist_bn == 'reduce')
if loader_eval is not None:
eval_metrics = validate(
model,
loader_eval,
validate_loss_fn,
args,
device=device,
amp_autocast=amp_autocast,
model_dtype=model_dtype,
)
if model_ema is not None and not args.model_ema_force_cpu:
if args.distributed and args.dist_bn in ('broadcast', 'reduce'):
utils.distribute_bn(model_ema, args.world_size, args.dist_bn == 'reduce')
ema_eval_metrics = validate(
model_ema,
loader_eval,
validate_loss_fn,
args,
device=device,
amp_autocast=amp_autocast,
log_suffix=' (EMA)',
)
eval_metrics = ema_eval_metrics
else:
eval_metrics = None
if output_dir is not None:
lrs = [param_group['lr'] for param_group in optimizer.param_groups]
utils.update_summary(
epoch,
train_metrics,
eval_metrics,
filename=os.path.join(output_dir, 'summary.csv'),
lr=sum(lrs) / len(lrs),
write_header=best_metric is None,
log_wandb=args.log_wandb and has_wandb,
)
if eval_metrics is not None:
latest_metric = eval_metrics[eval_metric]
else:
latest_metric = train_metrics[eval_metric]
if saver is not None:
# save proper checkpoint with eval metric
best_metric, best_epoch = saver.save_checkpoint(epoch, metric=latest_metric)
if lr_scheduler is not None:
# step LR for next epoch
lr_scheduler.step(epoch + 1, latest_metric)
latest_results = {
'epoch': epoch,
'train': train_metrics,
}
if eval_metrics is not None:
latest_results['validation'] = eval_metrics
results.append(latest_results)
except KeyboardInterrupt:
pass
if best_metric is not None:
# log best metric as tracked by checkpoint saver
_logger.info('*** Best metric: {0} (epoch {1})'.format(best_metric, best_epoch))
if utils.is_primary(args):
# for parsable results display, dump top-10 summaries to avoid excess console spam
display_results = sorted(
results,
key=lambda x: x.get('validation', x.get('train')).get(eval_metric, 0),
reverse=decreasing_metric,
)
print(f'--result\n{json.dumps(display_results[-10:], indent=4)}')
def train_one_epoch(
epoch,
model,
loader,
optimizer,
loss_fn,
args,
device=torch.device('cuda'),
lr_scheduler=None,
saver=None,
output_dir=None,
amp_autocast=suppress,
loss_scaler=None,
model_dtype=None,
model_ema=None,
mixup_fn=None,
num_updates_total=None,
):
if args.mixup_off_epoch and epoch >= args.mixup_off_epoch:
if args.prefetcher and loader.mixup_enabled:
loader.mixup_enabled = False
elif mixup_fn is not None:
mixup_fn.mixup_enabled = False
second_order = hasattr(optimizer, 'is_second_order') and optimizer.is_second_order
has_no_sync = hasattr(model, "no_sync")
update_time_m = utils.AverageMeter()
data_time_m = utils.AverageMeter()
losses_m = utils.AverageMeter()
model.train()
accum_steps = args.grad_accum_steps
last_accum_steps = len(loader) % accum_steps
updates_per_epoch = (len(loader) + accum_steps - 1) // accum_steps
num_updates = epoch * updates_per_epoch
last_batch_idx = len(loader) - 1
last_batch_idx_to_accum = len(loader) - last_accum_steps
data_start_time = update_start_time = time.time()
optimizer.zero_grad()
update_sample_count = 0
for batch_idx, (input, target) in enumerate(loader):
last_batch = batch_idx == last_batch_idx
need_update = last_batch or (batch_idx + 1) % accum_steps == 0
update_idx = batch_idx // accum_steps
if batch_idx >= last_batch_idx_to_accum:
accum_steps = last_accum_steps
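            # e.g. with len(loader)=10 and grad_accum_steps=4, this final group holds
            # only 10 % 4 = 2 batches, so their losses are scaled by 2 instead of 4.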
if not args.prefetcher:
input, target = input.to(device=device, dtype=model_dtype), target.to(device=device)
if mixup_fn is not None:
input, target = mixup_fn(input, target)
if args.channels_last:
input = input.contiguous(memory_format=torch.channels_last)
# multiply by accum steps to get equivalent for full update
data_time_m.update(accum_steps * (time.time() - data_start_time))
def _forward():
with amp_autocast():
output = model(input)
loss = loss_fn(output, target)
if accum_steps > 1:
loss /= accum_steps
return loss
def _backward(_loss):
if loss_scaler is not None:
loss_scaler(
_loss,
optimizer,
clip_grad=args.clip_grad,
clip_mode=args.clip_mode,
parameters=model_parameters(model, exclude_head='agc' in args.clip_mode),
create_graph=second_order,
need_update=need_update,
)
else:
_loss.backward(create_graph=second_order)
if need_update:
if args.clip_grad is not None:
utils.dispatch_clip_grad(
model_parameters(model, exclude_head='agc' in args.clip_mode),
value=args.clip_grad,
mode=args.clip_mode,
)
optimizer.step()
if has_no_sync and not need_update:
with model.no_sync():
loss = _forward()
_backward(loss)
else:
loss = _forward()
_backward(loss)
losses_m.update(loss.item() * accum_steps, input.size(0))
update_sample_count += input.size(0)
if not need_update:
data_start_time = time.time()
continue
num_updates += 1
optimizer.zero_grad()
if model_ema is not None:
model_ema.update(model, step=num_updates)
if args.synchronize_step:
if device.type == 'cuda':
torch.cuda.synchronize()
elif device.type == 'npu':
torch.npu.synchronize()
time_now = time.time()
update_time_m.update(time.time() - update_start_time)
update_start_time = time_now
if update_idx % args.log_interval == 0:
lrl = [param_group['lr'] for param_group in optimizer.param_groups]
lr = sum(lrl) / len(lrl)
loss_avg, loss_now = losses_m.avg, losses_m.val
if args.distributed:
# synchronize current step and avg loss, each process keeps its own running avg
loss_avg = utils.reduce_tensor(loss.new([loss_avg]), args.world_size).item()
loss_now = utils.reduce_tensor(loss.new([loss_now]), args.world_size).item()
update_sample_count *= args.world_size
if utils.is_primary(args):
_logger.info(
f'Train: {epoch} [{update_idx:>4d}/{updates_per_epoch} '
f'({100. * (update_idx + 1) / updates_per_epoch:>3.0f}%)] '
f'Loss: {loss_now:#.3g} ({loss_avg:#.3g}) '
f'Time: {update_time_m.val:.3f}s, {update_sample_count / update_time_m.val:>7.2f}/s '
f'({update_time_m.avg:.3f}s, {update_sample_count / update_time_m.avg:>7.2f}/s) '
f'LR: {lr:.3e} '
f'Data: {data_time_m.val:.3f} ({data_time_m.avg:.3f})'
)
if args.save_images and output_dir:
torchvision.utils.save_image(
input,
os.path.join(output_dir, 'train-batch-%d.jpg' % batch_idx),
padding=0,
normalize=True
)
if saver is not None and args.recovery_interval and (
(update_idx + 1) % args.recovery_interval == 0):
saver.save_recovery(epoch, batch_idx=update_idx)
if lr_scheduler is not None:
lr_scheduler.step_update(num_updates=num_updates, metric=losses_m.avg)
update_sample_count = 0
data_start_time = time.time()
# end for
if hasattr(optimizer, 'sync_lookahead'):
optimizer.sync_lookahead()
loss_avg = losses_m.avg
if args.distributed:
# synchronize avg loss, each process keeps its own running avg
loss_avg = torch.tensor([loss_avg], device=device, dtype=torch.float32)
loss_avg = utils.reduce_tensor(loss_avg, args.world_size).item()
return OrderedDict([('loss', loss_avg)])
def validate(
model,
loader,
loss_fn,
args,
device=torch.device('cuda'),
amp_autocast=suppress,
model_dtype=None,
log_suffix=''
):
batch_time_m = utils.AverageMeter()
losses_m = utils.AverageMeter()
top1_m = utils.AverageMeter()
top5_m = utils.AverageMeter()
model.eval()
end = time.time()
last_idx = len(loader) - 1
with torch.no_grad():
for batch_idx, (input, target) in enumerate(loader):
last_batch = batch_idx == last_idx
if not args.prefetcher:
input = input.to(device=device, dtype=model_dtype)
target = target.to(device=device)
if args.channels_last:
input = input.contiguous(memory_format=torch.channels_last)
with amp_autocast():
output = model(input)
if isinstance(output, (tuple, list)):
output = output[0]
# augmentation reduction
reduce_factor = args.tta
if reduce_factor > 1:
output = output.unfold(0, reduce_factor, reduce_factor).mean(dim=2)
target = target[0:target.size(0):reduce_factor]
loss = loss_fn(output, target)
acc1, acc5 = utils.accuracy(output, target, topk=(1, 5))
if args.distributed:
reduced_loss = utils.reduce_tensor(loss.data, args.world_size)
acc1 = utils.reduce_tensor(acc1, args.world_size)
acc5 = utils.reduce_tensor(acc5, args.world_size)
else:
reduced_loss = loss.data
if device.type == 'cuda':
torch.cuda.synchronize()
elif device.type == "npu":
torch.npu.synchronize()
losses_m.update(reduced_loss.item(), input.size(0))
top1_m.update(acc1.item(), output.size(0))
top5_m.update(acc5.item(), output.size(0))
batch_time_m.update(time.time() - end)
end = time.time()
if utils.is_primary(args) and (last_batch or batch_idx % args.log_interval == 0):
log_name = 'Test' + log_suffix
_logger.info(
f'{log_name}: [{batch_idx:>4d}/{last_idx}] '
f'Time: {batch_time_m.val:.3f} ({batch_time_m.avg:.3f}) '
f'Loss: {losses_m.val:>7.3f} ({losses_m.avg:>6.3f}) '
f'Acc@1: {top1_m.val:>7.3f} ({top1_m.avg:>7.3f}) '
f'Acc@5: {top5_m.val:>7.3f} ({top5_m.avg:>7.3f})'
)
metrics = OrderedDict([('loss', losses_m.avg), ('top1', top1_m.avg), ('top5', top5_m.avg)])
return metrics
if __name__ == '__main__':
main()
| pytorch-image-models/train.py/0 | {
"file_path": "pytorch-image-models/train.py",
"repo_id": "pytorch-image-models",
"token_count": 26015
} |
- title: Get started
sections:
- local: index
title: 🤗 Agents
- local: guided_tour
title: Guided tour
- title: Tutorials
sections:
- local: tutorials/building_good_agents
title: ✨ Building good agents
- local: tutorials/inspect_runs
title: 📊 Inspect your agent runs using telemetry
- local: tutorials/tools
title: 🛠️ Tools - in-depth guide
- local: tutorials/secure_code_execution
title: 🛡️ Secure your code execution with E2B
- title: Conceptual guides
sections:
- local: conceptual_guides/intro_agents
title: 🤖 An introduction to agentic systems
- local: conceptual_guides/react
title: 🤔 How do Multi-step agents work?
- title: Examples
sections:
- local: examples/text_to_sql
title: Self-correcting Text-to-SQL
- local: examples/rag
    title: Master your knowledge base with agentic RAG
- local: examples/multiagents
title: Orchestrate a multi-agent system
- local: examples/web_browser
title: Build a web browser agent using vision models
- title: Reference
sections:
- local: reference/agents
title: Agent-related objects
- local: reference/models
title: Model-related objects
- local: reference/tools
title: Tool-related objects
| smolagents/docs/source/en/_toctree.yml/0 | {
"file_path": "smolagents/docs/source/en/_toctree.yml",
"repo_id": "smolagents",
"token_count": 395
} |
- title: Get started
sections:
- local: index
title: 🤗 Agents
- local: guided_tour
    title: Guided tour
- title: Tutorials
sections:
- local: tutorials/building_good_agents
    title: ✨ Building good agents
- local: tutorials/tools
    title: 🛠️ Tools - in-depth guide
- local: tutorials/secure_code_execution
    title: 🛡️ Secure your code execution with E2B
- title: Conceptual guides
sections:
- local: conceptual_guides/intro_agents
    title: 🤖 An introduction to agentic systems
- local: conceptual_guides/react
    title: 🤔 How do multi-step agents work?
- title: Examples
sections:
- local: examples/text_to_sql
title: Self-correcting Text-to-SQL
- local: examples/rag
    title: Master your knowledge base with agentic RAG
- local: examples/multiagents
title: Orchestrate a multi-agent system
- title: Reference
sections:
- local: reference/agents
title: Agent-related objects
- local: reference/tools
title: Tool-related objects
| smolagents/docs/source/zh/_toctree.yml/0 | {
"file_path": "smolagents/docs/source/zh/_toctree.yml",
"repo_id": "smolagents",
"token_count": 390
} |
<jupyter_start><jupyter_code>!pip install -e .. datasets sympy numpy matplotlib seaborn -q # Install dev version of smolagents + some packages<jupyter_output>[notice] A new release of pip is available: 23.2.1 -> 24.3.1
[notice] To update, run: pip install --upgrade pip<jupyter_text>Constants and utilities/tools<jupyter_code># Benchmark date
# - set a concrete date:
DATE = "2024-12-26"
# - or use default: today
# DATE = None
# Evaluation dataset
EVAL_DATASET = "smolagents-benchmark/benchmark-v1"
# Answers dataset: it must be a gated dataset; required to score the answers
ANSWERS_DATASET = "smolagents-benchmark/answers"
# Whether to push the answers dataset to the Hub
PUSH_ANSWERS_DATASET_TO_HUB = True
# Results dataset
RESULTS_DATASET = "smolagents-benchmark/results"
# Whether to push the results dataset to the Hub
PUSH_RESULTS_DATASET_TO_HUB = True
import datetime
import json
import os
import re
import string
import time
import warnings
from typing import List
import datasets
import pandas as pd  # used by answer_questions (pd.read_json) before the later import cell runs
from dotenv import load_dotenv
from tqdm import tqdm
from smolagents import (
AgentError,
CodeAgent,
GoogleSearchTool,
HfApiModel,
PythonInterpreterTool,
ToolCallingAgent,
VisitWebpageTool,
)
from smolagents.agents import ActionStep
load_dotenv()
os.makedirs("output", exist_ok=True)
def serialize_agent_error(obj):
if isinstance(obj, AgentError):
return {"error_type": obj.__class__.__name__, "message": obj.message}
else:
return str(obj)
def answer_questions(
eval_ds,
agent,
model_id,
action_type,
is_vanilla_llm=False,
date=DATE,
output_dir="output",
push_to_hub_dataset=ANSWERS_DATASET if PUSH_ANSWERS_DATASET_TO_HUB else None,
):
date = date or datetime.date.today().isoformat()
for task in eval_ds:
file_name = f"output/{model_id.replace('/', '__')}__{action_type}__{task}__{date}.jsonl"
answered_questions = []
if os.path.exists(file_name):
with open(file_name, "r") as f:
for line in f:
answered_questions.append(json.loads(line)["question"])
for _, example in tqdm(enumerate(eval_ds[task]), total=len(eval_ds[task])):
try:
question = example["question"]
if example["source"] == "SimpleQA":
question += " Answer with only the final number."
if example["source"] == "MATH":
question += " Write code, not latex."
if question in answered_questions:
continue
start_time = time.time()
if is_vanilla_llm:
llm = agent
answer = str(llm([{"role": "user", "content": question}]).content)
token_count = {
"input": llm.last_input_token_count,
"output": llm.last_output_token_count,
}
intermediate_steps = str([])
else:
answer = str(agent.run(question))
token_count = agent.monitor.get_total_token_counts()
intermediate_steps = str(agent.logs)
# Remove memory from logs to make them more compact.
for step in agent.logs:
if isinstance(step, ActionStep):
step.agent_memory = None
end_time = time.time()
annotated_example = {
"model_id": model_id,
"agent_action_type": action_type,
"question": question,
"answer": answer,
"true_answer": example["true_answer"],
"source": example["source"],
"intermediate_steps": intermediate_steps,
"start_time": start_time,
"end_time": end_time,
"token_counts": token_count,
}
with open(file_name, "a") as f:
json.dump(annotated_example, f, default=serialize_agent_error)
f.write("\n") # add a newline for JSONL format
except Exception as e:
print("Failed:", e)
if push_to_hub_dataset:
ds = datasets.Dataset.from_pandas(pd.read_json(file_name, lines=True), split="test", preserve_index=False)
config = f"{model_id.replace('/', '__')}__{action_type}__{task}"
data_dir = f"{model_id}/{action_type}/{task}/{date}"
ds.push_to_hub(
push_to_hub_dataset,
config_name=config,
data_dir=data_dir,
split="test",
commit_message=f"Upload {config}",
)
def normalize_number_str(number_str: str) -> float:
# we replace these common units and commas to allow
# conversion to float
for char in ["$", "%", ","]:
number_str = number_str.replace(char, "")
try:
return float(number_str)
except ValueError:
return float("inf")
def split_string(
s: str,
char_list: list[str] = [",", ";"],
) -> list[str]:
pattern = f"[{''.join(char_list)}]"
return re.split(pattern, s)
def is_float(element: any) -> bool:
try:
float(element)
return True
except ValueError:
return False
def normalize_str(input_str, remove_punct=True) -> str:
"""
Normalize a string by:
- Removing all white spaces
- Optionally removing punctuation (if remove_punct is True)
- Converting to lowercase
Parameters:
- input_str: str, the string to normalize
- remove_punct: bool, whether to remove punctuation (default: True)
Returns:
- str, the normalized string
"""
# Remove all white spaces. Required e.g for seagull vs. sea gull
no_spaces = re.sub(r"\s", "", input_str)
# Remove punctuation, if specified.
if remove_punct:
translator = str.maketrans("", "", string.punctuation)
return no_spaces.lower().translate(translator)
else:
return no_spaces.lower()
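# Illustrative check (not in the original notebook):
# normalize_str("Sea gull!") == normalize_str("seagull") == "seagull"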
def extract_numbers(text: str) -> List[str]:
"""This pattern matches:
- Optional negative sign
- Numbers with optional comma thousand separators
- Optional decimal points with decimal numbers
"""
pattern = r"-?(?:\d{1,3}(?:,\d{3})+|\d+)(?:\.\d+)?"
return [el.replace(",", "") for el in re.findall(pattern, text)]
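# Illustrative check (not in the original notebook):
# extract_numbers("Total: -1,234.5 and 42") == ["-1234.5", "42"]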
def get_question_score_gaia(
model_answer: str,
ground_truth: str,
) -> bool:
"""Scoring function used to score functions from the GAIA benchmark"""
if is_float(ground_truth):
normalized_answer = normalize_number_str(str(model_answer))
return normalized_answer == float(ground_truth)
elif any(char in ground_truth for char in [",", ";"]): # if gt is a list
# question with the fish: normalization removes punct
gt_elems = split_string(ground_truth)
ma_elems = split_string(model_answer)
if len(gt_elems) != len(ma_elems): # check length is the same
warnings.warn("Answer lists have different lengths, returning False.", UserWarning)
return False
comparisons = []
for ma_elem, gt_elem in zip(ma_elems, gt_elems): # compare each element as float or str
if is_float(gt_elem):
normalized_ma_elem = normalize_number_str(ma_elem)
comparisons.append(normalized_ma_elem == float(gt_elem))
else:
# we do not remove punct since comparisons can include punct
comparisons.append(
normalize_str(ma_elem, remove_punct=False) == normalize_str(gt_elem, remove_punct=False)
)
return all(comparisons)
else: # if gt is a str
return normalize_str(model_answer) == normalize_str(ground_truth)
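# Illustrative check (not in the original notebook): get_question_score_gaia("3;4", "3, 4")
# splits both sides on ","/";" and compares the elements as floats -> True.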
def get_correct(row):
if row["source"] == "MATH": # Checks the last number in answer
numbers_answer = extract_numbers(str(row["answer"]))
if len(numbers_answer) == 0:
return False
return float(numbers_answer[-1]) == float(row["true_answer"])
else:
return get_question_score_gaia(str(row["answer"]), str(row["true_answer"]))
def score_answers(
answers_subsets,
answers_dataset=ANSWERS_DATASET,
date=DATE,
push_to_hub_dataset=RESULTS_DATASET if PUSH_RESULTS_DATASET_TO_HUB else None,
set_default=True,
):
if not answers_dataset:
raise ValueError("Pass 'answers_dataset' to load the answers from it")
date = date or datetime.date.today().isoformat()
results = []
for answers_subset in answers_subsets:
*model_id, action_type, task = answers_subset.split("__")
model_id = "/".join(model_id)
ds = datasets.load_dataset(answers_dataset, answers_subset, split="test")
df = ds.to_pandas()
df["correct"] = df.apply(get_correct, axis=1)
acc = df["correct"].mean().item()
result = df.loc[0, ["model_id", "agent_action_type", "source"]].to_dict()
result["acc"] = acc
results.append(result)
df = pd.DataFrame(results)
if push_to_hub_dataset:
ds = datasets.Dataset.from_pandas(df)
config = date
set_default = set_default
ds.push_to_hub(
push_to_hub_dataset, config_name=config, set_default=set_default, commit_message=f"Upload {config} results"
)
return df<jupyter_output><empty_output><jupyter_text>Evaluation dataset<jupyter_code>import pandas as pd
# Choose the tasks to evaluate on:
# tasks = ["gaia"]
# or evaluate on all tasks: ["gaia", "math", "simpleqa"]
tasks = datasets.get_dataset_config_names(EVAL_DATASET)
print(tasks)
eval_ds = {task: datasets.load_dataset(EVAL_DATASET, task, split="test") for task in tasks}
pd.DataFrame(eval_ds["simpleqa"]).head()<jupyter_output>['gaia', 'math', 'simpleqa']<jupyter_text>Benchmark agents Open models<jupyter_code>open_model_ids = [
"meta-llama/Llama-3.3-70B-Instruct",
# "Qwen/QwQ-32B-Preview",
"Qwen/Qwen2.5-72B-Instruct",
"Qwen/Qwen2.5-Coder-32B-Instruct",
"meta-llama/Llama-3.2-3B-Instruct",
"meta-llama/Llama-3.1-8B-Instruct",
"mistralai/Mistral-Nemo-Instruct-2407",
# "HuggingFaceTB/SmolLM2-1.7B-Instruct",
# "meta-llama/Llama-3.1-70B-Instruct",
]
for model_id in open_model_ids:
print(f"Evaluating '{model_id}'...")
# action_type = "tool-calling"
# agent = ToolCallingAgent(
# tools=[GoogleSearchTool(), VisitWebpageTool(), PythonInterpreterTool()],
# model=HfApiModel(model_id),
# max_steps=10,
# )
# answer_questions(eval_ds, agent, model_id, action_type)
action_type = "code"
agent = CodeAgent(
tools=[GoogleSearchTool(), VisitWebpageTool()],
model=HfApiModel(model_id),
additional_authorized_imports=["numpy", "sympy"],
max_steps=10,
)
answer_questions(eval_ds, agent, model_id, action_type)
# Also evaluate vanilla model
action_type = "vanilla"
llm = HfApiModel(model_id)
answer_questions(eval_ds, llm, model_id, action_type, is_vanilla_llm=True)<jupyter_output><empty_output><jupyter_text>Closed models<jupyter_code>from smolagents import LiteLLMModel
litellm_model_ids = ["gpt-4o", "anthropic/claude-3-5-sonnet-latest"]
for model_id in litellm_model_ids:
print(f"Evaluating '{model_id}'...")
action_type = "tool-calling"
agent = ToolCallingAgent(
tools=[
GoogleSearchTool(),
VisitWebpageTool(),
PythonInterpreterTool(["numpy", "sympy"]),
],
model=LiteLLMModel(model_id),
max_steps=10,
)
answer_questions(eval_ds, agent, model_id, action_type)
action_type = "code"
agent = CodeAgent(
tools=[GoogleSearchTool(), VisitWebpageTool()],
model=LiteLLMModel(model_id),
additional_authorized_imports=["numpy", "sympy"],
max_steps=10,
)
answer_questions(eval_ds, agent, model_id, action_type)
# Also evaluate vanilla model
action_type = "vanilla"
llm = LiteLLMModel(model_id)
answer_questions(eval_ds, llm, model_id, action_type, is_vanilla_llm=True)
# import glob
# import json
# jsonl_files = glob.glob(f"output/*.jsonl")
# for file_path in jsonl_files:
# if "-Nemo-" in file_path and "-vanilla-" in file_path:
# print(file_path)
# # Read all lines and filter out SimpleQA sources
# filtered_lines = []
# removed = 0
# with open(file_path, "r", encoding="utf-8") as f:
# for line in f:
# try:
# data = json.loads(line.strip())
# data["answer"] = data["answer"]["content"]
# # if not any([question in data["question"] for question in eval_ds["question"]]):
# # removed +=1
# # else:
# filtered_lines.append(json.dumps(data) + "\n")
# except json.JSONDecodeError:
# print("Invalid line:", line)
# continue # Skip invalid JSON lines
# print(f"Removed {removed} lines.")
# # Write filtered content back to the same file
# with open(
# str(file_path).replace("-vanilla-", "-vanilla2-"), "w", encoding="utf-8"
# ) as f:
# f.writelines(filtered_lines)<jupyter_output><empty_output><jupyter_text>Score answers<jupyter_code>import datasets
import pandas as pd
# Choose the answers subsets to score:
# answers_subsets = ["meta-llama__Llama-3.1-8B-Instruct__code__gaia"]
# or get all the answers subsets present in the ANSWERS_DATASET
answers_subsets = datasets.get_dataset_config_names(ANSWERS_DATASET)
print("Number of answers_subsets", len(answers_subsets))
print("Example of answers_subset", answers_subsets[0])
result_df = score_answers(answers_subsets)
result_df["acc"] = (result_df["acc"] * 100).round(2)
result_df.head()
pivot_df = result_df.pivot_table(
index=["model_id", "source"],
    columns=["agent_action_type"],  # result_df uses this column name, not "action_type"
    values="acc",
fill_value=float("nan"),
).reset_index()<jupyter_output><empty_output><jupyter_text>Display results<jupyter_code>display(pivot_df)
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from matplotlib.legend_handler import HandlerTuple # Added import
# Assuming pivot_df is your original dataframe
models = pivot_df["model_id"].unique()
sources = pivot_df["source"].unique()
# Create figure and axis
plt.style.use("seaborn-v0_8-white")
fig, ax = plt.subplots(figsize=(15, 6))
# Set the width of each bar group and positions of the bars
width = 0.15 # width of each bar
spacing = 0.02 # space between bars within a group
group_spacing = 0.2 # space between model groups
# Calculate positions for the bars
num_sources = len(sources)
total_width_per_group = (width + spacing) * num_sources * 2 # *2 for agent and vanilla
x = np.arange(len(models)) * (total_width_per_group + group_spacing)
# Plot bars for each source
for i, source in enumerate(sources):
source_data = pivot_df[pivot_df["source"] == source]
agent_scores = [
source_data[source_data["model_id"] == model]["code"].values[0]
if len(source_data[source_data["model_id"] == model]) > 0
else np.nan
for model in models
]
vanilla_scores = [
source_data[source_data["model_id"] == model]["vanilla"].values[0]
if len(source_data[source_data["model_id"] == model]) > 0
else np.nan
for model in models
]
# Position calculation for each pair of bars
pos = x + i * (width * 2 + spacing)
agent_bars = ax.bar(pos, agent_scores, width, label=f"{source} (Agent)", alpha=0.8)
vanilla_bars = ax.bar(
pos + width * 0.6,
vanilla_scores,
width,
hatch="////",
alpha=0.5,
hatch_linewidth=2,
label=f"{source} (Vanilla)",
color="white",
edgecolor=agent_bars[0].get_facecolor(),
)
# Customize the plot
ax.set_ylabel("Score")
ax.set_title("Model Performance Comparison")
# Set x-axis ticks in the middle of each group
group_centers = x + (total_width_per_group - spacing) / 2
ax.set_xticks(group_centers)
# Wrap long model names to prevent overlap
wrapped_labels = ["\n".join(model.split("/")) for model in models]
ax.set_xticklabels(wrapped_labels, rotation=0, ha="center")
# Modify legend to combine agent and vanilla entries
handles, labels = ax.get_legend_handles_labels()
unique_sources = sources
legend_elements = [
(handles[i * 2], handles[i * 2 + 1], labels[i * 2].replace(" (Agent)", "")) for i in range(len(unique_sources))
]
custom_legend = ax.legend(
[(agent_handle, vanilla_handle) for agent_handle, vanilla_handle, _ in legend_elements],
[label for _, _, label in legend_elements],
handler_map={tuple: HandlerTuple(ndivide=None)},
bbox_to_anchor=(1.05, 1),
loc="upper left",
)
ax.yaxis.grid(True, linestyle="--", alpha=0.3)
ax.set_ylim(bottom=0)
plt.tight_layout()
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
plt.show()
def create_mathjax_table(pivot_df, formatted_df):
# Start the matrix environment with 4 columns
# l for left-aligned model and task, c for centered numbers
mathjax_table = "\\begin{array}{llcc}\n"
mathjax_table += "\\text{Model} & \\text{Task} & \\text{Agent} & \\text{Vanilla} \\\\\n"
mathjax_table += "\\hline\n"
# Sort the DataFrame by model_id and source
formatted_df = formatted_df.sort_values(["model_id", "source"])
current_model = None
for _, row in formatted_df.iterrows():
model = row["model_id"]
source = row["source"]
# Add a horizontal line between different models
if current_model is not None and current_model != model:
mathjax_table += "\\hline\n"
# Format model name
model_display = model.replace("_", "\\_")
if "Qwen" in model or "anthropic" in model:
model_display = f"\\textit{{{model_display}}}"
# If it's the same model as previous row, use empty space
if current_model == model:
model_display = "\\;"
# Add the data row
mathjax_table += f"{model_display} & {source} & {row['agent']} & {row['vanilla']} \\\\\n"
current_model = model
mathjax_table += "\\hline\n"
mathjax_table += "\\end{array}"
return mathjax_table
# Usage (after running your previous data processing code):
# mathjax_table = create_mathjax_table(pivot_df, formatted_df)
# print(mathjax_table)<jupyter_output><empty_output> | smolagents/examples/benchmark.ipynb/0 | {
"file_path": "smolagents/examples/benchmark.ipynb",
"repo_id": "smolagents",
"token_count": 8351
} |
import base64
import json
import mimetypes
import os
import uuid
from io import BytesIO
from typing import Optional
import requests
from dotenv import load_dotenv
from huggingface_hub import InferenceClient
from PIL import Image
from transformers import AutoProcessor
from smolagents import Tool, tool
load_dotenv(override=True)
idefics_processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b-chatty")
def process_images_and_text(image_path, query, client):
messages = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": query},
],
},
]
prompt_with_template = idefics_processor.apply_chat_template(messages, add_generation_prompt=True)
# load images from local directory
# encode images to strings which can be sent to the endpoint
def encode_local_image(image_path):
# load image
image = Image.open(image_path).convert("RGB")
# Convert the image to a base64 string
buffer = BytesIO()
image.save(buffer, format="JPEG") # Use the appropriate format (e.g., JPEG, PNG)
base64_image = base64.b64encode(buffer.getvalue()).decode("utf-8")
# add string formatting required by the endpoint
image_string = f"data:image/jpeg;base64,{base64_image}"
return image_string
image_string = encode_local_image(image_path)
    # insert the encoded image where the processor's <image> token sits; without a
    # "{}" placeholder the following .format() call would be a no-op
    prompt_with_images = prompt_with_template.replace("<image>", "![]({}) ").format(image_string)
payload = {
"inputs": prompt_with_images,
"parameters": {
"return_full_text": False,
"max_new_tokens": 200,
},
}
return json.loads(client.post(json=payload).decode())[0]
# Function to encode the image
def encode_image(image_path):
if image_path.startswith("http"):
user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36 Edg/119.0.0.0"
request_kwargs = {
"headers": {"User-Agent": user_agent},
"stream": True,
}
# Send a HTTP request to the URL
response = requests.get(image_path, **request_kwargs)
response.raise_for_status()
content_type = response.headers.get("content-type", "")
extension = mimetypes.guess_extension(content_type)
if extension is None:
extension = ".download"
fname = str(uuid.uuid4()) + extension
download_path = os.path.abspath(os.path.join("downloads", fname))
with open(download_path, "wb") as fh:
for chunk in response.iter_content(chunk_size=512):
fh.write(chunk)
image_path = download_path
with open(image_path, "rb") as image_file:
return base64.b64encode(image_file.read()).decode("utf-8")
headers = {"Content-Type": "application/json", "Authorization": f"Bearer {os.getenv('OPENAI_API_KEY')}"}
def resize_image(image_path):
img = Image.open(image_path)
width, height = img.size
img = img.resize((int(width / 2), int(height / 2)))
new_image_path = f"resized_{image_path}"
img.save(new_image_path)
return new_image_path
class VisualQATool(Tool):
name = "visualizer"
description = "A tool that can answer questions about attached images."
inputs = {
"image_path": {
"description": "The path to the image on which to answer the question",
"type": "string",
},
"question": {"description": "the question to answer", "type": "string", "nullable": True},
}
output_type = "string"
client = InferenceClient("HuggingFaceM4/idefics2-8b-chatty")
def forward(self, image_path: str, question: Optional[str] = None) -> str:
output = ""
add_note = False
if not question:
add_note = True
question = "Please write a detailed caption for this image."
try:
output = process_images_and_text(image_path, question, self.client)
except Exception as e:
print(e)
if "Payload Too Large" in str(e):
new_image_path = resize_image(image_path)
output = process_images_and_text(new_image_path, question, self.client)
if add_note:
output = (
f"You did not provide a particular question, so here is a detailed caption for the image: {output}"
)
return output
@tool
def visualizer(image_path: str, question: Optional[str] = None) -> str:
"""A tool that can answer questions about attached images.
Args:
image_path: The path to the image on which to answer the question. This should be a local path to downloaded image.
question: The question to answer.
"""
add_note = False
if not question:
add_note = True
question = "Please write a detailed caption for this image."
if not isinstance(image_path, str):
        raise Exception("You should provide at least an `image_path` string argument to this tool!")
mime_type, _ = mimetypes.guess_type(image_path)
base64_image = encode_image(image_path)
payload = {
"model": "gpt-4o",
"messages": [
{
"role": "user",
"content": [
{"type": "text", "text": question},
{"type": "image_url", "image_url": {"url": f"data:{mime_type};base64,{base64_image}"}},
],
}
],
"max_tokens": 1000,
}
response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
try:
output = response.json()["choices"][0]["message"]["content"]
except Exception:
raise Exception(f"Response format unexpected: {response.json()}")
if add_note:
output = f"You did not provide a particular question, so here is a detailed caption for the image: {output}"
return output
| smolagents/examples/open_deep_research/scripts/visual_qa.py/0 | {
"file_path": "smolagents/examples/open_deep_research/scripts/visual_qa.py",
"repo_id": "smolagents",
"token_count": 2519
} |
#!/usr/bin/env python
# coding=utf-8
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import logging
import os
import random
from copy import deepcopy
from dataclasses import asdict, dataclass
from enum import Enum
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Union
from huggingface_hub import InferenceClient
from huggingface_hub.utils import is_torch_available
from PIL import Image
from .tools import Tool
from .utils import _is_package_available, encode_image_base64, make_image_url
if TYPE_CHECKING:
from transformers import StoppingCriteriaList
logger = logging.getLogger(__name__)
DEFAULT_JSONAGENT_REGEX_GRAMMAR = {
"type": "regex",
"value": 'Thought: .+?\\nAction:\\n\\{\\n\\s{4}"action":\\s"[^"\\n]+",\\n\\s{4}"action_input":\\s"[^"\\n]+"\\n\\}\\n<end_code>',
}
DEFAULT_CODEAGENT_REGEX_GRAMMAR = {
"type": "regex",
"value": "Thought: .+?\\nCode:\\n```(?:py|python)?\\n(?:.|\\s)+?\\n```<end_code>",
}
def get_dict_from_nested_dataclasses(obj, ignore_key=None):
def convert(obj):
if hasattr(obj, "__dataclass_fields__"):
return {k: convert(v) for k, v in asdict(obj).items() if k != ignore_key}
return obj
return convert(obj)
@dataclass
class ChatMessageToolCallDefinition:
arguments: Any
name: str
description: Optional[str] = None
@classmethod
def from_hf_api(cls, tool_call_definition) -> "ChatMessageToolCallDefinition":
return cls(
arguments=tool_call_definition.arguments,
name=tool_call_definition.name,
description=tool_call_definition.description,
)
@dataclass
class ChatMessageToolCall:
function: ChatMessageToolCallDefinition
id: str
type: str
@classmethod
def from_hf_api(cls, tool_call) -> "ChatMessageToolCall":
return cls(
function=ChatMessageToolCallDefinition.from_hf_api(tool_call.function),
id=tool_call.id,
type=tool_call.type,
)
@dataclass
class ChatMessage:
role: str
content: Optional[str] = None
tool_calls: Optional[List[ChatMessageToolCall]] = None
raw: Optional[Any] = None # Stores the raw output from the API
def model_dump_json(self):
return json.dumps(get_dict_from_nested_dataclasses(self, ignore_key="raw"))
@classmethod
def from_hf_api(cls, message, raw) -> "ChatMessage":
tool_calls = None
if getattr(message, "tool_calls", None) is not None:
tool_calls = [ChatMessageToolCall.from_hf_api(tool_call) for tool_call in message.tool_calls]
return cls(role=message.role, content=message.content, tool_calls=tool_calls, raw=raw)
@classmethod
def from_dict(cls, data: dict) -> "ChatMessage":
if data.get("tool_calls"):
tool_calls = [
ChatMessageToolCall(
function=ChatMessageToolCallDefinition(**tc["function"]), id=tc["id"], type=tc["type"]
)
for tc in data["tool_calls"]
]
data["tool_calls"] = tool_calls
return cls(**data)
def dict(self):
return json.dumps(get_dict_from_nested_dataclasses(self))
def parse_json_if_needed(arguments: Union[str, dict]) -> Union[str, dict]:
if isinstance(arguments, dict):
return arguments
else:
try:
return json.loads(arguments)
except Exception:
return arguments
def parse_tool_args_if_needed(message: ChatMessage) -> ChatMessage:
for tool_call in message.tool_calls:
tool_call.function.arguments = parse_json_if_needed(tool_call.function.arguments)
return message
class MessageRole(str, Enum):
USER = "user"
ASSISTANT = "assistant"
SYSTEM = "system"
TOOL_CALL = "tool-call"
TOOL_RESPONSE = "tool-response"
@classmethod
def roles(cls):
return [r.value for r in cls]
tool_role_conversions = {
MessageRole.TOOL_CALL: MessageRole.ASSISTANT,
MessageRole.TOOL_RESPONSE: MessageRole.USER,
}
def get_tool_json_schema(tool: Tool) -> Dict:
properties = deepcopy(tool.inputs)
required = []
for key, value in properties.items():
if value["type"] == "any":
value["type"] = "string"
if not ("nullable" in value and value["nullable"]):
required.append(key)
return {
"type": "function",
"function": {
"name": tool.name,
"description": tool.description,
"parameters": {
"type": "object",
"properties": properties,
"required": required,
},
},
}
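# Illustrative output (hedged sketch for a hypothetical tool named "get_weather"
# with a single required string input "city"):
# {
#     "type": "function",
#     "function": {
#         "name": "get_weather",
#         "description": "...",
#         "parameters": {
#             "type": "object",
#             "properties": {"city": {"type": "string", "description": "..."}},
#             "required": ["city"],
#         },
#     },
# }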
def remove_stop_sequences(content: str, stop_sequences: List[str]) -> str:
for stop_seq in stop_sequences:
if content[-len(stop_seq) :] == stop_seq:
content = content[: -len(stop_seq)]
return content
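# e.g. remove_stop_sequences("Thought: done<end_code>", ["<end_code>"]) -> "Thought: done"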
def get_clean_message_list(
message_list: List[Dict[str, str]],
role_conversions: Dict[MessageRole, MessageRole] = {},
convert_images_to_image_urls: bool = False,
flatten_messages_as_text: bool = False,
) -> List[Dict[str, str]]:
"""
    Clean and standardize a list of chat messages.
    Subsequent messages with the same role are concatenated into a single message, producing an
    output list that is compatible with the transformers LLM chat template.
Args:
message_list (`list[dict[str, str]]`): List of chat messages.
role_conversions (`dict[MessageRole, MessageRole]`, *optional* ): Mapping to convert roles.
convert_images_to_image_urls (`bool`, default `False`): Whether to convert images to image URLs.
flatten_messages_as_text (`bool`, default `False`): Whether to flatten messages as text.
"""
output_message_list = []
message_list = deepcopy(message_list) # Avoid modifying the original list
for message in message_list:
role = message["role"]
if role not in MessageRole.roles():
raise ValueError(f"Incorrect role {role}, only {MessageRole.roles()} are supported for now.")
if role in role_conversions:
message["role"] = role_conversions[role]
# encode images if needed
if isinstance(message["content"], list):
for element in message["content"]:
if element["type"] == "image":
assert not flatten_messages_as_text, f"Cannot use images with {flatten_messages_as_text=}"
if convert_images_to_image_urls:
element.update(
{
"type": "image_url",
"image_url": {"url": make_image_url(encode_image_base64(element.pop("image")))},
}
)
else:
element["image"] = encode_image_base64(element["image"])
if len(output_message_list) > 0 and message["role"] == output_message_list[-1]["role"]:
assert isinstance(message["content"], list), "Error: wrong content:" + str(message["content"])
if flatten_messages_as_text:
output_message_list[-1]["content"] += message["content"][0]["text"]
else:
output_message_list[-1]["content"] += message["content"]
else:
if flatten_messages_as_text:
content = message["content"][0]["text"]
else:
content = message["content"]
output_message_list.append({"role": message["role"], "content": content})
return output_message_list
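# Illustrative behaviour (not part of the library source): two consecutive
# {"role": "user", ...} messages are merged into a single entry, and with
# role_conversions=tool_role_conversions a "tool-response" message is first
# remapped to the "user" role before merging.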
class Model:
def __init__(self, **kwargs):
self.last_input_token_count = None
self.last_output_token_count = None
self.kwargs = kwargs
def _prepare_completion_kwargs(
self,
messages: List[Dict[str, str]],
stop_sequences: Optional[List[str]] = None,
grammar: Optional[str] = None,
tools_to_call_from: Optional[List[Tool]] = None,
custom_role_conversions: Optional[Dict[str, str]] = None,
convert_images_to_image_urls: bool = False,
flatten_messages_as_text: bool = False,
**kwargs,
) -> Dict:
"""
Prepare parameters required for model invocation, handling parameter priorities.
Parameter priority from high to low:
1. Explicitly passed kwargs
2. Specific parameters (stop_sequences, grammar, etc.)
3. Default values in self.kwargs
"""
# Clean and standardize the message list
messages = get_clean_message_list(
messages,
role_conversions=custom_role_conversions or tool_role_conversions,
convert_images_to_image_urls=convert_images_to_image_urls,
flatten_messages_as_text=flatten_messages_as_text,
)
# Use self.kwargs as the base configuration
completion_kwargs = {
**self.kwargs,
"messages": messages,
}
# Handle specific parameters
if stop_sequences is not None:
completion_kwargs["stop"] = stop_sequences
if grammar is not None:
completion_kwargs["grammar"] = grammar
# Handle tools parameter
if tools_to_call_from:
completion_kwargs.update(
{
"tools": [get_tool_json_schema(tool) for tool in tools_to_call_from],
"tool_choice": "required",
}
)
# Finally, use the passed-in kwargs to override all settings
completion_kwargs.update(kwargs)
return completion_kwargs
def get_token_counts(self) -> Dict[str, int]:
return {
"input_token_count": self.last_input_token_count,
"output_token_count": self.last_output_token_count,
}
def __call__(
self,
messages: List[Dict[str, str]],
stop_sequences: Optional[List[str]] = None,
grammar: Optional[str] = None,
tools_to_call_from: Optional[List[Tool]] = None,
**kwargs,
) -> ChatMessage:
"""Process the input messages and return the model's response.
Parameters:
messages (`List[Dict[str, str]]`):
A list of message dictionaries to be processed. Each dictionary should have the structure `{"role": "user/system", "content": "message content"}`.
stop_sequences (`List[str]`, *optional*):
A list of strings that will stop the generation if encountered in the model's output.
grammar (`str`, *optional*):
The grammar or formatting structure to use in the model's response.
tools_to_call_from (`List[Tool]`, *optional*):
A list of tools that the model can use to generate responses.
**kwargs:
Additional keyword arguments to be passed to the underlying model.
Returns:
`ChatMessage`: A chat message object containing the model's response.
"""
pass # To be implemented in child classes!
class HfApiModel(Model):
"""A class to interact with Hugging Face's Inference API for language model interaction.
This model allows you to communicate with Hugging Face's models using the Inference API. It can be used in both serverless mode or with a dedicated endpoint, supporting features like stop sequences and grammar customization.
Parameters:
model_id (`str`, *optional*, defaults to `"Qwen/Qwen2.5-Coder-32B-Instruct"`):
The Hugging Face model ID to be used for inference. This can be a path or model identifier from the Hugging Face model hub.
provider (`str`, *optional*):
Name of the provider to use for inference. Can be `"replicate"`, `"together"`, `"fal-ai"`, `"sambanova"` or `"hf-inference"`.
            Defaults to `"hf-inference"` (HF Inference API).
token (`str`, *optional*):
            Token used by the Hugging Face API for authentication. This token needs to be authorized to 'Make calls to the serverless Inference API'.
If the model is gated (like Llama-3 models), the token also needs 'Read access to contents of all public gated repos you can access'.
If not provided, the class will try to use environment variable 'HF_TOKEN', else use the token stored in the Hugging Face CLI configuration.
timeout (`int`, *optional*, defaults to 120):
Timeout for the API request, in seconds.
custom_role_conversions (`dict[str, str]`, *optional*):
Custom role conversion mapping to convert message roles in others.
Useful for specific models that do not support specific message roles like "system".
**kwargs:
Additional keyword arguments to pass to the Hugging Face API.
Raises:
ValueError:
If the model name is not provided.
Example:
```python
>>> engine = HfApiModel(
... model_id="Qwen/Qwen2.5-Coder-32B-Instruct",
... token="your_hf_token_here",
... max_tokens=5000,
... )
>>> messages = [{"role": "user", "content": "Explain quantum mechanics in simple terms."}]
>>> response = engine(messages, stop_sequences=["END"])
>>> print(response)
"Quantum mechanics is the branch of physics that studies..."
```
"""
def __init__(
self,
model_id: str = "Qwen/Qwen2.5-Coder-32B-Instruct",
provider: Optional[str] = None,
token: Optional[str] = None,
timeout: Optional[int] = 120,
custom_role_conversions: Optional[Dict[str, str]] = None,
**kwargs,
):
super().__init__(**kwargs)
self.model_id = model_id
self.provider = provider
if token is None:
token = os.getenv("HF_TOKEN")
self.client = InferenceClient(self.model_id, provider=provider, token=token, timeout=timeout)
self.custom_role_conversions = custom_role_conversions
def __call__(
self,
messages: List[Dict[str, str]],
stop_sequences: Optional[List[str]] = None,
grammar: Optional[str] = None,
tools_to_call_from: Optional[List[Tool]] = None,
**kwargs,
) -> ChatMessage:
completion_kwargs = self._prepare_completion_kwargs(
messages=messages,
stop_sequences=stop_sequences,
grammar=grammar,
tools_to_call_from=tools_to_call_from,
convert_images_to_image_urls=True,
custom_role_conversions=self.custom_role_conversions,
**kwargs,
)
response = self.client.chat_completion(**completion_kwargs)
self.last_input_token_count = response.usage.prompt_tokens
self.last_output_token_count = response.usage.completion_tokens
message = ChatMessage.from_hf_api(response.choices[0].message, raw=response)
if tools_to_call_from is not None:
return parse_tool_args_if_needed(message)
return message
class TransformersModel(Model):
"""A class that uses Hugging Face's Transformers library for language model interaction.
This model allows you to load and use Hugging Face's models locally using the Transformers library. It supports features like stop sequences and grammar customization.
> [!TIP]
> You must have `transformers` and `torch` installed on your machine. Please run `pip install smolagents[transformers]` if it's not the case.
Parameters:
model_id (`str`, *optional*, defaults to `"Qwen/Qwen2.5-Coder-32B-Instruct"`):
The Hugging Face model ID to be used for inference. This can be a path or model identifier from the Hugging Face model hub.
device_map (`str`, *optional*):
The device_map to initialize your model with.
torch_dtype (`str`, *optional*):
The torch_dtype to initialize your model with.
trust_remote_code (bool, default `False`):
            Some models on the Hub require running remote code: for such models, you must set this flag to `True`.
        **kwargs:
            Additional keyword arguments to pass to `model.generate()`, for instance `max_new_tokens` or `device`.
Raises:
ValueError:
If the model name is not provided.
Example:
```python
>>> engine = TransformersModel(
... model_id="Qwen/Qwen2.5-Coder-32B-Instruct",
... device="cuda",
... max_new_tokens=5000,
... )
>>> messages = [{"role": "user", "content": "Explain quantum mechanics in simple terms."}]
>>> response = engine(messages, stop_sequences=["END"])
>>> print(response)
"Quantum mechanics is the branch of physics that studies..."
```
"""
def __init__(
self,
model_id: Optional[str] = None,
device_map: Optional[str] = None,
torch_dtype: Optional[str] = None,
trust_remote_code: bool = False,
**kwargs,
):
super().__init__(**kwargs)
if not is_torch_available() or not _is_package_available("transformers"):
raise ModuleNotFoundError(
"Please install 'transformers' extra to use 'TransformersModel': `pip install 'smolagents[transformers]'`"
)
import torch
from transformers import AutoModelForCausalLM, AutoModelForImageTextToText, AutoProcessor, AutoTokenizer
default_model_id = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
if model_id is None:
model_id = default_model_id
            logger.warning(f"`model_id` not provided, using this default model: '{model_id}'")
self.model_id = model_id
self.kwargs = kwargs
if device_map is None:
device_map = "cuda" if torch.cuda.is_available() else "cpu"
logger.info(f"Using device: {device_map}")
self._is_vlm = False
try:
self.model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map=device_map,
torch_dtype=torch_dtype,
trust_remote_code=trust_remote_code,
)
self.tokenizer = AutoTokenizer.from_pretrained(model_id)
except ValueError as e:
if "Unrecognized configuration class" in str(e):
self.model = AutoModelForImageTextToText.from_pretrained(model_id, device_map=device_map)
self.processor = AutoProcessor.from_pretrained(model_id)
self._is_vlm = True
else:
raise e
except Exception as e:
logger.warning(
f"Failed to load tokenizer and model for {model_id=}: {e}. Loading default tokenizer and model instead from {default_model_id=}."
)
self.model_id = default_model_id
self.tokenizer = AutoTokenizer.from_pretrained(default_model_id)
            self.model = AutoModelForCausalLM.from_pretrained(default_model_id, device_map=device_map, torch_dtype=torch_dtype)
def make_stopping_criteria(self, stop_sequences: List[str], tokenizer) -> "StoppingCriteriaList":
from transformers import StoppingCriteria, StoppingCriteriaList
class StopOnStrings(StoppingCriteria):
def __init__(self, stop_strings: List[str], tokenizer):
self.stop_strings = stop_strings
self.tokenizer = tokenizer
self.stream = ""
def reset(self):
self.stream = ""
def __call__(self, input_ids, scores, **kwargs):
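                # Decode only the newest token, append it to a rolling text buffer,
                # and stop generation once the buffer ends with any stop string.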
generated = self.tokenizer.decode(input_ids[0][-1], skip_special_tokens=True)
self.stream += generated
                return any(self.stream.endswith(stop_string) for stop_string in self.stop_strings)
return StoppingCriteriaList([StopOnStrings(stop_sequences, tokenizer)])
def __call__(
self,
messages: List[Dict[str, str]],
stop_sequences: Optional[List[str]] = None,
grammar: Optional[str] = None,
tools_to_call_from: Optional[List[Tool]] = None,
images: Optional[List[Image.Image]] = None,
**kwargs,
) -> ChatMessage:
completion_kwargs = self._prepare_completion_kwargs(
messages=messages,
stop_sequences=stop_sequences,
grammar=grammar,
flatten_messages_as_text=(not self._is_vlm),
**kwargs,
)
messages = completion_kwargs.pop("messages")
stop_sequences = completion_kwargs.pop("stop", None)
max_new_tokens = (
kwargs.get("max_new_tokens")
or kwargs.get("max_tokens")
or self.kwargs.get("max_new_tokens")
or self.kwargs.get("max_tokens")
)
if max_new_tokens:
completion_kwargs["max_new_tokens"] = max_new_tokens
if hasattr(self, "processor"):
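            # VLM path: the processor builds a joint text+image prompt from the chat template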
            # Images may arrive as PIL images (per the signature) or as file paths; only open the latter
            images = [img if isinstance(img, Image.Image) else Image.open(img) for img in images] if images else None
prompt_tensor = self.processor.apply_chat_template(
messages,
tools=[get_tool_json_schema(tool) for tool in tools_to_call_from] if tools_to_call_from else None,
return_tensors="pt",
tokenize=True,
return_dict=True,
images=images,
                add_generation_prompt=bool(tools_to_call_from),
)
else:
prompt_tensor = self.tokenizer.apply_chat_template(
messages,
tools=[get_tool_json_schema(tool) for tool in tools_to_call_from] if tools_to_call_from else None,
return_tensors="pt",
return_dict=True,
                add_generation_prompt=bool(tools_to_call_from),
)
prompt_tensor = prompt_tensor.to(self.model.device)
count_prompt_tokens = prompt_tensor["input_ids"].shape[1]
if stop_sequences:
stopping_criteria = self.make_stopping_criteria(
stop_sequences, tokenizer=self.processor if hasattr(self, "processor") else self.tokenizer
)
else:
stopping_criteria = None
out = self.model.generate(
**prompt_tensor,
stopping_criteria=stopping_criteria,
**completion_kwargs,
)
generated_tokens = out[0, count_prompt_tokens:]
if hasattr(self, "processor"):
output = self.processor.decode(generated_tokens, skip_special_tokens=True)
else:
output = self.tokenizer.decode(generated_tokens, skip_special_tokens=True)
self.last_input_token_count = count_prompt_tokens
self.last_output_token_count = len(generated_tokens)
if stop_sequences is not None:
output = remove_stop_sequences(output, stop_sequences)
if tools_to_call_from is None:
return ChatMessage(
role="assistant",
content=output,
raw={"out": out, "completion_kwargs": completion_kwargs},
)
else:
if "Action:" in output:
output = output.split("Action:", 1)[1].strip()
try:
start_index = output.index("{")
end_index = output.rindex("}")
output = output[start_index : end_index + 1]
            except ValueError as e:
                raise ValueError("No JSON blob found in output!") from e
try:
parsed_output = json.loads(output)
except json.JSONDecodeError as e:
raise ValueError(f"Tool call '{output}' has an invalid JSON structure: {e}")
tool_name = parsed_output.get("name")
tool_arguments = parsed_output.get("arguments")
return ChatMessage(
role="assistant",
content="",
tool_calls=[
ChatMessageToolCall(
id="".join(random.choices("0123456789", k=5)),
type="function",
function=ChatMessageToolCallDefinition(name=tool_name, arguments=tool_arguments),
)
],
raw={"out": out, "completion_kwargs": completion_kwargs},
)
class LiteLLMModel(Model):
"""This model connects to [LiteLLM](https://www.litellm.ai/) as a gateway to hundreds of LLMs.
Parameters:
        model_id (`str`):
            The model identifier to use with LiteLLM (e.g. "anthropic/claude-3-5-sonnet-20240620").
api_base (`str`, *optional*):
The base URL of the OpenAI-compatible API server.
api_key (`str`, *optional*):
The API key to use for authentication.
custom_role_conversions (`dict[str, str]`, *optional*):
            Custom role conversion mapping to convert message roles into others.
            Useful for specific models that do not support certain message roles, like "system".
        **kwargs:
            Additional keyword arguments to pass to the LiteLLM completion call.
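    Example (illustrative sketch, assuming the `litellm` extra is installed and a valid Anthropic API key):
    ```python
    >>> engine = LiteLLMModel(
    ...     model_id="anthropic/claude-3-5-sonnet-20240620",
    ...     api_key="your_api_key_here",
    ... )
    >>> messages = [{"role": "user", "content": "Explain quantum mechanics in simple terms."}]
    >>> response = engine(messages, stop_sequences=["END"])
    ```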
"""
def __init__(
self,
model_id: str = "anthropic/claude-3-5-sonnet-20240620",
        api_base: Optional[str] = None,
        api_key: Optional[str] = None,
custom_role_conversions: Optional[Dict[str, str]] = None,
**kwargs,
):
try:
import litellm
except ModuleNotFoundError:
raise ModuleNotFoundError(
"Please install 'litellm' extra to use LiteLLMModel: `pip install 'smolagents[litellm]'`"
)
super().__init__(**kwargs)
self.model_id = model_id
        # IMPORTANT - Set this to True to add the function schema to the prompt for non-OpenAI LLMs
litellm.add_function_to_prompt = True
self.api_base = api_base
self.api_key = api_key
self.custom_role_conversions = custom_role_conversions
def __call__(
self,
messages: List[Dict[str, str]],
stop_sequences: Optional[List[str]] = None,
grammar: Optional[str] = None,
tools_to_call_from: Optional[List[Tool]] = None,
**kwargs,
) -> ChatMessage:
import litellm
completion_kwargs = self._prepare_completion_kwargs(
messages=messages,
stop_sequences=stop_sequences,
grammar=grammar,
tools_to_call_from=tools_to_call_from,
model=self.model_id,
api_base=self.api_base,
api_key=self.api_key,
convert_images_to_image_urls=True,
flatten_messages_as_text=self.model_id.startswith("ollama"),
custom_role_conversions=self.custom_role_conversions,
**kwargs,
)
response = litellm.completion(**completion_kwargs)
self.last_input_token_count = response.usage.prompt_tokens
self.last_output_token_count = response.usage.completion_tokens
message = ChatMessage.from_dict(
response.choices[0].message.model_dump(include={"role", "content", "tool_calls"})
)
message.raw = response
if tools_to_call_from is not None:
return parse_tool_args_if_needed(message)
return message
class OpenAIServerModel(Model):
"""This model connects to an OpenAI-compatible API server.
Parameters:
model_id (`str`):
The model identifier to use on the server (e.g. "gpt-3.5-turbo").
api_base (`str`, *optional*):
The base URL of the OpenAI-compatible API server.
api_key (`str`, *optional*):
The API key to use for authentication.
organization (`str`, *optional*):
The organization to use for the API request.
project (`str`, *optional*):
The project to use for the API request.
custom_role_conversions (`dict[str, str]`, *optional*):
            Custom role conversion mapping to convert message roles into others.
            Useful for specific models that do not support certain message roles, like "system".
**kwargs:
Additional keyword arguments to pass to the OpenAI API.
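    Example (illustrative sketch, assuming a reachable OpenAI-compatible server and a valid key):
    ```python
    >>> engine = OpenAIServerModel(
    ...     model_id="gpt-4o-mini",
    ...     api_base="https://api.openai.com/v1",
    ...     api_key="your_api_key_here",
    ... )
    >>> response = engine([{"role": "user", "content": "Explain quantum mechanics in simple terms."}])
    ```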
"""
def __init__(
self,
model_id: str,
api_base: Optional[str] = None,
api_key: Optional[str] = None,
        organization: Optional[str] = None,
        project: Optional[str] = None,
custom_role_conversions: Optional[Dict[str, str]] = None,
**kwargs,
):
try:
import openai
except ModuleNotFoundError:
raise ModuleNotFoundError(
"Please install 'openai' extra to use OpenAIServerModel: `pip install 'smolagents[openai]'`"
) from None
super().__init__(**kwargs)
self.model_id = model_id
self.client = openai.OpenAI(
base_url=api_base,
api_key=api_key,
organization=organization,
project=project,
)
self.custom_role_conversions = custom_role_conversions
def __call__(
self,
messages: List[Dict[str, str]],
stop_sequences: Optional[List[str]] = None,
grammar: Optional[str] = None,
tools_to_call_from: Optional[List[Tool]] = None,
**kwargs,
) -> ChatMessage:
completion_kwargs = self._prepare_completion_kwargs(
messages=messages,
stop_sequences=stop_sequences,
grammar=grammar,
tools_to_call_from=tools_to_call_from,
model=self.model_id,
custom_role_conversions=self.custom_role_conversions,
convert_images_to_image_urls=True,
**kwargs,
)
response = self.client.chat.completions.create(**completion_kwargs)
self.last_input_token_count = response.usage.prompt_tokens
self.last_output_token_count = response.usage.completion_tokens
message = ChatMessage.from_dict(
response.choices[0].message.model_dump(include={"role", "content", "tool_calls"})
)
message.raw = response
if tools_to_call_from is not None:
return parse_tool_args_if_needed(message)
return message
class AzureOpenAIServerModel(OpenAIServerModel):
"""This model connects to an Azure OpenAI deployment.
Parameters:
model_id (`str`):
The model deployment name to use when connecting (e.g. "gpt-4o-mini").
azure_endpoint (`str`, *optional*):
The Azure endpoint, including the resource, e.g. `https://example-resource.azure.openai.com/`. If not provided, it will be inferred from the `AZURE_OPENAI_ENDPOINT` environment variable.
api_key (`str`, *optional*):
The API key to use for authentication. If not provided, it will be inferred from the `AZURE_OPENAI_API_KEY` environment variable.
api_version (`str`, *optional*):
The API version to use. If not provided, it will be inferred from the `OPENAI_API_VERSION` environment variable.
custom_role_conversions (`dict[str, str]`, *optional*):
            Custom role conversion mapping to convert message roles into others.
            Useful for specific models that do not support certain message roles, like "system".
**kwargs:
Additional keyword arguments to pass to the Azure OpenAI API.
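    Example (illustrative sketch; the endpoint, key and version can also come from the environment variables documented above):
    ```python
    >>> engine = AzureOpenAIServerModel(
    ...     model_id="gpt-4o-mini",
    ...     azure_endpoint="https://example-resource.azure.openai.com/",
    ...     api_key="your_azure_api_key_here",
    ...     api_version="2024-06-01",
    ... )
    >>> response = engine([{"role": "user", "content": "Explain quantum mechanics in simple terms."}])
    ```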
"""
def __init__(
self,
model_id: str,
azure_endpoint: Optional[str] = None,
api_key: Optional[str] = None,
api_version: Optional[str] = None,
custom_role_conversions: Optional[Dict[str, str]] = None,
**kwargs,
):
# read the api key manually, to avoid super().__init__() trying to use the wrong api_key (OPENAI_API_KEY)
if api_key is None:
api_key = os.environ.get("AZURE_OPENAI_API_KEY")
super().__init__(model_id=model_id, api_key=api_key, custom_role_conversions=custom_role_conversions, **kwargs)
# if we've reached this point, it means the openai package is available (checked in baseclass) so go ahead and import it
import openai
self.client = openai.AzureOpenAI(api_key=api_key, api_version=api_version, azure_endpoint=azure_endpoint)
__all__ = [
"MessageRole",
"tool_role_conversions",
"get_clean_message_list",
"Model",
"TransformersModel",
"HfApiModel",
"LiteLLMModel",
"OpenAIServerModel",
"AzureOpenAIServerModel",
"ChatMessage",
]
| smolagents/src/smolagents/models.py/0 | {
"file_path": "smolagents/src/smolagents/models.py",
"repo_id": "smolagents",
"token_count": 14533
} |
# coding=utf-8
# Copyright 2024 HuggingFace Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
from typing import Optional, Tuple
from smolagents._function_type_hints_utils import get_json_schema
class AgentTextTests(unittest.TestCase):
def test_return_none(self):
def fn(x: int, y: Optional[Tuple[str, str, float]] = None) -> None:
"""
Test function
Args:
x: The first input
y: The second input
"""
pass
schema = get_json_schema(fn)
expected_schema = {
"name": "fn",
"description": "Test function",
"parameters": {
"type": "object",
"properties": {
"x": {"type": "integer", "description": "The first input"},
"y": {
"type": "array",
"description": "The second input",
"nullable": True,
"prefixItems": [{"type": "string"}, {"type": "string"}, {"type": "number"}],
},
},
"required": ["x"],
},
"return": {"type": "null"},
}
self.assertEqual(
schema["function"]["parameters"]["properties"]["y"], expected_schema["parameters"]["properties"]["y"]
)
self.assertEqual(schema["function"], expected_schema)
| smolagents/tests/test_function_type_hints_utils.py/0 | {
"file_path": "smolagents/tests/test_function_type_hints_utils.py",
"repo_id": "smolagents",
"token_count": 890
} |
[workspace]
members = [
"benchmark",
"backends/v2",
"backends/v3",
"backends/grpc-metadata",
"backends/trtllm",
"launcher",
"router"
]
default-members = [
"benchmark",
"backends/v2",
"backends/v3",
"backends/grpc-metadata",
# "backends/trtllm",
"launcher",
"router"
]
resolver = "2"
[workspace.package]
version = "3.1.1-dev0"
edition = "2021"
authors = ["Olivier Dehaene"]
homepage = "https://github.com/huggingface/text-generation-inference"
[workspace.dependencies]
base64 = "0.22.0"
tokenizers = { version = "0.20.0", features = ["http"] }
hf-hub = { version = "0.3.1", features = ["tokio"] }
metrics = { version = "0.23.0" }
metrics-exporter-prometheus = { version = "0.15.1", features = [] }
minijinja = { version = "2.2.0", features = ["json"] }
minijinja-contrib = { version = "2.0.2", features = ["pycompat"] }
pyo3 = { version = "0.22.2", features = ["auto-initialize"] }
[profile.release]
incremental = true
[profile.release-binary]
inherits = "release"
debug = 1
incremental = true
panic = "abort"
[profile.release-opt]
inherits = "release"
debug = 0
incremental = false
lto = "fat"
opt-level = 3
codegen-units = 1
| text-generation-inference/Cargo.toml/0 | {
"file_path": "text-generation-inference/Cargo.toml",
"repo_id": "text-generation-inference",
"token_count": 500
} |
/// Single shard Client
use crate::v2::pb;
use crate::{ClientError, Result};
use crate::WARMUP_IMAGE_BASE64;
use grpc_metadata::InjectTelemetryContext;
use pb::generate::v2::text_generation_service_client::TextGenerationServiceClient;
use pb::generate::v2::*;
use std::cmp::min;
use std::time::Duration;
use tonic::transport::{Channel, Uri};
use tracing::instrument;
/// Text Generation Inference gRPC client
#[derive(Debug, Clone)]
pub struct Client {
stub: TextGenerationServiceClient<Channel>,
}
impl Client {
/// Returns a client connected to the given url
pub async fn connect(uri: Uri) -> Result<Self> {
let channel = Channel::builder(uri).connect().await?;
Ok(Self {
stub: TextGenerationServiceClient::new(channel),
})
}
/// Returns a client connected to the given unix socket
pub async fn connect_uds(path: String) -> Result<Self> {
let channel = Channel::from_shared("http://[::]:50051".to_string())
.unwrap()
.connect_with_connector(tower::service_fn(move |_: Uri| {
tokio::net::UnixStream::connect(path.clone())
}))
.await?;
Ok(Self {
stub: TextGenerationServiceClient::new(channel),
})
}
/// Returns a list of uris or unix sockets of all shards
#[instrument(skip(self))]
pub async fn service_discovery(&mut self) -> Result<Vec<String>> {
let request = tonic::Request::new(ServiceDiscoveryRequest {}).inject_context();
let response = self.stub.service_discovery(request).await.map_err(|_| {
ClientError::Connection("Server does not support v2 interface".to_string())
})?;
let urls = response
.into_inner()
.urls
.into_iter()
// Remove unix socket prefix
.map(|url| match url.strip_prefix("unix://") {
None => url,
Some(stripped_url) => stripped_url.to_string(),
})
.collect();
Ok(urls)
}
/// Get model info
#[instrument(skip(self))]
pub async fn info(&mut self) -> Result<InfoResponse> {
let request = tonic::Request::new(InfoRequest {}).inject_context();
let response = self.stub.info(request).await?.into_inner();
Ok(response)
}
/// Get model health
#[instrument(skip(self))]
pub async fn health(&mut self) -> Result<HealthResponse> {
let request = tonic::Request::new(HealthRequest {}).inject_context();
let response = self.stub.health(request).await?.into_inner();
Ok(response)
}
/// Clear the past generations cache
#[instrument(skip(self))]
pub async fn clear_cache(&mut self, batch_id: Option<u64>) -> Result<()> {
let request = tonic::Request::new(ClearCacheRequest { id: batch_id }).inject_context();
self.stub.clear_cache(request).await?;
Ok(())
}
/// Filter a cached batch
#[instrument(skip(self))]
pub async fn filter_batch(
&mut self,
batch_id: u64,
request_ids: Vec<u64>,
) -> Result<Option<CachedBatch>> {
let request = tonic::Request::new(FilterBatchRequest {
batch_id,
request_ids,
})
.inject_context();
let filtered_batch = self.stub.filter_batch(request).await?.into_inner();
Ok(filtered_batch.batch)
}
/// Warmup on a max size batch
///
/// Returns the maximum amount of tokens supported by the hardware
#[instrument(skip_all)]
pub async fn warmup(
&mut self,
max_input_length: u32,
max_prefill_tokens: u32,
max_total_tokens: u32,
max_batch_size: Option<usize>,
) -> Result<Option<u32>> {
let mut n_tokens = 0;
let mut requests = Vec::new();
// Create requests
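        // e.g. with max_prefill_tokens = 4096 and max_input_length = 1024, this builds
        // four dummy requests (fewer if max_batch_size caps the batch first)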
while n_tokens < max_prefill_tokens {
let truncate = min(max_input_length, max_prefill_tokens - n_tokens);
let mut inputs = String::new();
inputs.push_str(&"_test ".to_string().repeat(max_input_length as usize));
if n_tokens == 0 {
// 1 request is enough to test vision heads.
// Sending images on other queries messes up easily with truncation.
                inputs.push_str(&format!(
                    "![](data:image/jpeg;base64,{WARMUP_IMAGE_BASE64})",
                ));
}
requests.push(Request {
id: 0,
inputs,
// We truncate the input on the server side to be sure that it has the correct size
truncate,
// Set sampling parameters to also take these ops into account in the max memory
parameters: Some(NextTokenChooserParameters {
temperature: 0.9,
top_k: 10,
top_p: 0.9,
typical_p: 0.9,
do_sample: false,
seed: 0,
repetition_penalty: 1.2,
frequency_penalty: 0.1,
watermark: true,
grammar: String::new(),
grammar_type: GrammarType::None as i32,
}),
stopping_parameters: Some(StoppingCriteriaParameters {
max_new_tokens: max_total_tokens - truncate,
stop_sequences: vec![],
ignore_eos_token: true,
}),
prefill_logprobs: true,
top_n_tokens: 20,
});
n_tokens += max_input_length;
// Check max_batch_size
if Some(requests.len()) == max_batch_size {
break;
}
}
let batch = Batch {
id: 0,
size: requests.len() as u32,
requests,
max_tokens: 0,
};
let request = tonic::Request::new(WarmupRequest {
batch: Some(batch),
max_input_length,
max_prefill_tokens,
max_total_tokens,
})
.inject_context();
let response = self.stub.warmup(request).await?.into_inner();
Ok(response.max_supported_total_tokens)
}
/// Generate one token for each request in the given batch
///
/// Returns Generation for each request in batch
/// and the next cached batch
#[instrument(skip_all, fields(id = &batch.id, size = &batch.size))]
pub async fn prefill(
&mut self,
batch: Batch,
) -> Result<(Vec<Generation>, Option<CachedBatch>, PrefillTimings)> {
let request = tonic::Request::new(PrefillRequest { batch: Some(batch) }).inject_context();
let response = self.stub.prefill(request).await?.into_inner();
Ok((
response.generations,
response.batch,
PrefillTimings::new(response.forward_ns, response.decode_ns, response.total_ns),
))
}
/// Generate one token for each request in the given cached batches
///
/// Returns Generation for each request in batches
/// and the next cached batch
#[instrument(skip_all, fields(size = batches.iter().map(|batch|{batch.size}).sum::<u32>()))]
pub async fn decode(
&mut self,
batches: Vec<CachedBatch>,
) -> Result<(Vec<Generation>, Option<CachedBatch>, DecodeTimings)> {
let request = tonic::Request::new(DecodeRequest { batches }).inject_context();
let response = self.stub.decode(request).await?.into_inner();
Ok((
response.generations,
response.batch,
DecodeTimings::new(
response.concat_ns,
response.forward_ns,
response.decode_ns,
response.total_ns,
),
))
}
}
pub struct PrefillTimings {
pub forward: Duration,
pub decode: Duration,
pub total: Duration,
}
impl PrefillTimings {
fn new(forward_ns: u64, decode_ns: u64, total_ns: u64) -> Self {
Self {
forward: Duration::from_nanos(forward_ns),
decode: Duration::from_nanos(decode_ns),
total: Duration::from_nanos(total_ns),
}
}
}
pub struct DecodeTimings {
pub concat: Option<Duration>,
pub forward: Duration,
pub decode: Duration,
pub total: Duration,
}
impl DecodeTimings {
fn new(concat_ns: Option<u64>, forward_ns: u64, decode_ns: u64, total_ns: u64) -> Self {
Self {
concat: concat_ns.map(Duration::from_nanos),
forward: Duration::from_nanos(forward_ns),
decode: Duration::from_nanos(decode_ns),
total: Duration::from_nanos(total_ns),
}
}
}
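// Illustrative usage sketch (not part of the original file): connect to a shard over a
// unix socket, check its health, then warm it up. Assumes a tokio runtime and an
// existing socket path.
//
//     let mut client = Client::connect_uds("/tmp/text-generation-server-0".to_string()).await?;
//     client.health().await?;
//     let max_supported_total_tokens = client.warmup(1024, 4096, 2048, None).await?;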
| text-generation-inference/backends/client/src/v2/client.rs/0 | {
"file_path": "text-generation-inference/backends/client/src/v2/client.rs",
"repo_id": "text-generation-inference",
"token_count": 4125
} |
#include <ranges>
#include <nlohmann/json.hpp>
#include "backend.hpp"
#include "hardware.hpp"
namespace huggingface::tgi::backends::trtllm {
tle::ParallelConfig backend_workspace_t::parallel_config() const {
// Single engine (TP = PP = 1) -> using leader mode (no MPI involved)
const auto world_size = config_["/pretrained_config/mapping/world_size"_json_pointer].get<size_t>();
auto mode = tle::CommunicationMode::kLEADER;
std::optional<tle::OrchestratorConfig> orchestratorConfig = std::nullopt;
if (world_size > 1) {
SPDLOG_INFO("Detected sharded engine deployment, using orchestrator mode");
mode = tle::CommunicationMode::kORCHESTRATOR;
orchestratorConfig = std::make_optional<tle::OrchestratorConfig>(true, executor_worker_path_, nullptr,
true);
} else {
SPDLOG_INFO("Detected single engine deployment, using leader mode");
}
return tle::ParallelConfig(tle::CommunicationType::kMPI, mode, std::nullopt, std::nullopt, orchestratorConfig);
}
tle::ExecutorConfig backend_workspace_t::executor_config() const {
// Retrieve the compute capabilities to enable some options at runtime
const auto compute_capabilities = hardware::cuda::compute_capabilities_t();
// Allocate the config
tle::ExecutorConfig executor_config(/* maxBeamWidth = */ 1);
// Set the parallel config as inferred
executor_config.setParallelConfig(parallel_config());
// Define some configuration variables
executor_config.setKvCacheConfig(tle::KvCacheConfig(true));
executor_config.setEnableChunkedContext(compute_capabilities.is_at_least_ampere());
executor_config.setSchedulerConfig(tle::SchedulerConfig(tle::CapacitySchedulerPolicy::kMAX_UTILIZATION));
return executor_config;
}
backend_t::backend_t(std::filesystem::path &engines_folder, std::filesystem::path &executor_worker_path)
: workspace(engines_folder, executor_worker_path), executor_(executor_factory_initializer(workspace)) {}
size_t backend_t::num_tokens_ready() const noexcept {
return executor_.getNumResponsesReady();
}
std::expected<request_id_t, backend_error_t>
backend_t::submit(std::span<const token_id_t> token_ids, const generation_params_t g_params,
const sampling_params_t s_params) noexcept {
SPDLOG_DEBUG("Submit {:d} tokens for scheduling ({}, {})", token_ids.size(), g_params, s_params);
return executor_.enqueueRequest(tle::Request{
{token_ids.begin(), token_ids.end()}, // Making actual copy of the tokens
static_cast<tle::SizeType32>(g_params.max_new_tokens),
true,
(tle::SamplingConfig) s_params,
tle::OutputConfig{ /* returnLogProbs= */ true},
std::nullopt,
std::nullopt,
std::nullopt,
std::nullopt,
workspace.generation_config().stop_words
});
}
std::vector<tle::Response> backend_t::pull_tokens() noexcept {
SPDLOG_TRACE(FMT_STRING("Pulling out tokens ({:d} available)"), num_tokens_ready());
return executor_.awaitResponses();
}
void backend_t::cancel(request_id_t request_id) noexcept {
SPDLOG_TRACE(FMT_STRING("Cancelling request: {:d}"), request_id);
executor_.cancelRequest(request_id);
}
}
| text-generation-inference/backends/trtllm/csrc/backend.cpp/0 | {
"file_path": "text-generation-inference/backends/trtllm/csrc/backend.cpp",
"repo_id": "text-generation-inference",
"token_count": 1511
} |
use crate::block_allocator::{BlockAllocation, BlockAllocator};
use crate::client;
use crate::client::{
Batch, GrammarType, NextTokenChooserParameters, Request, StoppingCriteriaParameters,
};
use nohash_hasher::{BuildNoHashHasher, IntMap};
use std::cmp::max;
use std::collections::VecDeque;
use text_generation_router::infer::InferError;
use text_generation_router::infer::InferStreamResponse;
use text_generation_router::validation::{
Chunk, ChunksToString, ValidGenerateRequest, ValidGrammar, ValidParameters,
ValidStoppingParameters,
};
use tokio::sync::{mpsc, oneshot};
use tokio::time::Instant;
use tracing::{info_span, instrument, Instrument, Span};
/// Queue entry
#[derive(Debug)]
pub(crate) struct Entry {
/// Request
pub request: ValidGenerateRequest,
/// Response sender to communicate between the Infer struct and the batching_task
pub response_tx: mpsc::UnboundedSender<Result<InferStreamResponse, InferError>>,
/// Span that will live as long as entry
pub span: Span,
/// Temporary span used as a guard when logging inference, wait times...
pub temp_span: Option<Span>,
/// Instant when this entry was queued
pub queue_time: Instant,
/// Instant when this entry was added to a batch
pub batch_time: Option<Instant>,
/// Block Allocation
pub block_allocation: Option<BlockAllocation>,
}
/// Request Queue
#[derive(Debug, Clone)]
pub(crate) struct Queue {
/// Channel to communicate with the background queue task
queue_sender: mpsc::UnboundedSender<QueueCommand>,
}
impl Queue {
pub(crate) fn new(
requires_padding: bool,
block_size: u32,
prefix_caching: bool,
window_size: Option<u32>,
speculate: u32,
max_batch_total_tokens: u32,
support_chunking: bool,
) -> Self {
// Create channel
let (queue_sender, queue_receiver) = mpsc::unbounded_channel();
// Launch background queue task
tokio::spawn(queue_task(
requires_padding,
block_size,
prefix_caching,
window_size,
speculate,
max_batch_total_tokens,
support_chunking,
queue_receiver,
));
Self { queue_sender }
}
/// Append an entry to the queue
#[instrument(skip_all)]
pub(crate) fn append(&self, entry: Entry) {
// Send append command to the background task managing the state
// Unwrap is safe here
self.queue_sender
.send(QueueCommand::Append(Box::new(entry), Span::current()))
.unwrap();
}
// Get the next batch
#[instrument(skip(self))]
pub(crate) async fn next_batch(
&self,
min_size: Option<usize>,
max_size: Option<usize>,
prefill_token_budget: u32,
token_budget: u32,
) -> Option<NextBatch> {
if prefill_token_budget == 0 || token_budget == 0 {
return None;
};
// Create response channel
let (response_sender, response_receiver) = oneshot::channel();
// Send next batch command to the background task managing the state
// Unwrap is safe here
self.queue_sender
.send(QueueCommand::NextBatch {
min_size,
max_size,
prefill_token_budget,
token_budget,
response_sender,
span: Span::current(),
})
.unwrap();
// Await on response channel
// Unwrap is safe here
response_receiver.await.unwrap()
}
}
// Background task responsible for the queue state
#[allow(clippy::too_many_arguments)]
async fn queue_task(
requires_padding: bool,
block_size: u32,
prefix_caching: bool,
window_size: Option<u32>,
speculate: u32,
max_batch_total_tokens: u32,
support_chunking: bool,
mut receiver: mpsc::UnboundedReceiver<QueueCommand>,
) {
let mut state = State::new(
requires_padding,
block_size,
prefix_caching,
window_size,
speculate,
max_batch_total_tokens,
support_chunking,
);
while let Some(cmd) = receiver.recv().await {
match cmd {
QueueCommand::Append(entry, span) => {
span.in_scope(|| state.append(*entry));
metrics::gauge!("tgi_queue_size").increment(1.0);
}
QueueCommand::NextBatch {
min_size,
max_size,
prefill_token_budget,
token_budget,
response_sender,
span,
} => {
let next_batch = state
.next_batch(min_size, max_size, prefill_token_budget, token_budget)
.instrument(span)
.await;
response_sender.send(next_batch).unwrap();
metrics::gauge!("tgi_queue_size").set(state.entries.len() as f64);
}
}
}
}
/// Queue State
#[derive(Debug)]
struct State {
    /// Queue entries organized in a VecDeque
entries: VecDeque<(u64, Entry)>,
/// Id of the next entry
next_id: u64,
/// Id of the next batch
next_batch_id: u64,
/// Paged Attention block size
block_size: u32,
/// Speculation amount
speculate: u32,
/// Whether the model allow the prefill chunking
/// If it does, the last request in the batch will be split to exactly match the prefill
/// token budget
support_chunking: bool,
/// Paged Attention Block Allocation
block_allocator: Option<BlockAllocator>,
}
impl State {
fn new(
requires_padding: bool,
block_size: u32,
prefix_caching: bool,
window_size: Option<u32>,
speculate: u32,
max_batch_total_tokens: u32,
support_chunking: bool,
) -> Self {
let block_allocator = (!requires_padding).then(|| {
BlockAllocator::new(
max_batch_total_tokens,
block_size,
prefix_caching,
window_size,
)
});
Self {
entries: VecDeque::with_capacity(128),
next_id: 0,
next_batch_id: 0,
block_size,
speculate,
support_chunking,
block_allocator,
}
}
/// Append an entry to the queue
fn append(&mut self, mut entry: Entry) {
// Create a span that will live as long as the entry is in the queue waiting to be batched
let queue_span = info_span!(parent: &entry.span, "queued");
entry.temp_span = Some(queue_span);
// Push entry in the queue
self.entries.push_back((self.next_id, entry));
self.next_id += 1;
}
// Get the next batch
async fn next_batch(
&mut self,
min_size: Option<usize>,
max_size: Option<usize>,
prefill_token_budget: u32,
token_budget: u32,
) -> Option<NextBatch> {
if self.entries.is_empty() {
tracing::debug!("No queue");
return None;
}
// Check if we have enough entries
if let Some(min_size) = min_size {
if self.entries.len() < min_size {
tracing::debug!("Not enough entries");
return None;
}
}
if let Some(max_size) = max_size {
if max_size == 0 {
tracing::debug!("No capacity");
return None;
}
}
// Pad prefill_token_budget to be a multiple of block size
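        // e.g. with block_size = 16, a budget of 50 is rounded up to 64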
let prefill_token_budget = prefill_token_budget.div_ceil(self.block_size) * self.block_size;
// Create span for this batch to add context to inference calls
let next_batch_span = info_span!(parent: None, "batch", batch_size = tracing::field::Empty);
next_batch_span.follows_from(Span::current());
let mut batch = Vec::with_capacity(self.entries.len());
let mut max_input_length = 0;
let mut prefill_tokens: u32 = 0;
let mut decode_tokens: u32 = 0;
let mut max_blocks = 0;
// Pop entries starting from the front of the queue
'entry_loop: while let Some((id, entry)) = self.entries.pop_front() {
// Filter entries where the response receiver was dropped (== entries where the request
// was dropped by the client)
if entry.response_tx.is_closed() {
metrics::counter!("tgi_request_failure", "err" => "dropped").increment(1);
tracing::debug!("Dropping entry");
continue;
}
let block_allocation = match &self.block_allocator {
None => {
// We pad to max input length in the Python shards
                    // We need to take these padding tokens into account
max_input_length = max_input_length.max(entry.request.input_length);
prefill_tokens = (batch.len() + 1) as u32 * max_input_length;
decode_tokens += entry.request.stopping_parameters.max_new_tokens;
let total_tokens = prefill_tokens + decode_tokens + self.speculate;
if prefill_tokens > prefill_token_budget || total_tokens > token_budget {
// Entry is over budget
// Add it back to the front
tracing::debug!("Over budget: prefill_tokens={prefill_tokens} > {prefill_token_budget} || {prefill_tokens} + {decode_tokens} + {} > {token_budget}", self.speculate);
self.entries.push_front((id, entry));
break 'entry_loop;
}
None
}
Some(block_allocator) => {
// If users wants the prefill logprobs, we cannot reuse the cache.
// So no input_ids for the radix tree.
let input_ids = if entry.request.decoder_input_details {
None
} else {
entry.request.input_ids.clone()
};
let tokens = entry.request.input_length
+ entry.request.stopping_parameters.max_new_tokens
+ self.speculate
- 1;
tracing::debug!("Allocating {tokens} with {input_ids:?}");
let block_allocation = match block_allocator.allocate(tokens, input_ids).await {
None => {
// Entry is over budget
// Add it back to the front
tracing::debug!("Over budget: not enough free blocks");
self.entries.push_front((id, entry));
break 'entry_loop;
}
Some(mut block_allocation) => {
tracing::debug!("Allocation: {block_allocation:?}");
max_blocks = max(max_blocks, block_allocation.blocks.len() as u32);
if block_allocation.prefix_len == entry.request.input_length {
// The whole request was found in the radix trie
// However, for the transformer forward to work, we need to
// have at least one token of postfix.
block_allocation.prefix_len -= 1;
}
block_allocation
}
};
let postfix_len = entry.request.input_length - block_allocation.prefix_len;
if prefill_tokens + postfix_len > prefill_token_budget {
// Entry is over budget
if self.support_chunking {
// We support chunking, just set postfix_len to exactly match prefill_token_budget
let chunk_len = prefill_token_budget.saturating_sub(prefill_tokens);
if chunk_len > 0 {
// Push this entry inside the batch
batch.push((id, entry, Some(block_allocation), Some(chunk_len)));
} else {
// We cannot prefill even one token for this entry
// Add it back to the queue
self.entries.push_front((id, entry));
}
tracing::debug!(
"Matched budget: prefill_tokens={} == {prefill_token_budget}",
prefill_tokens + postfix_len
);
break 'entry_loop;
} else {
// We don't support chunking, this entry needs to go back to the buffer
// Add it back to the front
tracing::debug!(
"Over budget: prefill_tokens={} > {prefill_token_budget}",
prefill_tokens + postfix_len
);
self.entries.push_front((id, entry));
break 'entry_loop;
}
}
prefill_tokens += postfix_len;
Some(block_allocation)
}
};
batch.push((id, entry, block_allocation, None));
if Some(batch.len()) == max_size {
break;
}
}
// Empty batch
if batch.is_empty() {
            tracing::debug!("Filtered out all entries");
return None;
}
// XXX We haven't allocated yet, so we're allowed to ditch the results.
// Check if our batch is big enough
if let Some(min_size) = min_size {
// Batch is too small
if batch.len() < min_size {
// Add back entries to the queue in the correct order
for (id, entry, _, _) in batch.into_iter().rev() {
self.entries.push_front((id, entry));
}
return None;
}
}
let mut batch_requests = Vec::with_capacity(self.entries.len());
let mut batch_entries =
IntMap::with_capacity_and_hasher(self.entries.len(), BuildNoHashHasher::default());
for (id, mut entry, block_allocation, chunk_len) in batch {
// Create a new span to link the batch back to this entry
let entry_batch_span = info_span!(parent: &entry.span, "infer");
// Add relationships
next_batch_span.follows_from(&entry_batch_span);
entry_batch_span.follows_from(&next_batch_span);
// Update entry
entry.temp_span = Some(entry_batch_span);
let (blocks, slots, prefix_len) = match &block_allocation {
None => (Vec::new(), Vec::new(), 0),
Some(block_allocation) => (
block_allocation.blocks.clone(),
block_allocation.slots.clone(),
block_allocation.prefix_len,
),
};
entry.block_allocation = block_allocation;
batch_requests.push(Request {
id,
prefill_logprobs: entry.request.decoder_input_details,
input_chunks: Some(client::Input {
chunks: entry
.request
.inputs
.clone()
.into_iter()
.map(|c| client::InputChunk {
chunk: Some(match c {
Chunk::Text(text) => client::Chunk::Text(text),
Chunk::Image(image) => client::Chunk::Image(client::Image {
data: image.data,
mimetype: image.mimetype,
}),
}),
})
.collect(),
}),
inputs: entry.request.inputs.chunks_to_string(),
truncate: entry.request.truncate,
add_special_tokens: entry.request.add_special_tokens,
parameters: Some(NextTokenChooserParameters::from(
entry.request.parameters.clone(),
)),
stopping_parameters: Some(StoppingCriteriaParameters::from(
entry.request.stopping_parameters.clone(),
)),
top_n_tokens: entry.request.top_n_tokens,
blocks,
slots,
cache_len: prefix_len,
adapter_id: entry.request.adapter_id.clone(),
chunk_len,
});
// Set batch_time
entry.batch_time = Some(Instant::now());
// Insert in batch_entries IntMap
batch_entries.insert(id, entry);
}
// Final batch size
let size = batch_requests.len() as u32;
next_batch_span.record("batch_size", size);
let batch = Batch {
id: self.next_batch_id,
requests: batch_requests,
size,
max_tokens: (prefill_tokens + decode_tokens),
max_blocks,
};
// Increment batch id
self.next_batch_id += 1;
metrics::histogram!("tgi_batch_next_size").record(batch.size as f64);
Some((batch_entries, batch, next_batch_span))
}
}
type NextBatch = (IntMap<u64, Entry>, Batch, Span);
#[derive(Debug)]
enum QueueCommand {
Append(Box<Entry>, Span),
NextBatch {
min_size: Option<usize>,
max_size: Option<usize>,
prefill_token_budget: u32,
token_budget: u32,
response_sender: oneshot::Sender<Option<NextBatch>>,
span: Span,
},
}
impl From<ValidParameters> for NextTokenChooserParameters {
fn from(value: ValidParameters) -> Self {
let (grammar, grammar_type) = match value.grammar {
None => (String::new(), GrammarType::None),
Some(grammar) => match grammar {
ValidGrammar::Json(grammar_string) => (grammar_string, GrammarType::Json),
ValidGrammar::Regex(grammar_string) => (grammar_string, GrammarType::Regex),
},
};
Self {
temperature: value.temperature,
top_k: value.top_k,
top_p: value.top_p,
typical_p: value.typical_p,
do_sample: value.do_sample,
seed: value.seed,
repetition_penalty: value.repetition_penalty,
frequency_penalty: value.frequency_penalty,
watermark: value.watermark,
grammar,
grammar_type: grammar_type.into(),
}
}
}
impl From<ValidStoppingParameters> for StoppingCriteriaParameters {
fn from(value: ValidStoppingParameters) -> Self {
Self {
max_new_tokens: value.max_new_tokens,
stop_sequences: value.stop_sequences,
ignore_eos_token: value.ignore_eos_token,
}
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use super::*;
use tracing::info_span;
fn default_entry() -> (
Entry,
mpsc::UnboundedReceiver<Result<InferStreamResponse, InferError>>,
) {
let (response_tx, receiver_tx) = mpsc::unbounded_channel();
let entry = Entry {
request: ValidGenerateRequest {
inputs: vec![],
input_ids: Some(Arc::new(vec![])),
input_length: 1,
add_special_tokens: true,
truncate: 0,
decoder_input_details: false,
parameters: ValidParameters {
temperature: 0.0,
top_k: 0,
top_p: 0.0,
typical_p: 0.0,
do_sample: false,
seed: 0,
repetition_penalty: 0.0,
frequency_penalty: 0.0,
watermark: false,
grammar: None,
},
stopping_parameters: ValidStoppingParameters {
ignore_eos_token: false,
max_new_tokens: 1,
max_total_new_tokens: 1024,
stop_sequences: vec![],
},
top_n_tokens: 0,
adapter_id: None,
},
response_tx,
span: info_span!("entry"),
temp_span: None,
queue_time: Instant::now(),
batch_time: None,
block_allocation: None,
};
(entry, receiver_tx)
}
#[tokio::test]
async fn test_append() {
let mut state = State::new(false, 1, false, None, 0, 16, false);
let (entry, _guard) = default_entry();
assert_eq!(state.next_id, 0);
assert_eq!(state.entries.len(), 0);
state.append(entry);
assert_eq!(state.next_id, 1);
assert_eq!(state.entries.len(), 1);
let (id, _) = state.entries.remove(0).unwrap();
assert_eq!(id, 0);
}
#[tokio::test]
async fn test_next_batch_empty() {
let mut state = State::new(false, 1, false, None, 0, 16, false);
assert!(state.next_batch(None, None, 1, 1).await.is_none());
assert!(state.next_batch(Some(1), None, 1, 1).await.is_none());
}
#[tokio::test]
async fn test_next_batch_min_size() {
let mut state = State::new(false, 1, false, None, 0, 16, false);
let (entry1, _guard1) = default_entry();
let (entry2, _guard2) = default_entry();
state.append(entry1);
state.append(entry2);
let (entries, batch, _) = state.next_batch(None, None, 2, 2).await.unwrap();
assert_eq!(entries.len(), 2);
assert!(entries.contains_key(&0));
assert!(entries.contains_key(&1));
assert!(entries.get(&0).unwrap().batch_time.is_some());
assert!(entries.get(&1).unwrap().batch_time.is_some());
assert_eq!(batch.id, 0);
assert_eq!(batch.size, 2);
assert_eq!(state.next_id, 2);
assert_eq!(state.entries.len(), 0);
assert_eq!(state.next_batch_id, 1);
let (entry3, _guard3) = default_entry();
state.append(entry3);
assert!(state.next_batch(Some(2), None, 2, 2).await.is_none());
assert_eq!(state.next_id, 3);
assert_eq!(state.entries.len(), 1);
let (id, _) = state.entries.remove(0).unwrap();
assert_eq!(id, 2);
}
#[tokio::test]
async fn test_next_batch_max_size() {
let mut state = State::new(false, 1, false, None, 0, 16, false);
let (entry1, _guard1) = default_entry();
let (entry2, _guard2) = default_entry();
state.append(entry1);
state.append(entry2);
let (entries, batch, _) = state.next_batch(None, Some(1), 2, 2).await.unwrap();
assert_eq!(entries.len(), 1);
assert!(entries.contains_key(&0));
assert!(entries.get(&0).unwrap().batch_time.is_some());
assert_eq!(batch.id, 0);
assert_eq!(batch.size, 1);
assert_eq!(state.next_id, 2);
assert_eq!(state.entries.len(), 1);
assert_eq!(state.next_batch_id, 1);
}
#[tokio::test]
async fn test_next_batch_token_budget() {
let mut state = State::new(false, 1, false, None, 0, 16, false);
let (entry1, _guard1) = default_entry();
let (entry2, _guard2) = default_entry();
state.append(entry1);
state.append(entry2);
let (entries, batch, _) = state.next_batch(None, None, 1, 1).await.unwrap();
assert_eq!(entries.len(), 1);
assert!(entries.contains_key(&0));
assert_eq!(batch.id, 0);
assert_eq!(batch.size, 1);
assert_eq!(state.next_id, 2);
assert_eq!(state.entries.len(), 1);
assert_eq!(state.next_batch_id, 1);
let (entry3, _guard3) = default_entry();
state.append(entry3);
let (entries, batch, _) = state.next_batch(None, None, 3, 3).await.unwrap();
assert_eq!(entries.len(), 2);
assert!(entries.contains_key(&1));
assert!(entries.contains_key(&2));
assert_eq!(batch.id, 1);
assert_eq!(batch.size, 2);
assert_eq!(state.next_id, 3);
assert_eq!(state.entries.len(), 0);
assert_eq!(state.next_batch_id, 2);
}
#[tokio::test]
async fn test_queue_append() {
let queue = Queue::new(false, 1, false, None, 0, 16, false);
let (entry, _guard) = default_entry();
queue.append(entry);
}
#[tokio::test]
async fn test_queue_next_batch_empty() {
let queue = Queue::new(false, 1, false, None, 0, 16, false);
assert!(queue.next_batch(None, None, 1, 1).await.is_none());
assert!(queue.next_batch(Some(1), None, 1, 1).await.is_none());
}
#[tokio::test]
async fn test_queue_next_batch_min_size() {
let queue = Queue::new(false, 1, false, None, 0, 16, false);
let (entry1, _guard1) = default_entry();
let (entry2, _guard2) = default_entry();
queue.append(entry1);
queue.append(entry2);
let (entries, batch, _) = queue.next_batch(None, None, 2, 2).await.unwrap();
assert_eq!(entries.len(), 2);
assert!(entries.contains_key(&0));
assert!(entries.contains_key(&1));
assert!(entries.get(&0).unwrap().batch_time.is_some());
assert!(entries.get(&1).unwrap().batch_time.is_some());
assert_eq!(batch.id, 0);
assert_eq!(batch.size, 2);
let (entry3, _guard3) = default_entry();
queue.append(entry3);
// Not enough requests pending
assert!(queue.next_batch(Some(2), None, 2, 2).await.is_none());
// Not enough token budget
assert!(queue.next_batch(Some(1), None, 0, 0).await.is_none());
// Ok
let (entries2, batch2, _) = queue.next_batch(Some(1), None, 2, 2).await.unwrap();
assert_eq!(entries2.len(), 1);
assert!(entries2.contains_key(&2));
assert!(entries2.get(&2).unwrap().batch_time.is_some());
assert_eq!(batch2.id, 1);
assert_eq!(batch2.size, 1);
}
#[tokio::test]
async fn test_queue_next_batch_max_size() {
let queue = Queue::new(false, 1, false, None, 0, 16, false);
let (entry1, _guard1) = default_entry();
let (entry2, _guard2) = default_entry();
queue.append(entry1);
queue.append(entry2);
let (entries, batch, _) = queue.next_batch(None, Some(1), 2, 2).await.unwrap();
assert_eq!(entries.len(), 1);
assert!(entries.contains_key(&0));
assert!(entries.get(&0).unwrap().batch_time.is_some());
assert_eq!(batch.id, 0);
assert_eq!(batch.size, 1);
}
#[tokio::test]
async fn test_queue_next_batch_token_budget() {
let queue = Queue::new(false, 1, false, None, 0, 16, false);
let (entry1, _guard1) = default_entry();
let (entry2, _guard2) = default_entry();
queue.append(entry1);
queue.append(entry2);
let (entries, batch, _) = queue.next_batch(None, None, 1, 1).await.unwrap();
assert_eq!(entries.len(), 1);
assert!(entries.contains_key(&0));
assert_eq!(batch.id, 0);
assert_eq!(batch.size, 1);
let (entry3, _guard3) = default_entry();
queue.append(entry3);
let (entries, batch, _) = queue.next_batch(None, None, 3, 3).await.unwrap();
assert_eq!(entries.len(), 2);
assert!(entries.contains_key(&1));
assert!(entries.contains_key(&2));
assert_eq!(batch.id, 1);
assert_eq!(batch.size, 2);
}
#[tokio::test]
async fn test_queue_next_batch_token_speculate() {
let queue = Queue::new(true, 1, false, None, 2, 16, false);
let (entry1, _guard1) = default_entry();
let (entry2, _guard2) = default_entry();
queue.append(entry1);
queue.append(entry2);
// Budget of 1 is not enough
assert!(queue.next_batch(None, None, 1, 1).await.is_none());
let (entries, batch, _) = queue.next_batch(None, None, 6, 6).await.unwrap();
assert_eq!(entries.len(), 2);
assert!(entries.contains_key(&0));
assert!(entries.contains_key(&1));
assert_eq!(batch.id, 0);
assert_eq!(batch.size, 2);
}
#[tokio::test]
async fn test_queue_next_batch_dropped_receiver() {
let queue = Queue::new(false, 1, false, None, 0, 16, false);
let (entry, _) = default_entry();
queue.append(entry);
assert!(queue.next_batch(None, None, 1, 1).await.is_none());
}
}
| text-generation-inference/backends/v3/src/queue.rs/0 | {
"file_path": "text-generation-inference/backends/v3/src/queue.rs",
"repo_id": "text-generation-inference",
"token_count": 14884
} |
import pytest
from text_generation import __version__
from huggingface_hub.utils import build_hf_headers
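@pytest.fixture
def bloom_model():
    # Assumed model id: the `bloom_url` fixture below depends on a `bloom_model`
    # fixture that is otherwise missing from this file.
    return "bigscience/bloom"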
@pytest.fixture
def flan_t5_xxl():
return "google/flan-t5-xxl"
@pytest.fixture
def llama_7b():
return "meta-llama/Llama-2-7b-chat-hf"
@pytest.fixture
def fake_model():
return "fake/model"
@pytest.fixture
def unsupported_model():
return "gpt2"
@pytest.fixture
def base_url():
return "https://api-inference.huggingface.co/models"
@pytest.fixture
def bloom_url(base_url, bloom_model):
return f"{base_url}/{bloom_model}"
@pytest.fixture
def flan_t5_xxl_url(base_url, flan_t5_xxl):
return f"{base_url}/{flan_t5_xxl}"
@pytest.fixture
def llama_7b_url(base_url, llama_7b):
return f"{base_url}/{llama_7b}"
@pytest.fixture
def fake_url(base_url, fake_model):
return f"{base_url}/{fake_model}"
@pytest.fixture
def unsupported_url(base_url, unsupported_model):
return f"{base_url}/{unsupported_model}"
@pytest.fixture(scope="session")
def hf_headers():
return build_hf_headers(
library_name="text-generation-tests", library_version=__version__
)
| text-generation-inference/clients/python/tests/conftest.py/0 | {
"file_path": "text-generation-inference/clients/python/tests/conftest.py",
"repo_id": "text-generation-inference",
"token_count": 479
} |
# TensorRT-LLM backend
The NVIDIA TensorRT-LLM (TRTLLM) backend is a high-performance backend for LLMs
that uses NVIDIA's TensorRT library for inference acceleration.
It makes use of specific optimizations for NVIDIA GPUs, such as custom kernels.
To use the TRTLLM backend **you need to compile** `engines` for the models you want to use.
Each `engine` must be compiled for a given set of:
- GPU architecture that you will use for inference (e.g. A100, L40, etc.)
- Maximum batch size
- Maximum input length
- Maximum output length
- Maximum beams width
## Supported models
Check the [support matrix](https://nvidia.github.io/TensorRT-LLM/reference/support-matrix.html) to see which models are
supported.
## Compiling engines
You can use [Optimum-NVIDIA](https://github.com/huggingface/optimum-nvidia) to compile engines for the models you
want to use.
```bash
MODEL_NAME="meta-llama/Llama-3.1-8B-Instruct"
DESTINATION="/tmp/engines/$MODEL_NAME"
HF_TOKEN="hf_xxx"
# Compile the engine using Optimum-NVIDIA
# This will create a compiled engine in the /tmp/engines/meta-llama/Llama-3.1-8B-Instruct
# directory for 1 GPU
docker run \
--rm \
-it \
--gpus=1 \
--shm-size=1g \
-v "$DESTINATION":/engine \
-e HF_TOKEN=$HF_TOKEN \
-e HF_HUB_ENABLE_HF_TRANSFER=1 \
huggingface/optimum-nvidia:v0.1.0b9-py310 \
bash -c "optimum-cli export trtllm \
--tp=1 \
--pp=1 \
--max-batch-size=64 \
--max-input-length 4096 \
--max-output-length 8192 \
--max-beams-width=1 \
--destination /tmp/engine \
$MODEL_NAME && cp -rL /tmp/engine/* /engine/"
```
Your compiled engine will be saved in the `/tmp/engines/$MODEL_NAME` directory, in a subfolder named after the GPU used to compile the model.
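You can sanity-check the build output before serving it; the exact subfolder name depends on the GPU architecture that performed the compilation:
```bash
# List the compiled artifacts: expect one subfolder per GPU architecture,
# each containing an `engines/` directory
ls -R "$DESTINATION"
```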
## Using the TRTLLM backend
Run TGI-TRTLLM Docker image with the compiled engine:
```bash
MODEL_NAME="meta-llama/Llama-3.1-8B-Instruct"
DESTINATION="/tmp/engines/$MODEL_NAME"
HF_TOKEN="hf_xxx"
docker run \
--gpus 1 \
--shm-size=1g \
-it \
--rm \
-p 3000:3000 \
-e MODEL=$MODEL_NAME \
-e PORT=3000 \
-e HF_TOKEN=$HF_TOKEN \
-v "$DESTINATION"/<YOUR_GPU_ARCHITECTURE>/engines:/data \
ghcr.io/huggingface/text-generation-inference:latest-trtllm \
--model-id /data/ \
--tokenizer-name $MODEL_NAME
```
## Development
To develop TRTLLM backend, you can use [dev containers](https://containers.dev/) with the following `.devcontainer.json` file:
```json
{
"name": "CUDA",
"build": {
"dockerfile": "Dockerfile_trtllm",
"context": ".."
},
"remoteEnv": {
"PATH": "${containerEnv:PATH}:/usr/local/cuda/bin",
"LD_LIBRARY_PATH": "$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64",
"XLA_FLAGS": "--xla_gpu_cuda_data_dir=/usr/local/cuda"
},
"customizations" : {
"jetbrains" : {
"backend" : "CLion"
}
}
}
```
and `Dockerfile_trtllm`:
```Dockerfile
ARG cuda_arch_list="75-real;80-real;86-real;89-real;90-real"
ARG build_type=release
ARG ompi_version=4.1.7
# CUDA dependent dependencies resolver stage
FROM nvidia/cuda:12.6.3-cudnn-devel-ubuntu24.04 AS cuda-builder
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
build-essential \
cmake \
curl \
gcc-14 \
g++-14 \
git \
git-lfs \
lld \
libssl-dev \
libucx-dev \
libasan8 \
libubsan1 \
ninja-build \
pkg-config \
pipx \
python3 \
python3-dev \
python3-setuptools \
tar \
wget --no-install-recommends && \
pipx ensurepath
ENV TGI_INSTALL_PREFIX=/usr/local/tgi
ENV TENSORRT_INSTALL_PREFIX=/usr/local/tensorrt
# Install OpenMPI
FROM cuda-builder AS mpi-builder
WORKDIR /opt/src/mpi
ARG ompi_version
ENV OMPI_VERSION=${ompi_version}
ENV OMPI_TARBALL_FILENAME=openmpi-${OMPI_VERSION}.tar.bz2
ADD --checksum=sha256:54a33cb7ad81ff0976f15a6cc8003c3922f0f3d8ceed14e1813ef3603f22cd34 \
https://download.open-mpi.org/release/open-mpi/v4.1/${OMPI_TARBALL_FILENAME} .
RUN tar --strip-components=1 -xf ${OMPI_TARBALL_FILENAME} &&\
./configure --prefix=/usr/local/mpi --with-cuda=/usr/local/cuda --with-slurm && \
make -j all && \
make install && \
    rm -rf ${OMPI_TARBALL_FILENAME}
# Install TensorRT
FROM cuda-builder AS trt-builder
COPY backends/trtllm/scripts/install_tensorrt.sh /opt/install_tensorrt.sh
RUN chmod +x /opt/install_tensorrt.sh && \
/opt/install_tensorrt.sh
# Build Backend
FROM cuda-builder AS tgi-builder
WORKDIR /usr/src/text-generation-inference
# Scoped global args reuse
ARG cuda_arch_list
ARG build_type
ARG sccache_gha_enabled
ARG actions_cache_url
ARG actions_runtime_token
# Install Rust
ENV PATH="/root/.cargo/bin:$PATH"
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | bash -s -- -y && \
chmod -R a+w /root/.rustup && \
chmod -R a+w /root/.cargo && \
cargo install sccache --locked
ENV LD_LIBRARY_PATH="/usr/local/mpi/lib:$LD_LIBRARY_PATH"
ENV PKG_CONFIG_PATH="/usr/local/mpi/lib/pkgconfig"
ENV CMAKE_PREFIX_PATH="/usr/local/mpi:/usr/local/tensorrt"
ENV USE_LLD_LINKER=ON
ENV CUDA_ARCH_LIST=${cuda_arch_list}
```
| text-generation-inference/docs/source/backends/trtllm.md/0 | {
"file_path": "text-generation-inference/docs/source/backends/trtllm.md",
"repo_id": "text-generation-inference",
"token_count": 2127
} |
# HTTP API Reference
#### Table of Contents
- [Text Generation Inference custom API](#text-generation-inference-custom-api)
- [OpenAI Messages API](#openai-messages-api)
- [Making a Request](#making-a-request)
- [Streaming](#streaming)
- [Synchronous](#synchronous)
- [Hugging Face Inference Endpoints](#hugging-face-inference-endpoints)
- [Cloud Providers](#cloud-providers)
- [Amazon SageMaker](#amazon-sagemaker)
The HTTP API is a RESTful API that allows you to interact with the text-generation-inference component. Two endpoints are available:
* Text Generation Inference [custom API](https://huggingface.github.io/text-generation-inference/)
* OpenAI's [Messages API](#openai-messages-api)
## Text Generation Inference custom API
Check the [API documentation](https://huggingface.github.io/text-generation-inference/) for more information on how to interact with the Text Generation Inference API.
## OpenAI Messages API
Text Generation Inference (TGI) supports the Messages API, which is fully compatible with the OpenAI Chat Completion API. You can use OpenAI's client libraries, or third-party libraries expecting the OpenAI schema, to interact with TGI's Messages API. Below are some examples of how to use this compatibility.
> **Note:** The Messages API is supported from TGI version 1.4.0 and above. Ensure you are using a compatible version to access this feature.
### Making a Request
You can make a request to TGI's Messages API using `curl`. Here's an example:
```bash
curl localhost:3000/v1/chat/completions \
-X POST \
-d '{
"model": "tgi",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "What is deep learning?"
}
],
"stream": true,
"max_tokens": 20
}' \
-H 'Content-Type: application/json'
```
### Streaming
You can also use OpenAI's Python client library to make a streaming request. Here's how:
```python
from openai import OpenAI
# init the client but point it to TGI
client = OpenAI(
base_url="http://localhost:3000/v1",
api_key="-"
)
chat_completion = client.chat.completions.create(
model="tgi",
messages=[
{"role": "system", "content": "You are a helpful assistant." },
{"role": "user", "content": "What is deep learning?"}
],
stream=True
)
# iterate and print stream
for message in chat_completion:
print(message)
```
### Synchronous
If you prefer to make a synchronous request, you can do so like this:
```python
from openai import OpenAI
# init the client but point it to TGI
client = OpenAI(
base_url="http://localhost:3000/v1",
api_key="-"
)
chat_completion = client.chat.completions.create(
model="tgi",
messages=[
{"role": "system", "content": "You are a helpful assistant." },
{"role": "user", "content": "What is deep learning?"}
],
stream=False
)
print(chat_completion)
```
## Hugging Face Inference Endpoints
The Messages API is integrated with [Inference Endpoints](https://huggingface.co/inference-endpoints/dedicated).
Every endpoint that uses "Text Generation Inference" with an LLM that has a chat template can now be used. Below is an example of how to use Inference Endpoints with TGI using OpenAI's Python client library:
> **Note:** Make sure to replace `base_url` with your endpoint URL and to include `v1/` at the end of the URL. The `api_key` should be replaced with your Hugging Face API key.
```python
from openai import OpenAI
# init the client but point it to TGI
client = OpenAI(
# replace with your endpoint url, make sure to include "v1/" at the end
base_url="https://vlzz10eq3fol3429.us-east-1.aws.endpoints.huggingface.cloud/v1/",
# replace with your API key
api_key="hf_XXX"
)
chat_completion = client.chat.completions.create(
model="tgi",
messages=[
{"role": "system", "content": "You are a helpful assistant." },
{"role": "user", "content": "What is deep learning?"}
],
stream=True
)
# iterate and print stream
for message in chat_completion:
print(message.choices[0].delta.content, end="")
```
## Cloud Providers
TGI can be deployed on various cloud providers for scalable and robust text generation. One such provider is Amazon SageMaker, which supports TGI through the Hugging Face LLM inference containers. Here's how you can deploy TGI on Amazon SageMaker:
### Amazon SageMaker
Amazon SageMaker natively supports the Messages API:
```python
import json
import sagemaker
import boto3
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri
try:
role = sagemaker.get_execution_role()
except ValueError:
iam = boto3.client('iam')
role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'HuggingFaceH4/zephyr-7b-beta',
'SM_NUM_GPUS': json.dumps(1),
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
image_uri=get_huggingface_llm_image_uri("huggingface",version="3.1.0"),
env=hub,
role=role,
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1,
instance_type="ml.g5.2xlarge",
container_startup_health_check_timeout=300,
)
# send request
predictor.predict({
"messages": [
{"role": "system", "content": "You are a helpful assistant." },
{"role": "user", "content": "What is deep learning?"}
]
})
```
| text-generation-inference/docs/source/reference/api_reference.md/0 | {
"file_path": "text-generation-inference/docs/source/reference/api_reference.md",
"repo_id": "text-generation-inference",
"token_count": 1840
} |
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"logprobs": null,
"message": {
"content": "As of your last question, the weather in Brooklyn, New York, is typically hot and humid throughout the year. The suburbs around New York City are jealously sheltered, and at least in the Lower Bronx, there are very few outdoor environments to appreciate nature.\n\nIn terms of temperature, the warmest times of the year are from June to August, when average high temperatures typically range from around 73°F or 23°C",
"name": null,
"role": "assistant",
"tool_calls": null
},
"usage": null
}
],
"created": 1724792495,
"id": "",
"model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"object": "chat.completion",
"system_fingerprint": "2.2.1-dev0-native",
"usage": {
"completion_tokens": 100,
"prompt_tokens": 61,
"total_tokens": 161
}
}
| text-generation-inference/integration-tests/models/__snapshots__/test_chat_llama/test_flash_llama_simple.json/0 | {
"file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_chat_llama/test_flash_llama_simple.json",
"repo_id": "text-generation-inference",
"token_count": 364
} |
[
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [],
"seed": null,
"tokens": [
{
"id": 23090,
"logprob": -1.828125,
"special": false,
"text": " Hello"
},
{
"id": 23,
"logprob": -0.3178711,
"special": false,
"text": ","
},
{
"id": 8156,
"logprob": -0.23925781,
"special": false,
"text": " Daniel"
},
{
"id": 12,
"logprob": -0.5698242,
"special": false,
"text": "!"
},
{
"id": 193,
"logprob": -0.61279297,
"special": false,
"text": "\n"
},
{
"id": 23626,
"logprob": -0.4177246,
"special": false,
"text": "Daniel"
},
{
"id": 37,
"logprob": -0.0023345947,
"special": false,
"text": ":"
},
{
"id": 1634,
"logprob": -2.0605469,
"special": false,
"text": " What"
},
{
"id": 18,
"logprob": -1.5283203,
"special": false,
"text": "'"
},
{
"id": 94,
"logprob": -0.007965088,
"special": false,
"text": "s"
}
]
},
"generated_text": " Hello, Daniel!\nDaniel: What's"
},
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [],
"seed": null,
"tokens": [
{
"id": 23090,
"logprob": -1.8251953,
"special": false,
"text": " Hello"
},
{
"id": 23,
"logprob": -0.31762695,
"special": false,
"text": ","
},
{
"id": 8156,
"logprob": -0.2388916,
"special": false,
"text": " Daniel"
},
{
"id": 12,
"logprob": -0.5698242,
"special": false,
"text": "!"
},
{
"id": 193,
"logprob": -0.6152344,
"special": false,
"text": "\n"
},
{
"id": 23626,
"logprob": -0.42211914,
"special": false,
"text": "Daniel"
},
{
"id": 37,
"logprob": -0.002336502,
"special": false,
"text": ":"
},
{
"id": 1634,
"logprob": -2.0605469,
"special": false,
"text": " What"
},
{
"id": 18,
"logprob": -1.5292969,
"special": false,
"text": "'"
},
{
"id": 94,
"logprob": -0.007926941,
"special": false,
"text": "s"
}
]
},
"generated_text": " Hello, Daniel!\nDaniel: What's"
},
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [],
"seed": null,
"tokens": [
{
"id": 23090,
"logprob": -1.8251953,
"special": false,
"text": " Hello"
},
{
"id": 23,
"logprob": -0.31762695,
"special": false,
"text": ","
},
{
"id": 8156,
"logprob": -0.2388916,
"special": false,
"text": " Daniel"
},
{
"id": 12,
"logprob": -0.5698242,
"special": false,
"text": "!"
},
{
"id": 193,
"logprob": -0.6152344,
"special": false,
"text": "\n"
},
{
"id": 23626,
"logprob": -0.42211914,
"special": false,
"text": "Daniel"
},
{
"id": 37,
"logprob": -0.002336502,
"special": false,
"text": ":"
},
{
"id": 1634,
"logprob": -2.0605469,
"special": false,
"text": " What"
},
{
"id": 18,
"logprob": -1.5292969,
"special": false,
"text": "'"
},
{
"id": 94,
"logprob": -0.007926941,
"special": false,
"text": "s"
}
]
},
"generated_text": " Hello, Daniel!\nDaniel: What's"
},
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [],
"seed": null,
"tokens": [
{
"id": 23090,
"logprob": -1.8251953,
"special": false,
"text": " Hello"
},
{
"id": 23,
"logprob": -0.31762695,
"special": false,
"text": ","
},
{
"id": 8156,
"logprob": -0.2388916,
"special": false,
"text": " Daniel"
},
{
"id": 12,
"logprob": -0.5698242,
"special": false,
"text": "!"
},
{
"id": 193,
"logprob": -0.6152344,
"special": false,
"text": "\n"
},
{
"id": 23626,
"logprob": -0.42211914,
"special": false,
"text": "Daniel"
},
{
"id": 37,
"logprob": -0.002336502,
"special": false,
"text": ":"
},
{
"id": 1634,
"logprob": -2.0605469,
"special": false,
"text": " What"
},
{
"id": 18,
"logprob": -1.5292969,
"special": false,
"text": "'"
},
{
"id": 94,
"logprob": -0.007926941,
"special": false,
"text": "s"
}
]
},
"generated_text": " Hello, Daniel!\nDaniel: What's"
}
]
| text-generation-inference/integration-tests/models/__snapshots__/test_flash_falcon/test_flash_falcon_load.json/0 | {
"file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_falcon/test_flash_falcon_load.json",
"repo_id": "text-generation-inference",
"token_count": 3967
} |
{
"details": {
"best_of_sequences": null,
"finish_reason": "stop_sequence",
"generated_tokens": 5,
"prefill": [],
"seed": 0,
"tokens": [
{
"id": 5229,
"logprob": -2.5839844,
"special": false,
"text": " failed"
},
{
"id": 29901,
"logprob": -0.44970703,
"special": false,
"text": ":"
},
{
"id": 4829,
"logprob": -1.8339844,
"special": false,
"text": " Error"
},
{
"id": 297,
"logprob": -1.0556641,
"special": false,
"text": " in"
},
{
"id": 1243,
"logprob": 0.0,
"special": false,
"text": " test"
}
],
"top_tokens": null
},
"generated_text": "Test request failed: Error in test"
}
| text-generation-inference/integration-tests/models/__snapshots__/test_flash_llama/test_flash_llama_all_params.json/0 | {
"file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_llama/test_flash_llama_all_params.json",
"repo_id": "text-generation-inference",
"token_count": 487
} |
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 10,
"prefill": [],
"seed": 0,
"tokens": [
{
"id": 5229,
"logprob": -1.2607422,
"special": false,
"text": " failed"
},
{
"id": 29901,
"logprob": 0.0,
"special": false,
"text": ":"
},
{
"id": 6527,
"logprob": -0.11450195,
"special": false,
"text": " Could"
},
{
"id": 451,
"logprob": 0.0,
"special": false,
"text": " not"
},
{
"id": 4511,
"logprob": -0.2286377,
"special": false,
"text": " connect"
},
{
"id": 304,
"logprob": 0.0,
"special": false,
"text": " to"
},
{
"id": 1923,
"logprob": -1.2568359,
"special": false,
"text": " server"
},
{
"id": 13,
"logprob": 0.0,
"special": false,
"text": "\n"
},
{
"id": 13,
"logprob": -0.15905762,
"special": false,
"text": "\n"
},
{
"id": 29902,
"logprob": -0.21618652,
"special": false,
"text": "I"
}
],
"top_tokens": null
},
"generated_text": "Test request failed: Could not connect to server\n\nI"
}
| text-generation-inference/integration-tests/models/__snapshots__/test_flash_llama_marlin/test_flash_llama_marlin_all_params.json/0 | {
"file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_llama_marlin/test_flash_llama_marlin_all_params.json",
"repo_id": "text-generation-inference",
"token_count": 855
} |
{
"details": {
"best_of_sequences": null,
"finish_reason": "length",
"generated_tokens": 60,
"prefill": [],
"seed": 0,
"tokens": [
{
"id": 2284,
"logprob": -0.31323242,
"special": false,
"text": "():"
},
{
"id": 303,
"logprob": 0.0,
"special": false,
"text": "\n "
},
{
"id": 1489,
"logprob": 0.0,
"special": false,
"text": " print"
},
{
"id": 459,
"logprob": 0.0,
"special": false,
"text": "(\""
},
{
"id": 8302,
"logprob": -0.26611328,
"special": false,
"text": "Hello"
},
{
"id": 10914,
"logprob": -0.7871094,
"special": false,
"text": " World"
},
{
"id": 16013,
"logprob": -0.64746094,
"special": false,
"text": "!\")"
},
{
"id": 222,
"logprob": -0.054870605,
"special": false,
"text": "\n"
},
{
"id": 222,
"logprob": 0.0,
"special": false,
"text": "\n"
},
{
"id": 610,
"logprob": -0.41064453,
"special": false,
"text": "def"
},
{
"id": 1489,
"logprob": 0.0,
"special": false,
"text": " print"
},
{
"id": 100,
"logprob": 0.0,
"special": false,
"text": "_"
},
{
"id": 7670,
"logprob": 0.0,
"special": false,
"text": "hello"
},
{
"id": 100,
"logprob": 0.0,
"special": false,
"text": "_"
},
{
"id": 444,
"logprob": -0.21655273,
"special": false,
"text": "name"
},
{
"id": 45,
"logprob": 0.0,
"special": false,
"text": "("
},
{
"id": 444,
"logprob": 0.0,
"special": false,
"text": "name"
},
{
"id": 731,
"logprob": 0.0,
"special": false,
"text": "):"
},
{
"id": 303,
"logprob": 0.0,
"special": false,
"text": "\n "
},
{
"id": 1489,
"logprob": 0.0,
"special": false,
"text": " print"
},
{
"id": 459,
"logprob": 0.0,
"special": false,
"text": "(\""
},
{
"id": 8302,
"logprob": 0.0,
"special": false,
"text": "Hello"
},
{
"id": 332,
"logprob": -0.034698486,
"special": false,
"text": " \""
},
{
"id": 494,
"logprob": 0.0,
"special": false,
"text": " +"
},
{
"id": 655,
"logprob": 0.0,
"special": false,
"text": " name"
},
{
"id": 494,
"logprob": -0.20141602,
"special": false,
"text": " +"
},
{
"id": 332,
"logprob": 0.0,
"special": false,
"text": " \""
},
{
"id": 16013,
"logprob": 0.0,
"special": false,
"text": "!\")"
},
{
"id": 222,
"logprob": 0.0,
"special": false,
"text": "\n"
},
{
"id": 222,
"logprob": 0.0,
"special": false,
"text": "\n"
},
{
"id": 610,
"logprob": 0.0,
"special": false,
"text": "def"
},
{
"id": 1489,
"logprob": 0.0,
"special": false,
"text": " print"
},
{
"id": 100,
"logprob": 0.0,
"special": false,
"text": "_"
},
{
"id": 7670,
"logprob": 0.0,
"special": false,
"text": "hello"
},
{
"id": 100,
"logprob": 0.0,
"special": false,
"text": "_"
},
{
"id": 444,
"logprob": 0.0,
"special": false,
"text": "name"
},
{
"id": 100,
"logprob": 0.0,
"special": false,
"text": "_"
},
{
"id": 400,
"logprob": 0.0,
"special": false,
"text": "age"
},
{
"id": 45,
"logprob": 0.0,
"special": false,
"text": "("
},
{
"id": 444,
"logprob": 0.0,
"special": false,
"text": "name"
},
{
"id": 49,
"logprob": 0.0,
"special": false,
"text": ","
},
{
"id": 11505,
"logprob": 0.0,
"special": false,
"text": " age"
},
{
"id": 731,
"logprob": 0.0,
"special": false,
"text": "):"
},
{
"id": 303,
"logprob": 0.0,
"special": false,
"text": "\n "
},
{
"id": 1489,
"logprob": 0.0,
"special": false,
"text": " print"
},
{
"id": 459,
"logprob": 0.0,
"special": false,
"text": "(\""
},
{
"id": 8302,
"logprob": 0.0,
"special": false,
"text": "Hello"
},
{
"id": 332,
"logprob": 0.0,
"special": false,
"text": " \""
},
{
"id": 494,
"logprob": 0.0,
"special": false,
"text": " +"
},
{
"id": 655,
"logprob": 0.0,
"special": false,
"text": " name"
},
{
"id": 494,
"logprob": 0.0,
"special": false,
"text": " +"
},
{
"id": 3021,
"logprob": -0.5761719,
"special": false,
"text": " \","
},
{
"id": 863,
"logprob": 0.0,
"special": false,
"text": " you"
},
{
"id": 904,
"logprob": 0.0,
"special": false,
"text": " are"
},
{
"id": 332,
"logprob": 0.0,
"special": false,
"text": " \""
},
{
"id": 494,
"logprob": 0.0,
"special": false,
"text": " +"
},
{
"id": 615,
"logprob": 0.0,
"special": false,
"text": " str"
},
{
"id": 45,
"logprob": 0.0,
"special": false,
"text": "("
},
{
"id": 400,
"logprob": 0.0,
"special": false,
"text": "age"
},
{
"id": 46,
"logprob": 0.0,
"special": false,
"text": ")"
}
],
"top_tokens": null
},
"generated_text": "():\n print(\"Hello World!\")\n\ndef print_hello_name(name):\n print(\"Hello \" + name + \"!\")\n\ndef print_hello_name_age(name, age):\n print(\"Hello \" + name + \", you are \" + str(age)"
}
| text-generation-inference/integration-tests/models/__snapshots__/test_flash_starcoder2/test_flash_starcoder2_default_params.json/0 | {
"file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_flash_starcoder2/test_flash_starcoder2_default_params.json",
"repo_id": "text-generation-inference",
"token_count": 4512
} |
{
"choices": [
{
"finish_reason": "eos_token",
"index": 0,
"logprobs": null,
"message": {
"content": null,
"name": null,
"role": "assistant",
"tool_calls": [
{
"function": {
"arguments": {
"format": "celsius",
"location": "New York, NY"
},
"description": null,
"name": "get_current_weather"
},
"id": 0,
"type": "function"
}
]
},
"usage": null
}
],
"created": 1712852394,
"id": "",
"model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"object": "text_completion",
"system_fingerprint": "2.0.1-native",
"usage": {
"completion_tokens": 48,
"prompt_tokens": 320,
"total_tokens": 368
}
}
| text-generation-inference/integration-tests/models/__snapshots__/test_tools_llama/test_flash_llama_grammar_tools_choice.json/0 | {
"file_path": "text-generation-inference/integration-tests/models/__snapshots__/test_tools_llama/test_flash_llama_grammar_tools_choice.json",
"repo_id": "text-generation-inference",
"token_count": 495
} |
import pytest
@pytest.fixture(scope="module")
def compressed_tensors_wna16_int_24_handle(launcher):
with launcher(
"danieldk/Llama-3.1-8B-w4a16-int-24",
num_shard=2,
quantize="compressed-tensors",
) as handle:
yield handle
@pytest.fixture(scope="module")
async def compressed_tensors_wna16_int_24(compressed_tensors_wna16_int_24_handle):
await compressed_tensors_wna16_int_24_handle.health(300)
return compressed_tensors_wna16_int_24_handle.client
@pytest.mark.release
@pytest.mark.asyncio
@pytest.mark.private
async def test_compressed_tensors_wna16_int_24(
compressed_tensors_wna16_int_24, response_snapshot
):
response = await compressed_tensors_wna16_int_24.generate(
"What is deep learning?",
max_new_tokens=10,
decoder_input_details=True,
)
assert (
response.generated_text
== "Deep learning is a subset of machine learning that uses"
)
assert response.details.generated_tokens == 10
assert response == response_snapshot
@pytest.mark.release
@pytest.mark.asyncio
@pytest.mark.private
async def test_compressed_tensors_wna16_int_24_all_params(
compressed_tensors_wna16_int_24, response_snapshot
):
response = await compressed_tensors_wna16_int_24.generate(
"What is deep learning",
max_new_tokens=10,
repetition_penalty=1.2,
return_full_text=True,
stop_sequences=["test"],
temperature=0.5,
top_p=0.9,
top_k=10,
truncate=5,
typical_p=0.9,
watermark=True,
decoder_input_details=True,
seed=0,
)
assert response.details.generated_tokens == 10
assert (
response.generated_text
== "What is deep learning?\nDeep learning (DL) is a subset of"
)
assert response == response_snapshot
@pytest.mark.release
@pytest.mark.asyncio
@pytest.mark.private
async def test_compressed_tensors_wna16_int_24_load(
compressed_tensors_wna16_int_24, generate_load, response_snapshot
):
responses = await generate_load(
compressed_tensors_wna16_int_24,
"What is deep learning?",
max_new_tokens=10,
n=4,
)
assert (
responses[0].generated_text
== "Deep learning is a subset of machine learning that uses"
)
assert len(responses) == 4
assert all([r.generated_text == responses[0].generated_text for r in responses])
assert responses == response_snapshot
| text-generation-inference/integration-tests/models/test_compressed_tensors_wna16_int_24.py/0 | {
"file_path": "text-generation-inference/integration-tests/models/test_compressed_tensors_wna16_int_24.py",
"repo_id": "text-generation-inference",
"token_count": 1080
} |
import pytest
@pytest.fixture(scope="module")
def flash_llama_marlin_handle(launcher):
with launcher(
"neuralmagic/llama-2-7b-chat-marlin", num_shard=2, quantize="marlin"
) as handle:
yield handle
@pytest.fixture(scope="module")
async def flash_llama_marlin(flash_llama_marlin_handle):
await flash_llama_marlin_handle.health(300)
return flash_llama_marlin_handle.client
@pytest.mark.release
@pytest.mark.asyncio
@pytest.mark.private
async def test_flash_llama_marlin(flash_llama_marlin, response_snapshot):
response = await flash_llama_marlin.generate(
"Test request", max_new_tokens=10, decoder_input_details=True
)
assert response.details.generated_tokens == 10
assert response == response_snapshot
@pytest.mark.release
@pytest.mark.asyncio
@pytest.mark.private
async def test_flash_llama_marlin_all_params(flash_llama_marlin, response_snapshot):
response = await flash_llama_marlin.generate(
"Test request",
max_new_tokens=10,
repetition_penalty=1.2,
return_full_text=True,
temperature=0.5,
top_p=0.9,
top_k=10,
truncate=5,
typical_p=0.9,
watermark=True,
decoder_input_details=True,
seed=0,
)
assert response.details.generated_tokens == 10
assert response == response_snapshot
@pytest.mark.release
@pytest.mark.asyncio
@pytest.mark.private
async def test_flash_llama_marlin_load(
flash_llama_marlin, generate_load, response_snapshot
):
responses = await generate_load(
flash_llama_marlin, "Test request", max_new_tokens=10, n=4
)
assert len(responses) == 4
assert all([r.generated_text == responses[0].generated_text for r in responses])
assert responses == response_snapshot
| text-generation-inference/integration-tests/models/test_flash_llama_marlin.py/0 | {
"file_path": "text-generation-inference/integration-tests/models/test_flash_llama_marlin.py",
"repo_id": "text-generation-inference",
"token_count": 748
} |
import pytest
@pytest.fixture(scope="module")
def flash_qwen2_vl_handle(launcher):
with launcher("Qwen/Qwen2-VL-7B-Instruct") as handle:
yield handle
@pytest.fixture(scope="module")
async def flash_qwen2(flash_qwen2_vl_handle):
await flash_qwen2_vl_handle.health(300)
return flash_qwen2_vl_handle.client
@pytest.mark.private
async def test_flash_qwen2_vl_simple(flash_qwen2, response_snapshot):
response = await flash_qwen2.chat(
seed=42,
messages=[
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"
},
},
{"type": "text", "text": "Describe this image."},
],
},
],
)
assert (
response.choices[0].message.content
== "The image depicts an anthropomorphic rabbit, wearing a spacesuit, standing in a barren, rocky landscape that resembles the surface of another planet, possibly Mars. The rabbit has a red digestive system label on its chest, and the surrounding environment features red sandy terrain and a hazy, floating planet or moon in the background. The scene has a surreal, fantastical quality, blending elements of science fiction and space exploration with a whimsical character."
)
assert response == response_snapshot
@pytest.mark.private
async def test_flash_qwen2_vl_simple_streaming(flash_qwen2, response_snapshot):
responses = await flash_qwen2.chat(
seed=42,
messages=[
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"
},
},
{"type": "text", "text": "Describe this image."},
],
},
],
stream=True,
)
count = 0
generated = ""
last_response = None
async for response in responses:
count += 1
generated += response.choices[0].delta.content
last_response = response
assert (
generated
== "The image depicts an anthropomorphic rabbit, wearing a spacesuit, standing in a barren, rocky landscape that resembles the surface of another planet, possibly Mars. The rabbit has a red digestive system label on its chest, and the surrounding environment features red sandy terrain and a hazy, floating planet or moon in the background. The scene has a surreal, fantastical quality, blending elements of science fiction and space exploration with a whimsical character."
)
assert count == 89
assert last_response == response_snapshot
@pytest.mark.private
async def test_flash_qwen2_vl_bay(flash_qwen2, response_snapshot):
response = await flash_qwen2.chat(
seed=42,
messages=[
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
},
},
{"type": "text", "text": "Describe the image"},
],
},
],
)
assert response == response_snapshot
@pytest.mark.private
async def test_flash_qwen2_vl_inpaint(flash_qwen2, response_snapshot):
response = await flash_qwen2.chat(
seed=42,
messages=[
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/autopipeline-inpaint.png"
},
},
{"type": "text", "text": "Describe the image"},
],
},
],
)
assert response == response_snapshot
| text-generation-inference/integration-tests/models/test_flash_qwen2_vl.py/0 | {
"file_path": "text-generation-inference/integration-tests/models/test_flash_qwen2_vl.py",
"repo_id": "text-generation-inference",
"token_count": 2158
} |
import pytest
@pytest.fixture(scope="module")
def mt0_base_handle(launcher):
with launcher("bigscience/mt0-base") as handle:
yield handle
@pytest.fixture(scope="module")
async def mt0_base(mt0_base_handle):
await mt0_base_handle.health(300)
return mt0_base_handle.client
@pytest.mark.release
@pytest.mark.asyncio
async def test_mt0_base(mt0_base, response_snapshot):
response = await mt0_base.generate(
"Why is the sky blue?",
max_new_tokens=10,
top_p=0.9,
decoder_input_details=True,
seed=0,
)
assert response.details.generated_tokens == 5
assert response == response_snapshot
@pytest.mark.release
@pytest.mark.asyncio
async def test_mt0_base_all_params(mt0_base, response_snapshot):
response = await mt0_base.generate(
"Why is the sky blue?",
max_new_tokens=10,
repetition_penalty=1.2,
return_full_text=True,
stop_sequences=["test"],
temperature=0.5,
top_p=0.9,
top_k=10,
truncate=5,
typical_p=0.9,
watermark=True,
decoder_input_details=True,
seed=0,
)
assert response.details.generated_tokens == 10
assert response == response_snapshot
@pytest.mark.release
@pytest.mark.asyncio
async def test_mt0_base_load(mt0_base, generate_load, response_snapshot):
responses = await generate_load(
mt0_base,
"Why is the sky blue?",
max_new_tokens=10,
n=4,
)
assert len(responses) == 4
assert all([r.generated_text == responses[0].generated_text for r in responses])
assert responses == response_snapshot
| text-generation-inference/integration-tests/models/test_mt0_base.py/0 | {
"file_path": "text-generation-inference/integration-tests/models/test_mt0_base.py",
"repo_id": "text-generation-inference",
"token_count": 737
} |
syntax = "proto3";
package generate.v2;
service TextGenerationService {
/// Model Info
rpc Info (InfoRequest) returns (InfoResponse) {}
/// Service discovery
rpc ServiceDiscovery (ServiceDiscoveryRequest) returns (ServiceDiscoveryResponse) {}
/// Empties batch cache
rpc ClearCache (ClearCacheRequest) returns (ClearCacheResponse);
/// Remove requests from a cached batch
rpc FilterBatch (FilterBatchRequest) returns (FilterBatchResponse);
/// Warmup the model and compute max cache size
rpc Warmup (WarmupRequest) returns (WarmupResponse);
/// Prefill batch and decode first token
rpc Prefill (PrefillRequest) returns (PrefillResponse);
/// Decode token for a list of prefilled batches
rpc Decode (DecodeRequest) returns (DecodeResponse);
/// Health check
rpc Health (HealthRequest) returns (HealthResponse);
}
message HealthRequest {}
message HealthResponse {}
/// Empty request
message InfoRequest {}
message InfoResponse {
bool requires_padding = 1;
string dtype = 2;
string device_type = 3;
optional uint32 window_size = 4;
uint32 speculate = 5;
}
/// Empty request
message ServiceDiscoveryRequest {}
message ServiceDiscoveryResponse {
/// Other shards urls
repeated string urls = 1;
}
message ClearCacheRequest {
/// Optional batch id
optional uint64 id = 1;
}
/// Empty response
message ClearCacheResponse {}
enum GrammarType {
GRAMMAR_TYPE_NONE = 0;
GRAMMAR_TYPE_JSON = 1;
GRAMMAR_TYPE_REGEX = 2;
}
message NextTokenChooserParameters {
/// exponential scaling output probability distribution
float temperature = 1;
/// restricting to the k highest probability elements
uint32 top_k = 2;
/// restricting to top tokens summing to prob_cut_off <= prob_cut_off
float top_p = 3;
/// restricting to top tokens summing to prob_cut_off <= prob_cut_off
float typical_p = 4;
/// apply sampling on the logits
bool do_sample = 5;
/// random seed for sampling
uint64 seed = 6;
/// repetition penalty
float repetition_penalty = 7;
/// frequency penalty
float frequency_penalty = 9;
/// token watermarking using "A Watermark for Large Language Models"
bool watermark = 8;
/// grammar (applied if not empty)
string grammar = 10;
/// grammar type
GrammarType grammar_type = 11;
}
message StoppingCriteriaParameters {
/// Maximum number of generated tokens
uint32 max_new_tokens = 1;
/// Optional stopping sequences
repeated string stop_sequences = 2;
/// Ignore end of sequence token
/// used for benchmarking
bool ignore_eos_token = 3;
}
message Request {
/// Request ID
uint64 id = 1;
/// The generation context
string inputs = 2;
/// Context truncation
uint32 truncate = 3;
/// Next Token Chooser Parameters
NextTokenChooserParameters parameters = 4;
/// Stopping Criteria Parameters
StoppingCriteriaParameters stopping_parameters = 5;
/// Return prefill logprobs
bool prefill_logprobs = 6;
/// Return most likely n tokens
uint32 top_n_tokens = 7;
}
message Batch {
/// Batch ID
uint64 id = 1;
/// Individual requests
repeated Request requests = 2;
/// Batch size (==len(requests))
uint32 size = 3;
/// Maximum number of tokens this batch will grow to
uint32 max_tokens = 4;
}
message CachedBatch {
/// Batch ID
uint64 id = 1;
/// Individual requests ids
repeated uint64 request_ids = 2;
/// Batch size (==len(requests))
uint32 size = 3;
/// Maximum number of tokens this batch will grow to
uint32 max_tokens = 4;
}
enum FinishReason {
FINISH_REASON_LENGTH = 0;
FINISH_REASON_EOS_TOKEN = 1;
FINISH_REASON_STOP_SEQUENCE = 2;
}
message GeneratedText {
/// Output
string text = 1;
/// Number of generated tokens
uint32 generated_tokens = 2;
/// Finish reason
FinishReason finish_reason = 3;
/// Seed
optional uint64 seed = 4;
}
message Tokens {
/// Token IDs
repeated uint32 ids = 1;
/// Logprobs
repeated float logprobs = 2;
/// tokens
repeated string texts = 3;
/// special
repeated bool is_special = 4;
}
message Generation {
/// Request ID
uint64 request_id = 1;
/// Prefill tokens (optional)
Tokens prefill_tokens = 2;
Tokens tokens = 3;
/// Complete generated text
optional GeneratedText generated_text = 4;
/// Top tokens
repeated Tokens top_tokens = 5;
}
message FilterBatchRequest {
/// Batch ID
uint64 batch_id = 1;
/// Requests to keep
repeated uint64 request_ids = 2;
}
message FilterBatchResponse {
/// Filtered Batch (cached)
CachedBatch batch = 1;
}
message PrefillRequest {
/// Batch
Batch batch = 1;
}
message PrefillResponse {
/// Generation
repeated Generation generations = 1;
/// Next batch (cached)
optional CachedBatch batch = 2;
/// Forward elapsed time in nanoseconds
uint64 forward_ns = 3;
/// Decode elapsed time in nanoseconds
uint64 decode_ns = 4;
/// Total elapsed time in nanoseconds
uint64 total_ns = 5;
}
message DecodeRequest {
/// Cached batches
repeated CachedBatch batches = 1;
}
message DecodeResponse {
/// Decodes
repeated Generation generations = 1;
/// Next batch (cached)
optional CachedBatch batch = 2;
/// Forward elapsed time in nanoseconds
uint64 forward_ns = 3;
/// Decode elapsed time in nanoseconds
uint64 decode_ns = 4;
/// Total elapsed time in nanoseconds
uint64 total_ns = 5;
/// Concatenate elapsed time in nanoseconds
optional uint64 concat_ns = 6;
}
message WarmupRequest {
/// Batch to warmup on
Batch batch = 1;
uint32 max_input_length = 2;
uint32 max_prefill_tokens = 3;
uint32 max_total_tokens = 4;
}
message WarmupResponse {
/// Maximum number of tokens supported by the model
optional uint32 max_supported_total_tokens = 1;
}
| text-generation-inference/proto/generate.proto/0 | {
"file_path": "text-generation-inference/proto/generate.proto",
"repo_id": "text-generation-inference",
"token_count": 2074
} |
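As a rough usage sketch, the Python stubs generated from the proto above (`generate_pb2`, `generate_pb2_grpc`) can drive the service as follows; the Unix-socket address is an assumption standing in for wherever a shard actually listens:
```python
import grpc
from text_generation_server.pb import generate_pb2, generate_pb2_grpc

# Hypothetical socket path; real deployments derive the address per shard.
channel = grpc.insecure_channel("unix:///tmp/text-generation-server-0")
stub = generate_pb2_grpc.TextGenerationServiceStub(channel)

# Health check, then query static model info.
stub.Health(generate_pb2.HealthRequest())
info = stub.Info(generate_pb2.InfoRequest())
print(info.dtype, info.device_type, info.speculate)
```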
use crate::infer::Infer;
use crate::server::{generate_internal, ComputeType};
use crate::{ChatRequest, ErrorResponse, GenerateParameters, GenerateRequest};
use axum::extract::Extension;
use axum::http::{HeaderMap, StatusCode};
use axum::response::{IntoResponse, Response};
use axum::Json;
use serde::{Deserialize, Serialize};
use tracing::instrument;
use utoipa::ToSchema;
#[derive(Clone, Deserialize, ToSchema)]
#[cfg_attr(test, derive(Debug, PartialEq))]
pub(crate) struct GenerateVertexInstance {
#[schema(example = "What is Deep Learning?")]
pub inputs: String,
#[schema(nullable = true, default = "null", example = "null")]
pub parameters: Option<GenerateParameters>,
}
#[derive(Clone, Deserialize, ToSchema)]
#[cfg_attr(test, derive(Debug, PartialEq))]
#[serde(untagged)]
pub(crate) enum VertexInstance {
Generate(GenerateVertexInstance),
Chat(ChatRequest),
}
#[derive(Deserialize, ToSchema)]
#[cfg_attr(test, derive(Debug, PartialEq))]
pub(crate) struct VertexRequest {
#[serde(rename = "instances")]
pub instances: Vec<VertexInstance>,
}
#[derive(Clone, Deserialize, ToSchema, Serialize)]
pub(crate) struct VertexResponse {
pub predictions: Vec<String>,
}
/// Generate tokens from Vertex request
#[utoipa::path(
post,
tag = "Text Generation Inference",
path = "/vertex",
request_body = VertexRequest,
responses(
(status = 200, description = "Generated Text", body = VertexResponse),
(status = 424, description = "Generation Error", body = ErrorResponse,
example = json ! ({"error": "Request failed during generation"})),
(status = 429, description = "Model is overloaded", body = ErrorResponse,
example = json ! ({"error": "Model is overloaded"})),
(status = 422, description = "Input validation error", body = ErrorResponse,
example = json ! ({"error": "Input validation error"})),
(status = 500, description = "Incomplete generation", body = ErrorResponse,
example = json ! ({"error": "Incomplete generation"})),
)
)]
#[instrument(
skip_all,
fields(
total_time,
validation_time,
queue_time,
inference_time,
time_per_token,
seed,
)
)]
pub(crate) async fn vertex_compatibility(
Extension(infer): Extension<Infer>,
Extension(compute_type): Extension<ComputeType>,
Json(req): Json<VertexRequest>,
) -> Result<Response, (StatusCode, Json<ErrorResponse>)> {
let span = tracing::Span::current();
metrics::counter!("tgi_request_count").increment(1);
    // check that there's at least one instance
if req.instances.is_empty() {
return Err((
StatusCode::UNPROCESSABLE_ENTITY,
Json(ErrorResponse {
error: "Input validation error".to_string(),
error_type: "Input validation error".to_string(),
}),
));
}
// Prepare futures for all instances
let mut futures = Vec::with_capacity(req.instances.len());
for instance in req.instances.into_iter() {
let generate_request = match instance {
VertexInstance::Generate(instance) => GenerateRequest {
inputs: instance.inputs.clone(),
add_special_tokens: true,
parameters: GenerateParameters {
do_sample: true,
max_new_tokens: instance.parameters.as_ref().and_then(|p| p.max_new_tokens),
seed: instance.parameters.as_ref().and_then(|p| p.seed),
details: true,
decoder_input_details: true,
..Default::default()
},
},
VertexInstance::Chat(instance) => {
let (generate_request, _using_tools): (GenerateRequest, bool) =
instance.try_into_generate(&infer)?;
generate_request
}
};
let infer_clone = infer.clone();
let compute_type_clone = compute_type.clone();
let span_clone = span.clone();
futures.push(async move {
generate_internal(
Extension(infer_clone),
compute_type_clone,
Json(generate_request),
span_clone,
)
.await
.map(|(_, _, Json(generation))| generation.generated_text)
.map_err(|_| {
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(ErrorResponse {
error: "Incomplete generation".into(),
error_type: "Incomplete generation".into(),
}),
)
})
});
}
// execute all futures in parallel, collect results, returning early if any error occurs
let results = futures::future::join_all(futures).await;
let predictions: Result<Vec<_>, _> = results.into_iter().collect();
let predictions = predictions?;
let response = VertexResponse { predictions };
Ok((HeaderMap::new(), Json(response)).into_response())
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{Message, MessageContent};
#[test]
fn vertex_deserialization() {
let string = serde_json::json!({
"instances": [
{
"messages": [{"role": "user", "content": "What's Deep Learning?"}],
"max_tokens": 128,
"top_p": 0.95,
"temperature": 0.7
}
]
});
let request: VertexRequest = serde_json::from_value(string).expect("Can deserialize");
assert_eq!(
request,
VertexRequest {
instances: vec![VertexInstance::Chat(ChatRequest {
messages: vec![Message {
role: "user".to_string(),
content: MessageContent::SingleText("What's Deep Learning?".to_string()),
name: None,
},],
max_tokens: Some(128),
top_p: Some(0.95),
temperature: Some(0.7),
..Default::default()
})]
}
);
}
}
| text-generation-inference/router/src/vertex.rs/0 | {
"file_path": "text-generation-inference/router/src/vertex.rs",
"repo_id": "text-generation-inference",
"token_count": 2789
} |
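For reference, a hedged sketch of calling this route from Python, mirroring the `VertexRequest`/`VertexResponse` shapes defined above (the host and port are assumptions):
```python
import requests

payload = {
    "instances": [
        # GenerateVertexInstance shape
        {"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 32}},
        # ChatRequest shape; the enum is untagged, so it is matched by its fields
        {
            "messages": [{"role": "user", "content": "What's Deep Learning?"}],
            "max_tokens": 32,
        },
    ]
}
resp = requests.post("http://localhost:3000/vertex", json=payload)
resp.raise_for_status()
print(resp.json()["predictions"])  # one generated string per instance
```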
#include <ATen/Dispatch.h>
#include <THC/THCAtomics.cuh>
#include <ATen/ATen.h>
#include <torch/torch.h>
#include <vector>
#include <optional>
/**
* Friendly reminder of how multithreading works in CUDA: https://developer.nvidia.com/blog/even-easier-introduction-cuda
* Check example at https://github.com/thomasw21/LinearTransformers/blob/main/model/attention/fast_weight/fast_weight_cuda.cu
**/
// Available in pytorch main
//#define DISPATCH_CASE_FLOATING_TYPES(...) \
// at::AT_DISPATCH_CASE(at::ScalarType::Double, __VA_ARGS__) \
// at::AT_DISPATCH_CASE(at::ScalarType::Float, __VA_ARGS__) \
// at::AT_DISPATCH_CASE(at::ScalarType::Half, __VA_ARGS__) \
// at::AT_DISPATCH_CASE(at::ScalarType::BFloat16, __VA_ARGS__) \
/*
* Forward passes
*/
/**
* cast to fp32 if in fp16 + mask + softmax computation in fp32 + cast back to original dtype
**/
template<typename attention_scores_scalar, int64_t min_kv_length_shard_size_per_thread>
__global__ void forward_masked_softmax_kernel(
const torch::PackedTensorAccessor32<attention_scores_scalar, 2, torch::RestrictPtrTraits> attention_scores, // [B, KV]
const torch::PackedTensorAccessor32<bool, 2, torch::RestrictPtrTraits> mask, // [B, KV]
torch::PackedTensorAccessor32<attention_scores_scalar, 2, torch::RestrictPtrTraits> result, // [B, KV]
const int64_t effective_kv_length,
const dim3 blockDim,
const int64_t rows_per_block,
const int64_t kv_length,
const int64_t batch_size
) {
const auto row_id = threadIdx.x / effective_kv_length;
const auto effective_kv_length_id = threadIdx.x % effective_kv_length;
const auto kv_length_start = effective_kv_length_id * min_kv_length_shard_size_per_thread;
auto kv_length_end_ = (effective_kv_length_id + 1) * min_kv_length_shard_size_per_thread;
kv_length_end_ = (kv_length_end_ > kv_length) ? kv_length : kv_length_end_;
const auto kv_length_end = kv_length_end_;
const auto batch_id = blockIdx.x * rows_per_block + row_id;
// We need 2 float storage for each row, one for max computation, the other for normalizing exponential
extern __shared__ float temp_storage[];
const auto row_id_mem_offset = row_id * 2;
if (effective_kv_length_id == 0) {
temp_storage[row_id_mem_offset] = -std::numeric_limits<float>::infinity();
temp_storage[row_id_mem_offset + 1] = 0;
}
__syncthreads();
// Compute mask and max
if (batch_id < batch_size) {
float thread_max = -std::numeric_limits<float>::infinity();
for (int kv_length_id = kv_length_start; kv_length_id < kv_length_end; ++kv_length_id) {
if (mask[batch_id][kv_length_id] == 0) {
const float candidate = attention_scores[batch_id][kv_length_id];
thread_max = (thread_max < candidate) ? candidate : thread_max;
}
}
if (thread_max != -std::numeric_limits<float>::infinity()) {
// TODO @thomasw21 with more memory we can probably compute a much faster `max-reduce` in parallel O(ln(n)) operations in each memory slot
gpuAtomicMax(&temp_storage[row_id_mem_offset], thread_max);
}
}
__syncthreads();
// Compute exp(elt - max) masked
float exponential[min_kv_length_shard_size_per_thread];
if (batch_id < batch_size) {
float thread_add = 0;
for (int kv_length_id = kv_length_start; kv_length_id < kv_length_end; ++kv_length_id) {
if (mask[batch_id][kv_length_id] == 0) {
exponential[kv_length_id - kv_length_start] = std::exp(static_cast<float>(attention_scores[batch_id][kv_length_id]) - temp_storage[row_id_mem_offset]);
thread_add = thread_add + exponential[kv_length_id - kv_length_start];
} else {
exponential[kv_length_id - kv_length_start] = 0.;
}
}
if (thread_add > 0) {
// TODO @thomasw21 with more memory we can probably compute a much faster `sum-reduce` in parallel O(ln(n)) operations in each memory slot
gpuAtomicAdd(&temp_storage[row_id_mem_offset + 1], thread_add);
}
}
__syncthreads();
// Compute softmax
if (batch_id < batch_size) {
// If sum of all exponential is 0, we set the softmax values to 0
if (temp_storage[row_id_mem_offset + 1] == 0.) {
for (int kv_length_id = kv_length_start; kv_length_id < kv_length_end; ++kv_length_id) {
result[batch_id][kv_length_id] = 0.;
}
} else {
for (int kv_length_id = kv_length_start; kv_length_id < kv_length_end; ++kv_length_id) {
result[batch_id][kv_length_id] = static_cast<attention_scores_scalar>(exponential[kv_length_id - kv_length_start] / temp_storage[row_id_mem_offset + 1]);
}
}
}
}
#define CHECK_CUDA(x) TORCH_CHECK(x.device().is_cuda(), #x " must be a CUDA tensor")
#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
std::tuple<at::Tensor, std::optional<std::vector<at::Tensor>>, at::Tensor> forward(
const at::Tensor query,
const at::Tensor key,
const at::Tensor value,
const std::optional<std::vector<at::Tensor>> layer_past,
const at::Tensor attention_mask,
const std::optional<at::Tensor> head_mask,
const float inv_norm_factor,
const int num_heads,
const bool use_cache
) {
auto query_layer = query;
auto key_layer = key;
auto value_layer = value;
if (layer_past) {
const auto past_key = (*layer_past).at(0);
const auto past_value = (*layer_past).at(1);
key_layer = at::cat({past_key, key_layer}, 2);
value_layer = at::cat({past_value, value_layer}, 2);
}
std::optional<std::vector<at::Tensor>> present;
if (use_cache) {
present = {key_layer, value_layer};
} else {
present = {};
}
const auto batch_size = query_layer.size(0);
const auto q_length = query_layer.size(2);
const auto attn_head_size = query_layer.size(3);
const auto batch_size_times_num_heads = batch_size * num_heads;
const auto kv_length = key_layer.size(2);
const auto query_view = query_layer.reshape({batch_size_times_num_heads, q_length, attn_head_size});
auto key_view = key_layer.reshape({batch_size_times_num_heads, kv_length, attn_head_size}).transpose(1, 2);
auto value_view = value_layer.reshape({batch_size_times_num_heads, kv_length, attn_head_size});
auto query_scaled = query_view * inv_norm_factor;
auto attention_scores = at::bmm(query_scaled, key_view);
    // Computing `optionally_cast_fp16_to_fp32 + masked_fill + softmax + cast_to_initial_dtype`
at::Tensor attention_probs;
if (true) {
// TODO @thomasw21: it's easier to think of attention_scores as 2D tensors
const auto attention_scores_2d = attention_scores.view({batch_size_times_num_heads * q_length, kv_length});
const auto attention_mask_2d = attention_mask.view({batch_size_times_num_heads * q_length, kv_length});
// Custom kernel
attention_probs = at::empty_like(attention_scores_2d);
// Check that inputs and contiguous + cuda tensors
CHECK_INPUT(attention_scores_2d);
CHECK_INPUT(attention_mask_2d);
// TODO @thomas21: change by to this as it's cleaner when pytorch 1.13 comes out
// DISPATCH_CASE_FLOATING_TYPES(attention_scores.scalar_type(), "masked_softmax", [&] {
AT_DISPATCH_FLOATING_TYPES_AND2(at::ScalarType::Half, at::ScalarType::BFloat16, attention_scores.scalar_type(), "masked_softmax", [&] {
/*
* Understanding how GPUs work: https://developer.nvidia.com/blog/cuda-refresher-cuda-programming-model/
* A100 specifications: https://images.nvidia.com/aem-dam/en-zz/Solutions/data-center/nvidia-ampere-architecture-whitepaper.pdf
* - SMs: 108
* - TPCs: 56 (What's that?)
* - Memory size: 40 GB
* - L2 Cache size: 40960 KB (shared across all SMs)
* - L1/Shared memory size: 192 KB (shared across all threads within a SM)
* - Max Threads / SM: 2048
* - Max Thread Blocks / SM: 32
*/
/*
     * We should split [batch_size_times_num_heads_block, q_length] into separate blocks and [batch_size_times_num_heads_block_size, kv_length] into a single block
* with multiple threads as we need to `sync_threads` to run exponential sum.
* We maximise the usage of threads within a single block
*/
// TODO @thomasw21 figure out everything warp related:
// - why do they have to be power of 2
// TODO @thomas21 check why everyone is setting 1024 when officially it's 2048
const auto MAX_THREADS_PER_SM = 1024;
// TODO @thomasw21 figure out how to have longer sequences, currently the maximum is `max_kv_length = MAX_THREADS_PER_SM * MIN_KV_LENGTH_SHARD_SIZE_PER_THREAD`
const auto MIN_KV_LENGTH_SHARD_SIZE_PER_THREAD = 4;
// `effective_kv_length = ceil(kv_length / MIN_KV_LENGTH_SHARD_SIZE_PER_THREAD)`
const auto effective_kv_length = (kv_length - 1)/ MIN_KV_LENGTH_SHARD_SIZE_PER_THREAD + 1;
const auto rows_per_block = MAX_THREADS_PER_SM / effective_kv_length;
const auto num_blocks = (batch_size_times_num_heads * q_length - 1) / rows_per_block + 1;
const dim3 gridDim(num_blocks); // Number of blocks that run
const dim3 blockDim(MAX_THREADS_PER_SM); // Number of threads that run per block
const int shared_mem_forward = rows_per_block * 2 * sizeof(float);
// 192 * 2 ** 10
// const auto MAX_L1_MEMORY = 196608;
// const auto MAX_SMs = 108;
// TORCH_CHECK(batch_size_times_num_heads * q_length <= MAX_L1_MEMORY, "Shared memory exceeds 192KB limitation.");
// TORCH_CHECK(gridDim.x * gridDim.y * gridDim.z <= MAX_SMs, "A100s only have 108 SMs. Raising as require blocks is bigger.");
// TORCH_CHECK(blockDim.x * blockDim.y * blockDim.z <= MAX_THREADS_PER_SM, "A100s only have 2048 threads per block. Raising as require requested threads is higher.");
forward_masked_softmax_kernel<scalar_t, MIN_KV_LENGTH_SHARD_SIZE_PER_THREAD><<<gridDim, blockDim, shared_mem_forward>>>(
attention_scores_2d.packed_accessor32<scalar_t, 2, torch::RestrictPtrTraits>(),
attention_mask_2d.packed_accessor32<bool, 2, torch::RestrictPtrTraits>(),
attention_probs.packed_accessor32<scalar_t, 2, torch::RestrictPtrTraits>(),
effective_kv_length,
blockDim,
rows_per_block,
kv_length,
batch_size_times_num_heads * q_length
);
});
attention_probs = attention_probs.view({batch_size_times_num_heads, q_length, kv_length});
} else {
// Pytorch C++ API
auto input_dtype = attention_scores.scalar_type();
if (input_dtype == at::ScalarType::Float) {
attention_scores = attention_scores.to(at::ScalarType::Float);
};
// TODO @thomasw21 Figure out how to get minimum value
auto attn_weights = attention_scores.masked_fill_(attention_mask, -1e34);
attention_probs = attn_weights.softmax(-1, at::ScalarType::Float).to(input_dtype);
}
auto context_layer = attention_probs.bmm(value_view);
// `_merge_heads`
context_layer = context_layer.view({batch_size, num_heads, q_length, attn_head_size});
context_layer = context_layer.permute({0, 2, 1, 3});
context_layer = context_layer.reshape({batch_size, q_length, attn_head_size * num_heads});
return std::make_tuple(context_layer, present, attention_probs);
}
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
m.def(
"forward",
&forward,
"GPT-Neox attention mechanism forward (CUDA)"
);
}
| text-generation-inference/server/custom_kernels/custom_kernels/fused_attention_cuda.cu/0 | {
"file_path": "text-generation-inference/server/custom_kernels/custom_kernels/fused_attention_cuda.cu",
"repo_id": "text-generation-inference",
"token_count": 5265
} |
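To make the masked-softmax kernel above concrete, here is a small PyTorch reference sketch of the same computation (upcast to fp32, mask, softmax, downcast); unlike a plain `softmax`, the kernel writes exact zeros for rows in which every position is masked, which the sketch reproduces explicitly:
```python
import torch

def masked_softmax_reference(scores: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Reference sketch: scores and mask are [B, KV]; a nonzero mask entry
    means the position is masked out (matching `mask[...] == 0` above)."""
    probs = scores.float().masked_fill(mask, float("-inf")).softmax(dim=-1)
    # Fully masked rows come out as NaN from softmax; the kernel emits zeros.
    fully_masked = mask.all(dim=-1, keepdim=True)
    probs = probs.masked_fill(fully_masked, 0.0)
    return probs.to(scores.dtype)
```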
// Adapted from turboderp exllama: https://github.com/turboderp/exllama
#ifndef _util_cuh
#define _util_cuh
#include <cuda_runtime.h>
#include <cuda_fp16.h>
#include <cstdint>
#include <cstdio>
#if defined(USE_ROCM)
#define cudaUnspecified hipErrorUnknown
#else
#define cudaUnspecified cudaErrorApiFailureBase
#endif
// React to failure on return code != cudaSuccess
#define _cuda_check(fn) \
do { \
{_cuda_err = fn;} \
if (_cuda_err != cudaSuccess) goto _cuda_fail; \
} while(false)
// React to failure on return code == 0
#define _alloc_check(fn) \
do { \
if (!(fn)) { _cuda_err = cudaUnspecified; goto _cuda_fail; } \
else _cuda_err = cudaSuccess; \
} while(false)
#endif
| text-generation-inference/server/exllama_kernels/exllama_kernels/util.cuh/0 | {
"file_path": "text-generation-inference/server/exllama_kernels/exllama_kernels/util.cuh",
"repo_id": "text-generation-inference",
"token_count": 283
} |
#ifndef _qdq_6_cuh
#define _qdq_6_cuh
#include "qdq_util.cuh"
#include "../../config.h"
#if QMODE_6BIT == 1
// Not implemented
#else
__forceinline__ __device__ void shuffle_6bit_16
(
uint32_t* q,
int stride
)
{
}
__forceinline__ __device__ void dequant_6bit_16
(
const uint32_t q_0,
const uint32_t q_1,
const uint32_t q_2,
half2 (&dq)[8],
int stride
)
{
half dqh[16];
for (int i = 0; i < 5; i++) dqh[ i] = dq_ns(exb( q_0, i * 6 , 0x3f), 32);
dqh[ 5 ] = dq_ns(exb(q_1, q_0, 30, 0x3f), 32);
for (int i = 0; i < 4; i++) dqh[ 6 + i] = dq_ns(exb( q_1, i * 6 + 4, 0x3f), 32);
dqh[10 ] = dq_ns(exb(q_2, q_1, 28, 0x3f), 32);
for (int i = 0; i < 5; i++) dqh[11 + i] = dq_ns(exb( q_2, i * 6 + 2, 0x3f), 32);
for (int i = 0; i < 8; i++) dq[i] = __halves2half2(dqh[i * 2], dqh[i * 2 + 1]);
}
#endif
#endif
| text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/quant/qdq_6.cuh/0 | {
"file_path": "text-generation-inference/server/exllamav2_kernels/exllamav2_kernels/cuda/quant/qdq_6.cuh",
"repo_id": "text-generation-inference",
"token_count": 571
} |
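A pure-Python sketch of the same 16-value, 6-bit unpacking may help: it assumes `exb` extracts a 6-bit field starting at a bit offset (optionally straddling two 32-bit words) and that `dq_ns(v, 32)` subtracts the zero-point 32:
```python
def dequant_6bit_16(q0: int, q1: int, q2: int) -> list:
    """Unpack 16 signed 6-bit values from three 32-bit words (sketch)."""
    def exb(hi, lo, shift, mask=0x3F):
        # Bit field at `shift` in `lo`, spilling into `hi` when it straddles.
        return (((hi << 32) | lo) >> shift) & mask

    out = [exb(0, q0, i * 6) for i in range(5)]       # offsets 0..24 in q0
    out.append(exb(q1, q0, 30))                       # straddles q0/q1
    out += [exb(0, q1, i * 6 + 4) for i in range(4)]  # offsets 4..22 in q1
    out.append(exb(q2, q1, 28))                       # straddles q1/q2
    out += [exb(0, q2, i * 6 + 2) for i in range(5)]  # offsets 2..26 in q2
    return [v - 32 for v in out]                      # dq_ns(v, 32)
```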
import pytest
from text_generation_server.pb import generate_pb2
from text_generation_server.models.causal_lm import CausalLMBatch, CausalLM
@pytest.fixture(scope="session")
def default_santacoder():
return CausalLM.fallback(model_id="bigcode/santacoder")
@pytest.fixture
def default_pb_request(default_pb_parameters, default_pb_stop_parameters):
return generate_pb2.Request(
id=0,
inputs="def",
input_chunks=generate_pb2.Input(chunks=[generate_pb2.InputChunk(text="def")]),
prefill_logprobs=True,
truncate=100,
parameters=default_pb_parameters,
stopping_parameters=default_pb_stop_parameters,
)
@pytest.fixture
def default_pb_batch(default_pb_request):
return generate_pb2.Batch(id=0, requests=[default_pb_request], size=1)
@pytest.fixture
def default_fim_pb_request(default_pb_parameters, default_pb_stop_parameters):
return generate_pb2.Request(
id=0,
inputs="<fim-prefix>def<fim-suffix>world<fim-middle>",
input_chunks=generate_pb2.Input(
chunks=[
generate_pb2.InputChunk(
text="<fim-prefix>def<fim-suffix>world<fim-middle>"
)
]
),
prefill_logprobs=True,
truncate=100,
parameters=default_pb_parameters,
stopping_parameters=default_pb_stop_parameters,
)
@pytest.fixture
def default_fim_pb_batch(default_fim_pb_request):
return generate_pb2.Batch(id=0, requests=[default_fim_pb_request], size=1)
@pytest.mark.skip
def test_santacoder_generate_token_completion(default_santacoder, default_pb_batch):
batch = CausalLMBatch.from_pb(
default_pb_batch,
default_santacoder.tokenizer,
default_santacoder.dtype,
default_santacoder.device,
)
next_batch = batch
for _ in range(batch.stopping_criterias[0].max_new_tokens - 1):
generations, next_batch, _ = default_santacoder.generate_token(next_batch)
assert len(generations) == len(next_batch)
generations, next_batch, _ = default_santacoder.generate_token(next_batch)
assert next_batch is None
assert len(generations) == 1
assert generations[0].generated_text.text == " test_get_all_users_with_"
assert generations[0].request_id == batch.requests[0].id
assert (
generations[0].generated_text.generated_tokens
== batch.stopping_criterias[0].max_new_tokens
)
@pytest.mark.skip
def test_fim_santacoder_generate_token_completion(
default_santacoder, default_fim_pb_batch
):
batch = CausalLMBatch.from_pb(
default_fim_pb_batch,
default_santacoder.tokenizer,
default_santacoder.dtype,
default_santacoder.device,
)
next_batch = batch
for _ in range(batch.stopping_criterias[0].max_new_tokens - 1):
generations, next_batch, _ = default_santacoder.generate_token(next_batch)
assert len(generations) == len(next_batch)
generations, next_batch, _ = default_santacoder.generate_token(next_batch)
assert next_batch is None
assert len(generations) == 1
assert (
generations[0].generated_text.text
== """ineProperty(exports, "__esModule", { value"""
)
assert generations[0].request_id == batch.requests[0].id
assert (
generations[0].generated_text.generated_tokens
== batch.stopping_criterias[0].max_new_tokens
)
| text-generation-inference/server/tests/models/test_santacoder.py/0 | {
"file_path": "text-generation-inference/server/tests/models/test_santacoder.py",
"repo_id": "text-generation-inference",
"token_count": 1480
} |
import torch
import grpc
from google.rpc import status_pb2, code_pb2
from grpc_status import rpc_status
from grpc_interceptor.server import AsyncServerInterceptor
from loguru import logger
from typing import Callable, Any
class ExceptionInterceptor(AsyncServerInterceptor):
def __init__(self, shutdown_callback):
self.shutdown_callback = shutdown_callback
async def intercept(
self,
method: Callable,
request_or_iterator: Any,
context: grpc.ServicerContext,
method_name: str,
) -> Any:
try:
response = method(request_or_iterator, context)
return await response
except Exception as err:
method_name = method_name.split("/")[-1]
logger.exception(f"Method {method_name} encountered an error.")
# Runtime Error cannot be recovered from
if isinstance(err, RuntimeError):
self.shutdown_callback()
if torch.cuda.is_available():
torch.cuda.empty_cache()
await context.abort_with_status(
rpc_status.to_status(
status_pb2.Status(code=code_pb2.INTERNAL, message=str(err))
)
)
| text-generation-inference/server/text_generation_server/interceptor.py/0 | {
"file_path": "text-generation-inference/server/text_generation_server/interceptor.py",
"repo_id": "text-generation-inference",
"token_count": 545
} |
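As a usage sketch (not the server's actual wiring), an interceptor like the one above is passed to the `grpc.aio` server at construction time; the shutdown callback here is a placeholder:
```python
import asyncio
from grpc import aio

from text_generation_server.interceptor import ExceptionInterceptor

async def serve() -> None:
    def shutdown() -> None:
        # Placeholder: the real callback tears down the model server.
        pass

    server = aio.server(interceptors=[ExceptionInterceptor(shutdown)])
    # ... register the TextGenerationService servicer and bind a port here ...
    await server.start()
    await server.wait_for_termination()

if __name__ == "__main__":
    asyncio.run(serve())
```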
from typing import Any, Dict, List, Union
from compressed_tensors import QuantizationConfig, QuantizationStatus
from compressed_tensors.config import CompressionFormat
from compressed_tensors.quantization import (
QuantizationScheme,
QuantizationType,
find_name_or_class_matches,
)
from loguru import logger
from pydantic import ValidationError
from torch import nn
from text_generation_server.layers.compressed_tensors.w8an_fp import W8ANFpLoader
from text_generation_server.layers.compressed_tensors.w8a8_int import W8A8IntLoader
from text_generation_server.layers.compressed_tensors.wna16_int_24 import (
WNA16Int24Loader,
)
from text_generation_server.layers.compressed_tensors.wna16_int import WNA16IntLoader
from text_generation_server.utils.log import log_once
from text_generation_server.utils.weights import (
DefaultWeightsLoader,
UnquantizedWeight,
Weights,
WeightsLoader,
)
# compressed-tensors can match modules as quantization targets. However,
# they need to be objects rather than classes or class names. Since we
# need to match `Linear` targets, make an instance that can be re-used.
_EMPTY_LINEAR: nn.Module = nn.Linear(0, 0)
class CompressedTensorsLoader(WeightsLoader):
"""Loader for checkpoints stored in the compressed-tensors format."""
def __init__(self, config: Dict[str, Any]):
quantization_config_raw = config.get("quantization_config")
if quantization_config_raw is None:
# `compression_config` was renamed to `quantization_config`; support
# retained for backward compatibility.
quantization_config_raw = config.get("compression_config")
if quantization_config_raw is None:
raise ValueError(
"Checkpoint does not have compressed-tensors configuration"
)
try:
quantization_config = QuantizationConfig.model_validate(
quantization_config_raw
)
except ValidationError as e:
raise ValueError("Cannot parse compressed-tensors configuration") from e
if quantization_config.quantization_status not in (
QuantizationStatus.COMPRESSED,
QuantizationStatus.FROZEN,
):
raise ValueError(
f"Model quantization was not finished, status was: {quantization_config.quantization_status}"
)
self.ignore = (
quantization_config.ignore if quantization_config.ignore is not None else []
)
self.loaders = self._get_target_loaders(quantization_config)
for target, loader in self.loaders.items():
log_once(
logger.info,
f"Using {loader} for compressed-tensors target '{target}'",
)
def get_weights(self, weights: Weights, prefix: str):
loader = self._lookup_loader(prefix)
return loader.get_weights(weights, prefix)
def get_weights_col_packed(
self,
weights: "Weights",
prefix: str,
block_sizes: Union[int, List[int]],
):
loader = self._lookup_loader(prefix)
return loader.get_weights_col_packed(weights, prefix, block_sizes)
def get_multi_weights_col(self, weights: Weights, prefixes: List[str], dim: int):
loader = self._lookup_loader(prefixes[0])
return loader.get_multi_weights_col(weights, prefixes, dim)
def get_weights_row(self, weights: Weights, prefix: str):
loader = self._lookup_loader(prefix)
return loader.get_weights_row(weights, prefix)
def _get_target_loaders(
self, quantization_config: QuantizationConfig
) -> Dict[str, WeightsLoader]:
"""
A compressed-tensors checkpoint can use different quantizations
for different targets. This method returns a dictionary with a
loader per target.
"""
loaders: Dict[str, WeightsLoader] = {}
format = quantization_config.format
for group_name, group in quantization_config.config_groups.items():
# The group configuration can be a string, but does that ever
# happen in a serialized quantization config?
assert isinstance(group, QuantizationScheme)
loader = self._create_loader_for_group(format, group_name, group)
# A quantized parameter group can have multiple targets, add the
# loader for all the targets.
for target in group.targets:
if target in loaders:
raise ValueError(
f"Target '{target} has multiple configured loaders'"
)
loaders[target] = loader
return loaders
def _create_loader_for_group(
self, format: str, group_name: str, group: QuantizationScheme
) -> WeightsLoader:
"""
Find and create a loader for the group with the given quantization
scheme.
"""
# NOTE: we ignore group.output_activations because we don't support
# output quantization yet.
input_activations = group.input_activations
weights = group.weights
if (
format
in {
CompressionFormat.float_quantized.value,
CompressionFormat.naive_quantized.value,
}
and weights is not None
and weights.type == QuantizationType.FLOAT
and weights.num_bits == 8
):
# FP W8A8 or W8A16.
return W8ANFpLoader(input_activations=input_activations, weights=weights)
elif (
format == CompressionFormat.pack_quantized.value
and weights is not None
and weights.type == QuantizationType.INT
and weights.num_bits in (4, 8)
):
# INT W4A16 or W8A16 (GPTQ/AWQ-like).
return WNA16IntLoader(weights)
elif (
format == CompressionFormat.marlin_24.value
and weights is not None
and weights.type == QuantizationType.INT
and weights.num_bits in (4, 8)
):
return WNA16Int24Loader(weights)
elif (
format
in {
CompressionFormat.int_quantized.value,
CompressionFormat.naive_quantized.value,
}
and weights is not None
and weights.type == QuantizationType.INT
and weights.num_bits == 8
):
return W8A8IntLoader(input_args=input_activations, weight_args=weights)
else:
raise ValueError(
f"Group '{group_name}' has unsupported compressed-tensors configurtion"
)
def _lookup_loader(self, prefix: str) -> WeightsLoader:
"""
Look up the loader to use for a given parameter name (prefix).
"""
if len(find_name_or_class_matches(prefix, _EMPTY_LINEAR, self.ignore)) > 0:
return DefaultWeightsLoader(UnquantizedWeight)
# We currently only handle linear layers, so unconditionally pass
# a `Linear` instance.
targets = find_name_or_class_matches(prefix, _EMPTY_LINEAR, self.loaders.keys())
if len(targets) == 0:
raise ValueError(
f"Cannot find compressed-tensors target for prefix: {prefix}"
)
return self.loaders[targets[0]]
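# Illustrative sketch (not from the original file) of the kind of config dict
# this loader consumes. The field names follow the compressed-tensors
# checkpoint format; the concrete values below are example assumptions only,
# so the instantiation is left commented out.
_EXAMPLE_CONFIG = {
    "quantization_config": {
        "format": "float-quantized",
        "quantization_status": "compressed",
        "ignore": ["lm_head"],
        "config_groups": {
            "group_0": {
                "targets": ["Linear"],
                "weights": {"type": "float", "num_bits": 8, "symmetric": True},
            }
        },
    }
}
# loader = CompressedTensorsLoader(_EXAMPLE_CONFIG)  # would select W8ANFpLoader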
| text-generation-inference/server/text_generation_server/layers/compressed_tensors/loader.py/0 | {
"file_path": "text-generation-inference/server/text_generation_server/layers/compressed_tensors/loader.py",
"repo_id": "text-generation-inference",
"token_count": 3149
} |
import torch
# copied from https://github.com/openppl-public/ppq/blob/master/ppq/quantization/measure/norm.py
def torch_snr_error(
y_pred: torch.Tensor, y_real: torch.Tensor, reduction: str = "mean"
) -> torch.Tensor:
"""
Compute SNR between y_pred(tensor) and y_real(tensor)
SNR can be calcualted as following equation:
SNR(pred, real) = (pred - real) ^ 2 / (real) ^ 2
if x and y are matrixs, SNR error over matrix should be the mean value of SNR error over all elements.
SNR(pred, real) = mean((pred - real) ^ 2 / (real) ^ 2)
Args:
y_pred (torch.Tensor): _description_
y_real (torch.Tensor): _description_
reduction (str, optional): _description_. Defaults to 'mean'.
Raises:
ValueError: _description_
ValueError: _description_
Returns:
torch.Tensor: _description_
"""
if y_pred.shape != y_real.shape:
raise ValueError(
f"Can not compute snr loss for tensors with different shape. "
f"({y_pred.shape} and {y_real.shape})"
)
reduction = str(reduction).lower()
if y_pred.ndim == 1:
y_pred = y_pred.unsqueeze(0)
y_real = y_real.unsqueeze(0)
y_pred = y_pred.flatten(start_dim=1)
y_real = y_real.flatten(start_dim=1)
noise_power = torch.pow(y_pred - y_real, 2).sum(dim=-1)
signal_power = torch.pow(y_real, 2).sum(dim=-1)
snr = (noise_power) / (signal_power + 1e-7)
if reduction == "mean":
return torch.mean(snr)
elif reduction == "sum":
return torch.sum(snr)
elif reduction == "none":
return snr
else:
raise ValueError("Unsupported reduction method.")
| text-generation-inference/server/text_generation_server/layers/gptq/utils.py/0 | {
"file_path": "text-generation-inference/server/text_generation_server/layers/gptq/utils.py",
"repo_id": "text-generation-inference",
"token_count": 742
} |
import os
import math
import torch
from torch import nn
from text_generation_server.utils.import_utils import SYSTEM
if SYSTEM == "cuda":
import rotary_emb
elif SYSTEM == "rocm":
import vllm._custom_ops as ops
elif SYSTEM == "ipex":
import intel_extension_for_pytorch as ipex
def _create_inv_freq(dim, base, device):
inv_freq = 1.0 / (
base ** (torch.arange(0, dim, 2, device=device, dtype=torch.float32) / dim)
)
return inv_freq
def _get_rope_config(config):
if os.getenv("ROPE_SCALING", None) is not None:
rope_scaling = {
"type": os.environ["ROPE_SCALING"],
"factor": float(os.environ["ROPE_FACTOR"]),
}
return rope_scaling
return getattr(config, "rope_scaling", None)
class PositionRotaryEmbedding(nn.Module):
def __init__(self, inv_freq, scaling_factor):
super().__init__()
self.inv_freq = inv_freq
self._seq_len_cached = 0
self._cos_cached = None
self._sin_cached = None
self._cos_k_cached = None
self._sin_k_cached = None
self.scaling_factor = scaling_factor
self.dynamic_args = None
def forward(
self,
query: torch.Tensor,
key: torch.Tensor,
cos: torch.Tensor,
sin: torch.Tensor,
):
        # Such control flow may add some overhead.
if SYSTEM == "cuda":
rotary_dim = cos.shape[-1]
q1 = query[..., :rotary_dim]
q2 = query[..., rotary_dim : 2 * rotary_dim]
rotary_emb.apply_rotary(q1, q2, cos, sin, q1, q2, False)
k1 = key[..., :rotary_dim]
k2 = key[..., rotary_dim : 2 * rotary_dim]
rotary_emb.apply_rotary(k1, k2, cos, sin, k1, k2, False)
elif SYSTEM == "rocm":
            # NOTE: On RoCm systems, we use a RoPE implementation adapted from vLLM, which launches a single kernel for both query/key, contrary to the flash-attn implementation used on NVIDIA systems.
            # Compiling flash-attn rotary on RoCm, it appears hipcc is unable to unroll loops, resulting in even slower inference compared to eager: https://github.com/pytorch/pytorch/issues/113773
head_size = query.shape[-1]
# Inplace operation, updating query and key.
ops.rotary_embedding(query, key, head_size, cos, sin, True)
elif SYSTEM == "ipex":
ipex.llm.functional.rotary_embedding(
query, key, sin, cos, query.size(-1), True
)
else:
raise ValueError(
"Your system seem to be not supported. Please check your install or open an issue at https://github.com/huggingface/text-generation-inference/issues with a clear reproduction."
)
@classmethod
def static(cls, config, dim, base, device):
inv_freq = _create_inv_freq(dim, base, device)
scaling_factor = None
rope_scaling = _get_rope_config(config)
if rope_scaling is not None:
# `rope_type` is now standard in transformers, but some existing models
# have `type` instead.
rope_type = rope_scaling.get("rope_type", rope_scaling.get("type", None))
mrope_section = rope_scaling.get("mrope_section", None)
if rope_type == "linear":
pass
elif rope_type == "default":
pass
elif rope_type == "mrope":
mrope_section = rope_scaling["mrope_section"]
                if mrope_section is not None:
                    return RotaryPositionEmbeddingMultimodalSections(
                        inv_freq, scaling_factor, mrope_section
                    )
elif rope_type == "dynamic":
scaling_factor = rope_scaling["factor"]
return DynamicPositionRotaryEmbedding(
dim=dim,
max_position_embeddings=config.max_position_embeddings,
base=base,
device=inv_freq.device,
scaling_factor=scaling_factor,
)
elif rope_type == "llama3":
inv_freq = apply_llama3_scaling(
inv_freq,
scaling_factor=rope_scaling["factor"],
low_freq_factor=rope_scaling["low_freq_factor"],
high_freq_factor=rope_scaling["high_freq_factor"],
original_max_position_embeddings=rope_scaling[
"original_max_position_embeddings"
],
)
return cls(inv_freq, scaling_factor)
elif rope_type == "yarn":
scaling_factor = rope_scaling["factor"]
mscale = rope_scaling.get("mscale", 1.0)
mscale_all_dim = rope_scaling.get("mscale_all_dim", 0.0)
return YarnPositionRotaryEmbedding(
dim=2 * inv_freq.shape[0],
max_position_embeddings=rope_scaling[
"original_max_position_embeddings"
],
base=base,
device=inv_freq.device,
scaling_factor=scaling_factor,
extrapolation_factor=1,
attn_factor=1,
beta_fast=32,
beta_slow=1,
mscale=mscale,
mscale_all_dim=mscale_all_dim,
)
elif rope_type in ["su", "longrope"]:
short_factor = torch.tensor(
rope_scaling["short_factor"], dtype=torch.float32, device=device
)
short_inv_freq = 1.0 / (
short_factor
* base
** (
torch.arange(0, dim, 2, device=device, dtype=torch.float32)
/ dim
)
)
long_factor = torch.tensor(
rope_scaling["long_factor"], dtype=torch.float32, device=device
)
long_inv_freq = 1.0 / (
long_factor
* base
** (
torch.arange(0, dim, 2, device=device, dtype=torch.float32)
/ dim
)
)
original_max_position_embeddings = (
config.original_max_position_embeddings
)
max_position_embeddings = config.max_position_embeddings
if max_position_embeddings <= original_max_position_embeddings:
scaling_factor = 1.0
else:
scale = max_position_embeddings / original_max_position_embeddings
scaling_factor = math.sqrt(
1 + math.log(scale) / math.log(original_max_position_embeddings)
)
# if short_mscale and long_mscale are provided we need to scale the freqs
# using the Phi3LongRoPEScaledRotaryEmbedding
if ("short_mscale" in rope_scaling) and ("long_mscale" in rope_scaling):
short_mscale = rope_scaling["short_mscale"]
long_mscale = rope_scaling["long_mscale"]
return Phi3LongRoPEScaledRotaryEmbedding(
short_inv_freq=short_inv_freq,
long_inv_freq=long_inv_freq,
max_position_embeddings=config.max_position_embeddings,
short_mscale=short_mscale,
long_mscale=long_mscale,
original_max_position_embeddings=original_max_position_embeddings,
)
return SuRotaryEmbedding(
short_inv_freq=short_inv_freq,
long_inv_freq=long_inv_freq,
scaling_factor=scaling_factor,
original_max_position_embeddings=original_max_position_embeddings,
)
else:
raise NotImplementedError(
f"rope scaling type {rope_scaling['type']} is not implemented or invalid"
)
return cls(inv_freq, scaling_factor)
@classmethod
def load(cls, config, prefix, weights):
# XXX: Always load this in float32 !
dtype = weights.dtype
weights.dtype = torch.float32
inv_freq = weights.get_tensor(f"{prefix}.inv_freq")
weights.dtype = dtype
scaling_factor = None
rope_scaling = _get_rope_config(config)
if rope_scaling is not None:
scaling_factor = rope_scaling["factor"]
if rope_scaling["type"] == "linear":
pass
elif rope_scaling["type"] == "dynamic":
return DynamicPositionRotaryEmbedding(
dim=2 * inv_freq.shape[0],
max_position_embeddings=config.max_position_embeddings,
base=10000.0,
device=inv_freq.device,
scaling_factor=scaling_factor,
)
elif rope_scaling["type"] == "yarn":
mscale = rope_scaling.get("mscale", 1.0)
mscale_all_dim = rope_scaling.get("mscale_all_dim", 0.0)
return YarnPositionRotaryEmbedding(
dim=2 * inv_freq.shape[0],
max_position_embeddings=rope_scaling[
"original_max_position_embeddings"
],
base=10000.0,
device=inv_freq.device,
scaling_factor=scaling_factor,
extrapolation_factor=1,
attn_factor=1,
beta_fast=32,
beta_slow=1,
mscale=mscale,
mscale_all_dim=mscale_all_dim,
)
else:
raise NotImplementedError(
f"rope scaling type {rope_scaling['type']} is not implemented or invalid"
)
return cls(inv_freq, scaling_factor)
def _update_cos_sin_cache(self, dtype, device, seqlen):
# Reset the tables if the sequence length has changed,
# or if we're on a new device (possibly due to tracing for instance)
if (
seqlen > self._seq_len_cached
or self._cos_cached.device != device
or self._cos_cached.dtype != dtype
):
self._seq_len_cached = seqlen
t = torch.arange(seqlen, device=device, dtype=self.inv_freq.dtype)
if self.scaling_factor is not None:
t /= self.scaling_factor
# Don't do einsum, it converts fp32 to fp16
# freqs = torch.einsum("i,j->ij", t, self.inv_freq)
freqs = torch.outer(t, self.inv_freq.to(device=t.device))
self._cos_cached = torch.cos(freqs).to(dtype)
self._sin_cached = torch.sin(freqs).to(dtype)
def get_cos_sin(self, position_ids: torch.Tensor, max_s: int, dtype: torch.dtype):
"""
Return cos and sin for the asked position ids
"""
if SYSTEM == "rocm":
# For RoCm, we always use float cos/sin to avoid a cast.
# For NVIDIA, for some reason, the flash-attn rotary kernel requires cos/sin and query/key to be of same dtype: https://github.com/Dao-AILab/flash-attention/blob/017716451d446e464dde9aca3a3c1ed2209caaa9/csrc/rotary/rotary.cpp#L26
# But later on goes and cast cos/sin to float anyway: https://github.com/Dao-AILab/flash-attention/blob/017716451d446e464dde9aca3a3c1ed2209caaa9/csrc/rotary/rotary_cuda.cu#L29, which looks suboptimal.
dtype = torch.float32
self._update_cos_sin_cache(dtype, position_ids.device, max_s)
cos = torch.index_select(self._cos_cached, 0, position_ids)
sin = torch.index_select(self._sin_cached, 0, position_ids)
        # Note: this unsqueeze is not necessary on RoCm + vLLM RoPE implementation, but we leave it as is to avoid yet another control flow.
return cos.unsqueeze(1), sin.unsqueeze(1)
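# Illustrative sketch (not part of the original file): building a plain rotary
# embedding without rope scaling and fetching cos/sin for a short sequence.
# The SimpleNamespace config is an assumption made for the example.
def _demo_position_rotary():
    from types import SimpleNamespace
    config = SimpleNamespace(rope_scaling=None, max_position_embeddings=2048)
    rope = PositionRotaryEmbedding.static(config, dim=64, base=10000.0, device="cpu")
    position_ids = torch.arange(8)
    cos, sin = rope.get_cos_sin(position_ids, max_s=8, dtype=torch.float32)
    return cos.shape, sin.shape  # torch.Size([8, 1, 32]) each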
class SuRotaryEmbedding(PositionRotaryEmbedding):
def __init__(
self,
short_inv_freq,
long_inv_freq,
scaling_factor,
original_max_position_embeddings,
):
super(PositionRotaryEmbedding, self).__init__()
self.short_inv_freq = short_inv_freq
self.long_inv_freq = long_inv_freq
self.scaling_factor = scaling_factor
self.original_max_position_embeddings = original_max_position_embeddings
self._seq_len_cached = 0
self._cos_cached = None
self._sin_cached = None
self._cos_k_cached = None
self._sin_k_cached = None
self.dynamic_args = None
def _update_cos_sin_cache(self, dtype, device, seqlen):
# Reset the tables if the sequence length has changed,
# or if we're on a new device (possibly due to tracing for instance)
if (
seqlen > self._seq_len_cached
or self._cos_cached is None
or self._cos_cached.device != device
or self._cos_cached.dtype != dtype
):
self._seq_len_cached = seqlen
t = torch.arange(seqlen, device=device, dtype=self.short_inv_freq.dtype)
short_freqs = torch.outer(
t[: self.original_max_position_embeddings],
self.short_inv_freq.to(device=t.device),
)
long_freqs = torch.outer(
t[self.original_max_position_embeddings :],
self.long_inv_freq.to(device=t.device),
)
freqs = torch.cat([short_freqs, long_freqs])
self._cos_cached = (torch.cos(freqs) * self.scaling_factor).to(dtype)
self._sin_cached = (torch.sin(freqs) * self.scaling_factor).to(dtype)
class Phi3LongRoPEScaledRotaryEmbedding(PositionRotaryEmbedding):
def __init__(
self,
short_inv_freq: torch.Tensor,
long_inv_freq: torch.Tensor,
max_position_embeddings: int,
short_mscale: float,
long_mscale: float,
original_max_position_embeddings: int,
):
super(PositionRotaryEmbedding, self).__init__()
self.short_inv_freq = short_inv_freq
self.long_inv_freq = long_inv_freq
self.max_position_embeddings = max_position_embeddings
self.short_mscale = short_mscale
self.long_mscale = long_mscale
self.original_max_position_embeddings = original_max_position_embeddings
# cache
self._seq_len_cached = 0
self._cos_cached = None
self._sin_cached = None
self._cos_k_cached = None
self._sin_k_cached = None
self.dynamic_args = None
def _update_cos_sin_cache(self, dtype, device, seqlen):
if (
seqlen > self._seq_len_cached
or self._cos_cached is None
or self._cos_cached.device != device
or self._cos_cached.dtype != dtype
):
self._seq_len_cached = seqlen
t = torch.arange(seqlen, device=device, dtype=self.short_inv_freq.dtype)
short_freqs = torch.outer(
t[: self.original_max_position_embeddings],
self.short_inv_freq.to(device=t.device),
)
long_freqs = torch.outer(
t[self.original_max_position_embeddings :],
self.long_inv_freq.to(device=t.device),
)
short_freqs = short_freqs * self.short_mscale
long_freqs = long_freqs * self.long_mscale
freqs = torch.empty((seqlen, short_freqs.shape[1]), device=device)
freqs[: self.original_max_position_embeddings] = short_freqs
freqs[self.original_max_position_embeddings :] = long_freqs
self._cos_cached = torch.cos(freqs).to(dtype)
self._sin_cached = torch.sin(freqs).to(dtype)
class DynamicPositionRotaryEmbedding(PositionRotaryEmbedding):
def __init__(self, dim, max_position_embeddings, base, device, scaling_factor):
inv_freq = _create_inv_freq(dim, base, device)
super().__init__(inv_freq, scaling_factor)
self.dim = dim
self.max_position_embeddings = max_position_embeddings
self.base = base
def _update_cos_sin_cache(self, dtype, device, seqlen):
# Reset the tables if the sequence length has changed,
# or if we're on a new device (possibly due to tracing for instance)
if (
seqlen > self._seq_len_cached
or self._cos_cached.device != device
or self._cos_cached.dtype != dtype
):
if seqlen > self.max_position_embeddings:
newbase = self.base * (
(self.scaling_factor * seqlen / self.max_position_embeddings)
- (self.scaling_factor - 1)
) ** (self.dim / (self.dim - 2))
self.inv_freq = _create_inv_freq(
self.dim, newbase, self.inv_freq.device
)
self._seq_len_cached = seqlen
t = torch.arange(seqlen, device=device, dtype=self.inv_freq.dtype)
# Don't do einsum, it converts fp32 to fp16
# freqs = torch.einsum("i,j->ij", t, self.inv_freq)
freqs = torch.outer(t, self.inv_freq.to(device=t.device))
self._cos_cached = torch.cos(freqs).to(dtype)
self._sin_cached = torch.sin(freqs).to(dtype)
def find_correction_dim(num_rotations, dim, base=10000, max_position_embeddings=2048):
return (dim * math.log(max_position_embeddings / (num_rotations * 2 * math.pi))) / (
2 * math.log(base)
)
# Find dim range bounds based on rotations
def find_correction_range(
low_rot, high_rot, dim, base=10000, max_position_embeddings=2048
):
low = math.floor(find_correction_dim(low_rot, dim, base, max_position_embeddings))
high = math.ceil(find_correction_dim(high_rot, dim, base, max_position_embeddings))
return max(low, 0), min(high, dim - 1) # Clamp values just in case
def linear_ramp_mask(min, max, dim):
if min == max:
max += 0.001 # Prevent singularity
linear_func = (torch.arange(dim, dtype=torch.float32) - min) / (max - min)
ramp_func = torch.clamp(linear_func, 0, 1)
return ramp_func
def get_mscale(scale: float = 1.0, mscale: float = 1.0):
if scale <= 1:
return 1.0
return 0.1 * mscale * math.log(scale) + 1.0
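# Illustrative sketch (not part of the original file) of how the three YaRN
# helpers above combine: with dim=128, base=10000 and a 2048-token pretraining
# window, find_correction_range picks the band of rotary dimensions that
# linear_ramp_mask blends between interpolated and extrapolated frequencies,
# while get_mscale gives the attention magnitude correction for a given scale.
def _demo_yarn_helpers():
    low, high = find_correction_range(32, 1, dim=128, base=10000, max_position_embeddings=2048)
    # 1.0 entries keep the original (extrapolated) frequency, 0.0 use interpolation
    blend = 1 - linear_ramp_mask(low, high, 128 // 2)  # shape (64,)
    return low, high, blend.shape, get_mscale(scale=4.0)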
class YarnPositionRotaryEmbedding(PositionRotaryEmbedding):
def __init__(
self,
dim,
max_position_embeddings,
base,
device,
scaling_factor,
*,
extrapolation_factor,
attn_factor,
beta_fast,
beta_slow,
mscale: float,
mscale_all_dim: float,
):
inv_freq = _create_inv_freq(dim, base, device)
super().__init__(inv_freq, scaling_factor)
self.dim = dim
self.max_position_embeddings = max_position_embeddings
self.base = base
self.extrapolation_factor = extrapolation_factor
self.attn_factor = attn_factor
self.beta_fast = beta_fast
self.beta_slow = beta_slow
self.mscale_all_dim = mscale_all_dim
self.scaling_factor = scaling_factor
self.mscale = float(
get_mscale(self.scaling_factor, mscale)
/ get_mscale(self.scaling_factor, mscale_all_dim)
* self.attn_factor
) # Get n-d magnitude scaling corrected for interpolation
def _update_cos_sin_cache(self, dtype, device, seqlen):
# Reset the tables if the sequence length has changed,
# or if we're on a new device (possibly due to tracing for instance)
if (
seqlen > self._seq_len_cached
or self._cos_cached.device != device
or self._cos_cached.dtype != dtype
):
if seqlen > self.max_position_embeddings or True:
inv_freq_extrapolation = _create_inv_freq(
self.dim, self.base, self.inv_freq.device
)
freqs = 1.0 / inv_freq_extrapolation
inv_freq_interpolation = 1.0 / (self.scaling_factor * freqs)
low, high = find_correction_range(
self.beta_fast,
self.beta_slow,
self.dim,
self.base,
self.max_position_embeddings,
)
inv_freq_mask = (
1 - linear_ramp_mask(low, high, self.dim // 2).float().to(device)
) * self.extrapolation_factor # Get n-d rotational scaling corrected for extrapolation
inv_freq = (
inv_freq_interpolation * (1 - inv_freq_mask)
+ inv_freq_extrapolation * inv_freq_mask
)
self.inv_freq = inv_freq
self._seq_len_cached = seqlen
t = torch.arange(seqlen, device=device, dtype=self.inv_freq.dtype)
# Don't do einsum, it converts fp32 to fp16
# freqs = torch.einsum("i,j->ij", t, self.inv_freq)
freqs = torch.outer(t, self.inv_freq.to(device=t.device))
self._cos_cached = (torch.cos(freqs) * self.mscale).to(dtype)
self._sin_cached = (torch.sin(freqs) * self.mscale).to(dtype)
def apply_llama3_scaling(
freqs: torch.Tensor,
*,
scaling_factor: int,
low_freq_factor: int,
high_freq_factor: int,
original_max_position_embeddings: int,
):
low_freq_wavelen = original_max_position_embeddings / low_freq_factor
high_freq_wavelen = original_max_position_embeddings / high_freq_factor
new_freqs = []
for freq in freqs:
wavelen = 2 * math.pi / freq
if wavelen < high_freq_wavelen:
new_freqs.append(freq)
elif wavelen > low_freq_wavelen:
new_freqs.append(freq / scaling_factor)
else:
assert low_freq_wavelen != high_freq_wavelen
smooth = (original_max_position_embeddings / wavelen - low_freq_factor) / (
high_freq_factor - low_freq_factor
)
new_freqs.append((1 - smooth) * freq / scaling_factor + smooth * freq)
return torch.tensor(new_freqs, dtype=freqs.dtype, device=freqs.device)
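# Illustrative sketch (not part of the original file): applying the Llama 3
# style frequency scaling to a freshly created inverse-frequency vector. The
# factor values mirror the published Llama 3.1 rope_scaling config and are
# used here purely as example inputs.
def _demo_llama3_scaling():
    inv_freq = _create_inv_freq(dim=128, base=500000.0, device="cpu")
    scaled = apply_llama3_scaling(
        inv_freq,
        scaling_factor=8,
        low_freq_factor=1,
        high_freq_factor=4,
        original_max_position_embeddings=8192,
    )
    return scaled.shape  # torch.Size([64]), same as inv_freq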
class RotaryPositionEmbeddingMultimodalSections(PositionRotaryEmbedding):
def __init__(self, inv_freq: torch.Tensor, scaling_factor: float, sections: list):
super().__init__(inv_freq, scaling_factor)
self.sections = sections
self._cos_cached = None
self._sin_cached = None
self.section_indices = (
torch.arange(len(self.sections))
.repeat_interleave(torch.tensor(self.sections))
.view(1, 1, -1)
.to(inv_freq.device)
)
def forward(
self,
query: torch.Tensor,
key: torch.Tensor,
cos: torch.Tensor,
sin: torch.Tensor,
):
# rotate half the sequence length
rot = cos.shape[-1] // 2
q2 = torch.cat([-query[..., rot:], query[..., :rot]], dim=-1)
k2 = torch.cat([-key[..., rot:], key[..., :rot]], dim=-1)
# apply the rotation
rotary_emb.apply_rotary(query, q2, cos, sin, query, q2, True)
rotary_emb.apply_rotary(key, k2, cos, sin, key, k2, True)
def _update_cos_sin_cache(
self, dtype: torch.dtype, device: torch.device, seqlen: int
):
# always cache the cos/sin for the full sequence length to avoid
# recomputing if the sequence length is smaller than the cached one
if (
seqlen > self._seq_len_cached
or self._cos_cached.device != device
or self._cos_cached.dtype != dtype
):
self._seq_len_cached = seqlen
t = torch.arange(seqlen, device=device, dtype=self.inv_freq.dtype)
freqs = torch.outer(t, self.inv_freq.to(device=t.device))
self._cos_cached = torch.cos(freqs).to(dtype)
self._sin_cached = torch.sin(freqs).to(dtype)
self._sections = self.section_indices.expand(seqlen, -1, -1)
def get_cos_sin(
self,
position_ids: torch.Tensor,
max_s: int,
dtype: torch.dtype,
):
self._update_cos_sin_cache(dtype, position_ids.device, max_s)
slen = position_ids.shape[0]
cos = self._cos_cached[position_ids].gather(1, self._sections[:slen])
sin = self._sin_cached[position_ids].gather(1, self._sections[:slen])
cos = torch.cat([cos, cos], dim=-1)
sin = torch.cat([sin, sin], dim=-1)
return cos, sin
| text-generation-inference/server/text_generation_server/layers/rotary.py/0 | {
"file_path": "text-generation-inference/server/text_generation_server/layers/rotary.py",
"repo_id": "text-generation-inference",
"token_count": 12738
} |
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
import torch.distributed
from torch import nn
from transformers.activations import ACT2FN
from typing import Optional, List, Tuple
from text_generation_server.layers.attention.kv_cache import get_kv_scales
from text_generation_server.utils.import_utils import SYSTEM
from text_generation_server.layers.attention import (
paged_attention,
attention,
Seqlen,
)
from text_generation_server.layers import (
TensorParallelRowLinear,
TensorParallelColumnLinear,
TensorParallelEmbedding,
SpeculativeHead,
get_linear,
)
from text_generation_server.layers.rotary import (
PositionRotaryEmbedding,
)
from text_generation_server.layers.layernorm import (
FastLayerNorm,
)
def load_attention(config, prefix: str, weights):
return TensorParallelColumnLinear.load_multi(
config,
prefixes=[f"{prefix}.q_proj", f"{prefix}.k_proj", f"{prefix}.v_proj"],
dim=0,
weights=weights,
bias=False,
)
def load_row(config, prefix: str, weights, bias: bool):
weight = weights.get_weights_row(prefix)
if bias and weights.process_group.rank() == 0:
        # The bias is only added on the first rank, to avoid adding it once per shard in row-parallel layers
bias = weights.get_tensor(f"{prefix}.bias")
else:
bias = None
linear = get_linear(weight, bias)
return TensorParallelRowLinear(linear, process_group=weights.process_group)
class GPTJRotary(PositionRotaryEmbedding):
def forward(
self,
query: torch.Tensor,
key: torch.Tensor,
cos: torch.Tensor,
sin: torch.Tensor,
):
        # Such control flow may add some overhead.
if SYSTEM == "cuda":
import rotary_emb
q1 = query[..., ::2]
q2 = query[..., 1::2]
rotary_emb.apply_rotary(q1, q2, cos, sin, q1, q2, False)
k1 = key[..., ::2]
k2 = key[..., 1::2]
rotary_emb.apply_rotary(k1, k2, cos, sin, k1, k2, False)
elif SYSTEM == "rocm":
import vllm._custom_ops as ops
            # NOTE: On RoCm systems, we use a RoPE implementation adapted from vLLM, which launches a single kernel for both query/key, contrary to the flash-attn implementation used on NVIDIA systems.
            # Compiling flash-attn rotary on RoCm, it appears hipcc is unable to unroll loops, resulting in even slower inference compared to eager: https://github.com/pytorch/pytorch/issues/113773
head_size = query.shape[-1]
# Inplace operation, updating query and key.
ops.rotary_embedding(query, key, head_size, cos, sin, False)
elif SYSTEM == "ipex":
import intel_extension_for_pytorch as ipex
ipex.llm.functional.rotary_embedding(
query, key, sin, cos, query.size(-1), False
)
else:
raise ValueError(
"Your system seem to be not supported. Please check your install or open an issue at https://github.com/huggingface/text-generation-inference/issues with a clear reproduction."
)
class FlashGPTJAttention(torch.nn.Module):
def __init__(
self,
prefix: str,
config,
weights,
):
super().__init__()
self.num_heads = config.num_attention_heads
self.hidden_size = config.hidden_size
self.head_size = self.hidden_size // self.num_heads
self.softmax_scale = self.head_size**-0.5
self.rotary_dim = config.rotary_dim
if self.num_heads % weights.process_group.size() != 0:
raise ValueError(
f"`num_heads` must be divisible by `num_shards` (got `num_heads`: {self.num_heads} "
f"and `num_shards`: {weights.process_group.size()}"
)
self.num_heads = self.num_heads // weights.process_group.size()
self.query_key_value = load_attention(
config,
prefix=prefix,
weights=weights,
)
self.kv_scales = get_kv_scales(weights, f"{prefix}")
self.o_proj = load_row(
config,
prefix=f"{prefix}.out_proj",
weights=weights,
bias=False,
)
self.kv_head_mapping = torch.arange(
0, self.num_heads, dtype=torch.int32, device=weights.device
)
self.rotary_emb = GPTJRotary.static(
config=config,
dim=self.rotary_dim,
base=10000,
device=weights.device,
)
def forward(
self,
hidden_states,
cos,
sin,
cu_seqlen_prefill,
kv_cache,
block_tables,
slots,
seqlen,
max_s,
):
query, key, value = self.query_key_value(hidden_states).split(
self.head_size * self.num_heads, dim=1
)
query = query.view(-1, self.num_heads, self.head_size)
key = key.view(-1, self.num_heads, self.head_size)
value = value.view(-1, self.num_heads, self.head_size)
# Compute rotary embeddings on rotary_ndims
if self.rotary_dim is not None:
self.rotary_emb(
query[..., : self.rotary_dim], key[..., : self.rotary_dim], cos, sin
)
else:
self.rotary_emb(query, key, cos, sin)
kv_cache.store(
key=key,
value=value,
slots=slots,
kv_scales=self.kv_scales,
)
# Prefill
if cu_seqlen_prefill is not None:
# flash attention
attn_output = attention(
query=query,
key=key,
value=value,
kv_cache=kv_cache,
kv_scales=self.kv_scales,
seqlen=seqlen,
block_tables=block_tables,
softmax_scale=self.softmax_scale,
)
# Decode
else:
attn_output = paged_attention(
query,
kv_cache,
self.kv_head_mapping,
self.softmax_scale,
block_tables,
seqlen,
max_s,
kv_scales=self.kv_scales,
)
return self.o_proj(attn_output.view(-1, self.num_heads * self.head_size))
class GPTJMLP(nn.Module):
def __init__(self, prefix: str, config, weights):
super().__init__()
act = config.activation_function
self.act = (
ACT2FN[act]
if "gelu" not in act
else lambda x: torch.nn.functional.gelu(
x,
approximate=(
"tanh" if act in ["gelu_fast", "gelu_pytorch_tanh"] else "none"
),
)
)
self.fc_in = TensorParallelColumnLinear.load(
config, prefix=f"{prefix}.fc_in", weights=weights, bias=True
)
self.fc_out = load_row(
config,
prefix=f"{prefix}.fc_out",
weights=weights,
bias=True,
)
def forward(self, hidden_states):
hidden_states = self.fc_in(hidden_states)
hidden_states = self.act(hidden_states)
return self.fc_out(hidden_states)
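# Illustrative check (not part of the original file) of the activation
# selection above: "gelu_fast" and "gelu_pytorch_tanh" map to the tanh
# approximation of GELU, while plain "gelu" uses the exact form; the two
# differ only slightly.
def _demo_gelu_variants():
    x = torch.randn(8)
    exact = torch.nn.functional.gelu(x, approximate="none")
    tanh_approx = torch.nn.functional.gelu(x, approximate="tanh")
    return torch.allclose(exact, tanh_approx, atol=1e-2)  # True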
class FlashGPTJLayer(nn.Module):
def __init__(self, prefix: str, config, weights):
super().__init__()
self.self_attn = FlashGPTJAttention(
prefix=f"{prefix}.attn", config=config, weights=weights
)
self.mlp = GPTJMLP(prefix=f"{prefix}.mlp", config=config, weights=weights)
self.input_layernorm = FastLayerNorm.load(
prefix=f"{prefix}.ln_1", weights=weights, eps=config.layer_norm_epsilon
)
def forward(
self,
hidden_states,
residual,
cos,
sin,
cu_seqlen_prefill,
kv_cache,
block_tables,
slots,
seqlen,
max_s,
):
hidden_states, residual = self.input_layernorm(hidden_states, residual)
# Self Attention
attn_output = self.self_attn(
hidden_states,
cos,
sin,
cu_seqlen_prefill,
kv_cache,
block_tables,
slots,
seqlen,
max_s,
)
feed_forward_hidden_states = self.mlp(hidden_states)
return attn_output + feed_forward_hidden_states, residual
class FlashGPTJModel(torch.nn.Module):
def __init__(self, prefix: str, config, weights):
super().__init__()
self.config = config
self.wte = TensorParallelEmbedding(prefix=f"{prefix}.wte", weights=weights)
self.layers = nn.ModuleList(
[
FlashGPTJLayer(
prefix=(
f"h.{layer_id}" if not prefix else f"{prefix}.h.{layer_id}"
),
config=config,
weights=weights,
)
for layer_id in range(config.num_hidden_layers)
]
)
self.ln_f = FastLayerNorm.load(
prefix="ln_f" if not prefix else f"{prefix}.ln_f",
weights=weights,
eps=config.layer_norm_epsilon,
)
self.gradient_checkpointing = False
self.head_size = self.layers[0].self_attn.head_size
self.num_heads = self.layers[0].self_attn.num_heads
def forward(
self,
input_ids: Optional[torch.LongTensor],
position_ids: torch.Tensor,
cu_seqlen_prefill: Optional[torch.Tensor],
kv_cache: List[Tuple[torch.Tensor, torch.Tensor]],
block_tables: torch.Tensor,
slots: torch.Tensor,
seqlen: Seqlen,
max_s: int,
prefill_cache_indices: Optional[torch.Tensor],
) -> torch.Tensor:
hidden_states = self.wte(input_ids)
# Get rotary cos and sin for this forward
# Avoid to index in each layer
cos, sin = self.layers[0].self_attn.rotary_emb.get_cos_sin(
position_ids, max_s, hidden_states.dtype
)
residual = None
for i, layer in enumerate(self.layers):
hidden_states, residual = layer(
hidden_states,
residual,
cos,
sin,
cu_seqlen_prefill,
kv_cache[i],
block_tables,
slots,
seqlen,
max_s,
)
hidden_states, _ = self.ln_f(hidden_states, residual)
return hidden_states
class FlashGPTJForCausalLM(torch.nn.Module):
def __init__(self, prefix: str, config, weights):
super().__init__()
if not prefix:
prefix = "transformer"
else:
prefix = f"{prefix}.transformer"
self.model = FlashGPTJModel(prefix, config, weights)
self.lm_head = SpeculativeHead.load(
config,
prefix="lm_head",
weights=weights,
)
def forward(
self,
input_ids: torch.Tensor,
position_ids: torch.Tensor,
cu_seqlen_prefill: Optional[torch.Tensor],
kv_cache: List[Tuple[torch.Tensor, torch.Tensor]],
block_tables: torch.Tensor,
slots: torch.Tensor,
seqlen: Seqlen,
max_s: int,
prefill_cache_indices: Optional[torch.Tensor] = None,
lm_head_indices: Optional[torch.Tensor] = None,
adapter_data: Optional[torch.Tensor] = None,
) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
hidden_states = self.model(
input_ids,
position_ids,
cu_seqlen_prefill,
kv_cache,
block_tables,
slots,
seqlen,
max_s,
prefill_cache_indices=prefill_cache_indices,
)
if lm_head_indices is not None:
hidden_states = hidden_states[lm_head_indices]
logits, speculative_logits = self.lm_head(hidden_states)
return logits, speculative_logits
| text-generation-inference/server/text_generation_server/models/custom_modeling/flash_gptj_modeling.py/0 | {
"file_path": "text-generation-inference/server/text_generation_server/models/custom_modeling/flash_gptj_modeling.py",
"repo_id": "text-generation-inference",
"token_count": 6378
} |
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" PyTorch Idefics model."""
from typing import List, Optional, Tuple, Union
import torch
import torch.utils.checkpoint
from torch import nn
from transformers import PreTrainedModel
from transformers.activations import ACT2FN
from dataclasses import dataclass
from transformers.modeling_outputs import (
    BaseModelOutputWithPast,
    CausalLMOutputWithPast,
)
from text_generation_server.models.custom_modeling.idefics_config import IdeficsConfig
from text_generation_server.models.custom_modeling.idefics_vision import (
IdeficsVisionTransformer,
)
from text_generation_server.models.custom_modeling.idefics_perceiver import (
IdeficsPerceiverResampler,
)
from text_generation_server.layers import (
TensorParallelColumnLinear,
TensorParallelEmbedding,
TensorParallelRowLinear,
SpeculativeHead,
FastLinear,
)
from text_generation_server.layers.rotary import PositionRotaryEmbedding
from text_generation_server.utils.import_utils import SYSTEM
from loguru import logger
if SYSTEM == "cuda":
import dropout_layer_norm
elif SYSTEM == "rocm":
import vllm._custom_ops as ops
else:
dropout_layer_norm = None
@dataclass
class BaseModelOutputWithPastImage(BaseModelOutputWithPast):
image_hidden_states: Optional[torch.FloatTensor] = None
@dataclass
class CausalLMOutputWithPastImage(CausalLMOutputWithPast):
image_hidden_states: Optional[torch.FloatTensor] = None
# logger = logging.get_logger(__name__)
# _CONFIG_FOR_DOC = "IdeficsConfig"
# IDEFICS_PRETRAINED_MODEL_ARCHIVE_LIST = [
# "HuggingFaceM4/idefics-9b",
# "HuggingFaceM4/idefics-80b",
# # See all Idefics models at https://huggingface.co/models?filter=idefics
# ]
def expand_inputs_for_generation(
input_ids,
expand_size=1,
is_encoder_decoder=False,
attention_mask=None,
encoder_outputs=None,
**model_kwargs,
):
expanded_return_idx = (
torch.arange(input_ids.shape[0])
.view(-1, 1)
.repeat(1, expand_size)
.view(-1)
.to(input_ids.device)
)
input_ids = input_ids.index_select(0, expanded_return_idx)
if "token_type_ids" in model_kwargs:
token_type_ids = model_kwargs["token_type_ids"]
model_kwargs["token_type_ids"] = token_type_ids.index_select(
0, expanded_return_idx
)
if attention_mask is not None:
model_kwargs["attention_mask"] = attention_mask.index_select(
0, expanded_return_idx
)
model_kwargs["image_attention_mask"] = model_kwargs[
"image_attention_mask"
].index_select(0, expanded_return_idx)
model_kwargs["pixel_values"] = model_kwargs["pixel_values"].index_select(
0, expanded_return_idx
)
if is_encoder_decoder:
if encoder_outputs is None:
raise ValueError(
"If `is_encoder_decoder` is True, make sure that `encoder_outputs` is defined."
)
encoder_outputs["last_hidden_state"] = (
encoder_outputs.last_hidden_state.index_select(
0, expanded_return_idx.to(encoder_outputs.last_hidden_state.device)
)
)
model_kwargs["encoder_outputs"] = encoder_outputs
return input_ids, model_kwargs
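# Illustrative example (not part of the original file) of the row-duplication
# index built above: with expand_size=2 every batch row is repeated in place,
# as used when generating several return sequences per prompt.
def _demo_expanded_return_idx():
    input_ids = torch.tensor([[1, 2], [3, 4]])
    idx = torch.arange(input_ids.shape[0]).view(-1, 1).repeat(1, 2).view(-1)
    return input_ids.index_select(0, idx)  # rows: [1, 2], [1, 2], [3, 4], [3, 4]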
def update_model_kwargs_for_generation(outputs, model_kwargs, is_encoder_decoder=False):
# must have this key set to at least None
model_kwargs["past_key_values"] = model_kwargs.get("past_key_values", None)
# update past
if "past_key_values" in outputs:
model_kwargs["past"] = outputs.past_key_values
elif "mems" in outputs:
model_kwargs["past"] = outputs.mems
elif "past_buckets_states" in outputs:
model_kwargs["past"] = outputs.past_buckets_states
else:
model_kwargs["past"] = None
# update token_type_ids with last value
if "token_type_ids" in model_kwargs:
token_type_ids = model_kwargs["token_type_ids"]
model_kwargs["token_type_ids"] = torch.cat(
[token_type_ids, token_type_ids[:, -1].unsqueeze(-1)], dim=-1
)
# update attention masks
if not is_encoder_decoder:
if "attention_mask" in model_kwargs:
attention_mask = model_kwargs["attention_mask"]
model_kwargs["attention_mask"] = torch.cat(
[attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))],
dim=-1,
)
if "image_attention_mask" in model_kwargs:
image_attention_mask = model_kwargs["image_attention_mask"]
last_mask = image_attention_mask[:, -1, :].unsqueeze(1)
model_kwargs["image_attention_mask"] = last_mask
return model_kwargs
def prepare_inputs_for_generation(input_ids, past=None, **kwargs):
token_type_ids = kwargs.get("token_type_ids", None)
# only last token for inputs_ids if past is defined in kwargs
if past:
input_ids = input_ids[:, -1].unsqueeze(-1)
if token_type_ids is not None:
token_type_ids = token_type_ids[:, -1].unsqueeze(-1)
attention_mask = kwargs.get("attention_mask", None)
position_ids = kwargs.get("position_ids", None)
if attention_mask is not None and position_ids is None:
# create position_ids on the fly for batch generation
position_ids = attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(attention_mask == 0, 1)
if past:
position_ids = position_ids[:, -1].unsqueeze(-1)
pixel_values = kwargs.get("pixel_values", None)
image_attention_mask = kwargs.get("image_attention_mask", None)
# if pixel_values is None or image_attention_mask is None:
# raise ValueError("pixel values and image attention mask cannot be None")
return {
"input_ids": input_ids,
"past_key_values": past,
"use_cache": kwargs.get("use_cache"),
"position_ids": position_ids,
"attention_mask": attention_mask,
"token_type_ids": token_type_ids,
"pixel_values": pixel_values,
"image_attention_mask": image_attention_mask,
}
def freeze_model(model, module_exceptions=[]):
mapping = {
"LayerNorm": nn.LayerNorm,
"Linear": nn.Linear,
"Embedding": nn.Embedding,
}
module_exceptions_mapped = [mapping[m] for m in module_exceptions]
for module in model.modules():
if module_exceptions and any(
[isinstance(module, t) for t in module_exceptions_mapped]
):
            module.requires_grad_(True)  # Explicitly set to True to avoid any mistakes
else:
module.requires_grad_(False)
return model
class IdeficsDecoupledPartialTPEmbedding(nn.Module):
def __init__(
self,
config,
weights,
):
super().__init__()
self.num_embeddings = config.vocab_size
self.weight = TensorParallelEmbedding(
prefix="model.embed_tokens", weights=weights
)
self.additional_weight = nn.Parameter(
weights.get_tensor("model.embed_tokens.additional_embedding.weight")
)
def forward(self, input_ids):
# Clone so that we don't modify the original input_ids later on
input_ids = input_ids.clone()
additional_vocab_indices = torch.where(input_ids >= self.num_embeddings)
input_ids_additional_vocab = input_ids[additional_vocab_indices]
additional_embeddings = torch.nn.functional.embedding(
input_ids_additional_vocab - self.num_embeddings, self.additional_weight
)
        # Replace the out-of-vocabulary ids with 0 so the base lookup succeeds; those rows are overwritten below anyway
input_ids[additional_vocab_indices] = 0
full_vector = self.weight(input_ids)
# overwrite the records with high indices
full_vector[additional_vocab_indices] = additional_embeddings
return full_vector
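# Illustrative sketch (not part of the original file) of the decoupled
# embedding trick above, with plain nn.Embedding standing in for the
# tensor-parallel table: ids at or above the base vocabulary size are looked
# up in a small additional table and written over the placeholder rows.
def _demo_decoupled_embedding():
    base = nn.Embedding(10, 4)
    extra = nn.Embedding(2, 4)  # covers ids 10 and 11
    input_ids = torch.tensor([[1, 10, 3, 11]])
    ids = input_ids.clone()
    extra_idx = torch.where(ids >= 10)
    extra_vecs = extra(ids[extra_idx] - 10)
    ids[extra_idx] = 0          # safe placeholder for the base lookup
    out = base(ids)
    out[extra_idx] = extra_vecs  # overwrite placeholder rows
    return out.shape  # torch.Size([1, 4, 4])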
class IdeficsDecoupledTensorParallelLinear(nn.Module):
# Derived from https://pytorch.org/docs/stable/_modules/torch/nn/modules/linear.html#Linear
"""
    Implements a decoupling of parameters to allow freezing (or not) a subset of the parameters. In practice, the
regular `weight` can be trained or frozen (i.e. `partially_freeze=True`), and if `out_additional_features` > 0,
then it will create `out_additional_features * in_features` additional parameters that are always trained. If
`out_additional_features=0`, then the module defaults back to the regular behavior of `nn.Linear`.
"""
def __init__(
self,
config,
weights,
) -> None:
super().__init__()
self.fc = SpeculativeHead.load(config=config, prefix="lm_head", weights=weights)
self.additional_fc = FastLinear.load(
config=config,
prefix="lm_head.additional_fc",
weights=weights,
bias=False,
)
def forward(self, input: torch.Tensor) -> torch.Tensor:
output, speculative_logits = self.fc(input)
additional_features = self.additional_fc(input)
output = torch.cat((output, additional_features), -1)
return output, speculative_logits
def extra_repr(self) -> str:
"""Overwriting `nn.Linear.extra_repr` to include new parameters."""
return "in_features={}, out_features={}, out_additional_features={}, bias={}, partially_freeze={}".format(
self.in_features,
self.out_features,
self.out_additional_features,
self.bias is not None,
self.partially_freeze,
)
# Copied from transformers.models.bart.modeling_bart._make_causal_mask
def _make_causal_mask(
input_ids_shape: torch.Size,
dtype: torch.dtype,
device: torch.device,
past_key_values_length: int = 0,
):
"""
Make causal mask used for bi-directional self-attention.
"""
bsz, tgt_len = input_ids_shape
mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min, device=device)
mask_cond = torch.arange(mask.size(-1), device=device)
mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
mask = mask.to(dtype)
if past_key_values_length > 0:
mask = torch.cat(
[
torch.zeros(
tgt_len, past_key_values_length, dtype=dtype, device=device
),
mask,
],
dim=-1,
)
return mask[None, None, :, :].expand(
bsz, 1, tgt_len, tgt_len + past_key_values_length
)
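# Illustrative example (not part of the original file): a causal mask for a
# 3-token target with two cached past positions; past columns stay fully
# visible while future positions are filled with the dtype minimum.
def _demo_causal_mask():
    mask = _make_causal_mask(
        torch.Size([1, 3]),
        dtype=torch.float32,
        device=torch.device("cpu"),
        past_key_values_length=2,
    )
    return mask.shape  # torch.Size([1, 1, 3, 5])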
def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
"""
Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
"""
bsz, src_len = mask.size()
tgt_len = tgt_len if tgt_len is not None else src_len
expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
inverted_mask = 1.0 - expanded_mask
return inverted_mask.masked_fill(
inverted_mask.to(torch.bool), torch.finfo(dtype).min
)
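# Illustrative example (not part of the original file): expanding a
# [bsz, seq_len] padding mask into the [bsz, 1, tgt_len, src_len] additive
# form; the padded column becomes a large negative value.
def _demo_expand_mask():
    mask = torch.tensor([[1, 1, 0]])
    additive = _expand_mask(mask, torch.float32)
    return additive.shape  # torch.Size([1, 1, 3, 3])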
class IdeficsRMSNorm(nn.Module):
def __init__(self, prefix, weights, eps=1e-6):
"""
LlamaRMSNorm is equivalent to T5LayerNorm
"""
super().__init__()
weight = weights.get_tensor(f"{prefix}.weight")
self.weight = nn.Parameter(weight)
self.variance_epsilon = eps
def forward(self, hidden_states, residual=None):
if SYSTEM == "ipex":
import intel_extension_for_pytorch as ipex
out = ipex.llm.functional.add_rms_norm(
residual,
hidden_states,
self.weight,
None,
self.variance_epsilon,
residual is not None,
)
return out
elif hidden_states.shape[-1] > 8192:
if residual is not None:
hidden_states += residual
residual = hidden_states
hidden_states = hidden_states.to(torch.float32)
variance = hidden_states.pow(2).mean(-1, keepdim=True)
hidden_states = hidden_states * torch.rsqrt(
variance + self.variance_epsilon
)
# convert into half-precision if necessary
if self.weight.dtype in [torch.float16, torch.bfloat16]:
hidden_states = hidden_states.to(self.weight.dtype)
return self.weight * hidden_states
elif SYSTEM == "cuda":
# faster post attention rms norm
unwrap = False
if len(hidden_states.shape) > 2:
unwrap = True
shape = hidden_states.shape
hidden_states = hidden_states.reshape(-1, shape[-1])
normed_hidden_states, res, *rest = dropout_layer_norm.dropout_add_ln_fwd(
hidden_states,
residual,
self.weight,
None,
None,
None,
None,
None,
0.0,
self.variance_epsilon,
1.0,
0,
None,
False,
True, # Activate RMSNorm
)
if res is None:
res = hidden_states
if unwrap:
normed_hidden_states = normed_hidden_states.view(*shape)
return normed_hidden_states
elif SYSTEM == "rocm":
# We use VLLM RMSNorm kernel that can be compiled for RoCm, instead of Flash Attention ones that can not.
if residual is not None:
hidden_states += residual
residual = hidden_states
unwrap = False
if len(hidden_states.shape) > 2:
unwrap = True
shape = hidden_states.shape
hidden_states = hidden_states.reshape(-1, shape[-1])
out = torch.empty_like(hidden_states)
ops.rms_norm(
out,
hidden_states,
self.weight.data,
self.variance_epsilon,
)
if unwrap:
out = out.view(*shape)
return out
else:
raise ValueError(
"Your system seem to be not supported. Please check your install or open an issue at https://github.com/huggingface/text-generation-inference/issues with a clear reproduction."
)
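# Reference implementation (illustrative, not part of the original file) of
# the RMSNorm that the fused CUDA/ROCm/ipex paths above compute, written in
# plain PyTorch for clarity:
def _rms_norm_reference(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    variance = x.to(torch.float32).pow(2).mean(-1, keepdim=True)
    normed = x.to(torch.float32) * torch.rsqrt(variance + eps)
    return weight * normed.to(x.dtype)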
# this was adapted from LlamaMLP
class IdeficsMLP(nn.Module):
def __init__(
self,
config,
prefix,
weights,
):
super().__init__()
self.gate_up_proj = TensorParallelColumnLinear.load_multi(
config,
prefixes=[f"{prefix}.gate_proj", f"{prefix}.up_proj"],
weights=weights,
dim=0,
bias=False,
)
self.down_proj = TensorParallelRowLinear.load(
config,
prefix=f"{prefix}.down_proj",
weights=weights,
bias=False,
)
self.act_fn = ACT2FN[config.hidden_act]
def forward(self, hidden_states):
gate_up_states = self.gate_up_proj(hidden_states)
shape = gate_up_states.shape
gate_up_states = gate_up_states.view(*shape[:-1], 2, shape[-1] // 2)
return self.down_proj(
self.act_fn(gate_up_states[:, :, 0]) * gate_up_states[:, :, 1]
)
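# Illustrative sketch (not part of the original file) of the fused gate/up
# split above: one matmul produces 2 * intermediate features, which are viewed
# as (..., 2, intermediate) so the activated gate half multiplies the up half.
def _demo_gate_up_split():
    intermediate = 16
    fused = torch.randn(4, 2 * intermediate)  # stands in for gate_up_proj output
    gate_up = fused.view(4, 2, intermediate)
    out = torch.nn.functional.silu(gate_up[:, 0]) * gate_up[:, 1]
    return out.shape  # torch.Size([4, 16])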
# this was adapted from LlamaAttention
class IdeficsAttention(nn.Module):
"""Multi-headed attention from 'Attention Is All You Need' paper"""
def __init__(
self,
config,
prefix,
weights,
qk_layer_norms: bool = False,
is_cross_attention: bool = False,
):
super().__init__()
self.hidden_size = config.hidden_size
self.num_heads = config.num_attention_heads
self.head_dim = self.hidden_size // self.num_heads
self.dropout = config.dropout
if (self.head_dim * self.num_heads) != self.hidden_size:
raise ValueError(
f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
f" and `num_heads`: {self.num_heads})."
)
self.is_cross_attention = is_cross_attention
# if not hasattr(nn.functional, "scaled_dot_product_attention"):
# raise ValueError("this model requires pytorch 2.0 or higher")
if self.num_heads % weights.process_group.size() != 0:
raise ValueError(
f"`num_heads` must be divisible by `num_shards` (got `num_heads`: {self.num_heads} "
f"and `num_shards`: {weights.process_group.size()}"
)
self.num_heads //= weights.process_group.size()
if self.is_cross_attention:
# kv_input_dim = (
# self.hidden_size if not hasattr(config.vision_config, "embed_dim") else config.vision_config.embed_dim
# )
self.q_proj = TensorParallelColumnLinear.load(
config, prefix=f"{prefix}.q_proj", weights=weights, bias=False
)
self.k_proj = TensorParallelColumnLinear.load(
config, prefix=f"{prefix}.k_proj", weights=weights, bias=False
)
self.v_proj = TensorParallelColumnLinear.load(
config, prefix=f"{prefix}.v_proj", weights=weights, bias=False
)
else:
self.qkv = TensorParallelColumnLinear.load_multi(
config,
prefixes=[f"{prefix}.q_proj", f"{prefix}.k_proj", f"{prefix}.v_proj"],
dim=0,
weights=weights,
bias=False,
)
self.o_proj = TensorParallelRowLinear.load(
config, prefix=f"{prefix}.o_proj", weights=weights, bias=False
)
self.rotary_emb = PositionRotaryEmbedding.static(
config=config, dim=self.head_dim, base=10000.0, device=weights.device
)
self.qk_layer_norms = qk_layer_norms
if self.qk_layer_norms:
self.q_layer_norm = IdeficsRMSNorm(
prefix=f"{prefix}.q_layer_norm",
weights=weights,
eps=config.rms_norm_eps,
)
self.k_layer_norm = IdeficsRMSNorm(
prefix=f"{prefix}.q_layer_norm",
weights=weights,
eps=config.rms_norm_eps,
)
def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
return (
tensor.view(bsz, seq_len, self.num_heads, self.head_dim)
.transpose(1, 2)
.contiguous()
)
def forward(
self,
hidden_states: torch.Tensor,
key_value_states: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_value: Optional[Tuple[torch.Tensor]] = None,
output_attentions: bool = False,
use_cache: bool = False,
) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
# if key_value_states are provided this layer is used as a cross-attention layer
is_cross_attention = self.is_cross_attention or key_value_states is not None
bsz, q_len, _ = hidden_states.size()
if is_cross_attention:
query_states = self.q_proj(hidden_states).view(
bsz, q_len, self.num_heads, self.head_dim
) # .transpose(1, 2)
query_states = query_states.transpose(1, 2)
            # Note that, in this case, `kv_len` == `kv_seq_len`
            _, kv_len, _ = key_value_states.size()
key_states = (
self.k_proj(key_value_states)
.view(bsz, kv_len, self.num_heads, self.head_dim)
.transpose(1, 2)
)
value_states = (
self.v_proj(key_value_states)
.view(bsz, kv_len, self.num_heads, self.head_dim)
.transpose(1, 2)
)
else:
qkv = self.qkv(hidden_states)
query_states, key_states, value_states = qkv.split(
self.num_heads * self.head_dim, dim=2
)
query_states = query_states.view(
bsz, q_len, self.num_heads, self.head_dim
) # .transpose(1, 2)
key_states = key_states.view(
bsz, q_len, self.num_heads, self.head_dim
) # . transpose(1, 2)
value_states = value_states.view(
bsz, q_len, self.num_heads, self.head_dim
) # .transpose(1, 2)
kv_seq_len = q_len
if past_key_value is not None:
kv_seq_len += past_key_value[0].shape[-2]
max_s = max(kv_seq_len, q_len)
cos, sin = self.rotary_emb.get_cos_sin(
position_ids.view(-1), max_s, hidden_states.dtype
)
query_shape = query_states.shape
key_shape = key_states.shape
self.rotary_emb(
query_states.view(-1, *query_shape[2:]),
key_states.reshape(-1, *key_shape[2:]),
cos,
sin,
)
query_states = query_states.view(query_shape)
key_states = key_states.view(key_shape)
query_states = query_states.transpose(1, 2)
key_states = key_states.transpose(1, 2)
value_states = value_states.transpose(1, 2)
kv_seq_len = key_states.shape[-2]
if past_key_value is not None:
kv_seq_len += past_key_value[0].shape[-2]
# [bsz, nh, t, hd]
if past_key_value is not None:
# reuse k, v, self_attention
key_states = torch.cat([past_key_value[0], key_states], dim=2)
value_states = torch.cat([past_key_value[1], value_states], dim=2)
past_key_value = (key_states, value_states) if use_cache else None
if self.qk_layer_norms:
query_states = self.q_layer_norm(query_states)
key_states = self.k_layer_norm(key_states)
if attention_mask is not None:
if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
raise ValueError(
f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
)
attn_output = nn.functional.scaled_dot_product_attention(
query_states,
key_states,
value_states,
attn_mask=attention_mask,
dropout_p=self.dropout,
)
if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
raise ValueError(
f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
f" {attn_output.size()}"
)
attn_output = attn_output.transpose(1, 2)
attn_output = attn_output.reshape(bsz, q_len, -1)
attn_output = self.o_proj(attn_output)
attn_weights = None
if output_attentions:
            # loguru's logger does not provide `warning_once`, so log a plain warning
            logger.warning(
                "attn_weights are not extracted in scaled_dot_product_attention. The model returns None instead"
            )
return attn_output, attn_weights, past_key_value
# this was adapted from LlamaDecoderLayer
class IdeficsDecoderLayer(nn.Module):
def __init__(self, layer_id: int, config: IdeficsConfig, weights):
super().__init__()
self.process_group = weights.process_group
self.hidden_size = config.hidden_size
prefix = f"model.layers.{layer_id}"
self.self_attn = IdeficsAttention(
config=config,
prefix=f"{prefix}.self_attn",
weights=weights,
qk_layer_norms=False,
is_cross_attention=False,
)
self.mlp = IdeficsMLP(
config=config,
prefix=f"{prefix}.mlp",
weights=weights,
)
self.input_layernorm = IdeficsRMSNorm(
prefix=f"{prefix}.input_layernorm", weights=weights, eps=config.rms_norm_eps
)
self.post_attention_layernorm = IdeficsRMSNorm(
prefix=f"{prefix}.post_attention_layernorm",
weights=weights,
eps=config.rms_norm_eps,
)
self.dropout = config.dropout
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_value: Optional[Tuple[torch.Tensor]] = None,
output_attentions: Optional[bool] = False,
use_cache: Optional[bool] = False,
) -> Tuple[
torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]
]:
"""
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
output_attentions (`bool`, *optional*):
Whether or not to return the attentions tensors of all attention layers. See `attentions` under
returned tensors for more detail.
use_cache (`bool`, *optional*):
If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
(see `past_key_values`).
past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
"""
residual = hidden_states
hidden_states = self.input_layernorm(hidden_states)
# Self Attention
hidden_states, self_attn_weights, present_key_value = self.self_attn(
hidden_states=hidden_states,
attention_mask=attention_mask,
position_ids=position_ids,
past_key_value=past_key_value,
output_attentions=output_attentions,
use_cache=use_cache,
)
# hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
hidden_states = residual + hidden_states
# Fully Connected
residual = hidden_states
hidden_states = self.post_attention_layernorm(hidden_states)
hidden_states = self.mlp(hidden_states)
# hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
hidden_states = residual + hidden_states
outputs = (hidden_states,)
if output_attentions:
outputs += (self_attn_weights,)
if use_cache:
outputs += (present_key_value,)
return outputs
class IdeficsGatedCrossAttentionLayer(nn.Module):
def __init__(self, layer_id, config: IdeficsConfig, weights):
super().__init__()
self.process_group = weights.process_group
self.hidden_size = config.hidden_size
prefix = f"model.gated_cross_attn_layers.{layer_id}"
self.cross_attn = IdeficsAttention(
config=config,
prefix=f"{prefix}.cross_attn",
weights=weights,
qk_layer_norms=True,
is_cross_attention=True,
)
self.mlp = IdeficsMLP(
config=config,
prefix=f"{prefix}.mlp",
weights=weights,
)
self.input_layernorm = IdeficsRMSNorm(
prefix=f"{prefix}.input_layernorm", weights=weights, eps=config.rms_norm_eps
)
self.post_attention_layernorm = IdeficsRMSNorm(
prefix=f"{prefix}.post_attention_layernorm",
weights=weights,
eps=config.rms_norm_eps,
)
        self.dropout = config.dropout
self.act_cross_attn = nn.Tanh()
self.act_dense = nn.Tanh()
self.alpha_cross_attn = nn.Parameter(
weights.get_tensor(f"{prefix}.alpha_cross_attn")
)
self.alpha_dense = nn.Parameter(weights.get_tensor(f"{prefix}.alpha_dense"))
if not (hasattr(self, "alpha_cross_attn") and hasattr(self, "alpha_dense")):
raise ValueError("Alpha parameters not initialized correctly!")
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
image_hidden_states: Optional[torch.Tensor] = None,
image_attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = False,
use_cache: Optional[bool] = False,
past_key_value: Optional[Tuple[torch.Tensor]] = None,
no_images: Optional[bool] = False,
) -> Tuple[
torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]
]:
"""
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
output_attentions (`bool`, *optional*):
Whether or not to return the attentions tensors of all attention layers. See `attentions` under
returned tensors for more detail.
use_cache (`bool`, *optional*):
If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
(see `past_key_values`).
past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
no_images (`bool`, *optional*, defaults to `False`): If `True` the vision part is ignored
"""
if image_hidden_states is None:
raise ValueError(
"`image_hidden_states` is required for Idefics cross attention module which are visual features to be"
" conditioned on."
)
if past_key_value is not None:
raise NotImplementedError(
"Past key value states are not implemented for Idefics cross attention module."
)
residual = hidden_states
hidden_states = self.input_layernorm(hidden_states)
        # Cross Attention
hidden_states, self_attn_weights, present_key_value = self.cross_attn(
hidden_states=hidden_states,
key_value_states=image_hidden_states,
attention_mask=image_attention_mask,
output_attentions=output_attentions,
)
        # hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
# when there are no images the model is used in pure language mode
gate = 0 if no_images else 1
hidden_states = (
residual + gate * self.act_cross_attn(self.alpha_cross_attn) * hidden_states
)
# Fully Connected
residual = hidden_states
hidden_states = self.post_attention_layernorm(hidden_states)
hidden_states = self.mlp(hidden_states)
        # hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
hidden_states = residual + self.act_dense(self.alpha_dense) * hidden_states
outputs = (hidden_states,)
if output_attentions:
outputs += (self_attn_weights,)
if use_cache:
outputs += (present_key_value,)
return outputs
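# Reading aid (added note, not part of the upstream source): the cross-attention
# and MLP outputs above are scaled by tanh(alpha_*) before being added back to
# the residual stream, Flamingo-style. During training these alphas start at 0,
# so tanh(alpha) == 0 and a freshly inserted cross-attention layer is a no-op;
# the gates open as the alphas are learned. At inference time the alphas are
# simply loaded from the checkpoint:
#
#   hidden = residual + tanh(alpha_cross_attn) * cross_attn_out
#   hidden = hidden + tanh(alpha_dense) * mlp_out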
LLAMA_START_DOCSTRING = r"""
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
    library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.
Parameters:
config ([`IdeficsConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
"""
# @add_start_docstrings(
# "The bare LLaMA Model outputting raw hidden-states without any specific head on top.",
# LLAMA_START_DOCSTRING,
# )
class IdeficsPreTrainedModel(PreTrainedModel):
config_class = IdeficsConfig
# base_model_prefix = "model"
# supports_gradient_checkpointing = True
# _no_split_modules = ["IdeficsDecoderLayer", "IdeficsGatedCrossAttentionLayer"]
# def _init_weights(self, module):
# # important: this ported version of Idefics isn't meant for training from scratch - only
# # inference and fine-tuning - so the proper init weights code has been removed - the m4 code
# # base should be used for training from scratch and it contains the correct code.
# std = self.config.initializer_range
# if isinstance(module, nn.Linear):
# module.weight.data.normal_(mean=0.0, std=std)
# if module.bias is not None:
# module.bias.data.zero_()
# elif isinstance(module, nn.Embedding):
# module.weight.data.normal_(mean=0.0, std=std)
# if module.padding_idx is not None:
# module.weight.data[module.padding_idx].zero_()
# def _set_gradient_checkpointing(self, module, value=False):
# if isinstance(module, IdeficsModel):
# module.gradient_checkpointing = value
# LLAMA_INPUTS_DOCSTRING = r"""
# Args:
# input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
# Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
# it.
# Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
# [`PreTrainedTokenizer.__call__`] for details.
# [What are input IDs?](../glossary#input-ids)
# attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
# Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
# - 1 for tokens that are **not masked**,
# - 0 for tokens that are **masked**.
# [What are attention masks?](../glossary#attention-mask)
# Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
# [`PreTrainedTokenizer.__call__`] for details.
# If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see
# `past_key_values`).
# If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
# and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
# information on the default strategy.
# - 1 indicates the head is **not masked**,
# - 0 indicates the head is **masked**.
# position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
# Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
# config.n_positions - 1]`. [What are position IDs?](../glossary#position-ids)
# past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
# Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
# `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
# `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
# Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
# blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
# If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
# don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
# `decoder_input_ids` of shape `(batch_size, sequence_length)`.
# inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
# Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
# is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
# model's internal embedding lookup matrix.
# use_cache (`bool`, *optional*):
# If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
# `past_key_values`).
# output_attentions (`bool`, *optional*):
# Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
# tensors for more detail.
# output_hidden_states (`bool`, *optional*):
# Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
# more detail.
# return_dict (`bool`, *optional*):
# Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
# """
# @add_start_docstrings(
# "The bare LLaMA Model outputting raw hidden-states without any specific head on top.",
# LLAMA_START_DOCSTRING,
# )
class IdeficsModel(IdeficsPreTrainedModel):
# """
# Transformer decoder consisting of `config.num_hidden_layers` layers. Each layer is a [`IdeficsDecoderLayer`]
# Args:
# config: IdeficsConfig
# """
def __init__(self, config: IdeficsConfig, weights):
super().__init__(config)
self.config = config
self.padding_idx = config.pad_token_id
self.vocab_size = config.vocab_size
self.embed_tokens = IdeficsDecoupledPartialTPEmbedding(
config=config,
weights=weights,
)
self.image_size = config.vision_config.image_size
self.vision_config = config.vision_config
self.vision_model = IdeficsVisionTransformer(
prefix="model.vision_model",
config=config.vision_config,
weights=weights,
)
# Perceiver Resampler
if config.use_resampler:
perceiver_config = config.perceiver_config
self.perceiver_resampler = IdeficsPerceiverResampler(
prefix="model.perceiver_resampler",
config=config,
embed_dim=config.vision_config.embed_dim,
depth=perceiver_config.resampler_depth,
n_heads=perceiver_config.resampler_n_heads,
head_dim=perceiver_config.resampler_head_dim,
n_latents=perceiver_config.resampler_n_latents,
weights=weights,
)
self.layers = nn.ModuleList(
[
IdeficsDecoderLayer(layer_id, config, weights)
for layer_id in range(config.num_hidden_layers)
]
)
self.cross_layer_interval = config.cross_layer_interval
num_cross_layers = config.num_hidden_layers // self.cross_layer_interval
self.gated_cross_attn_layers = nn.ModuleList(
[
IdeficsGatedCrossAttentionLayer(layer_id, config, weights)
for layer_id in range(num_cross_layers)
]
)
# self.gradient_checkpointing = False
self.norm = IdeficsRMSNorm(
prefix="model.norm", weights=weights, eps=config.rms_norm_eps
)
# self.gradient_checkpointing = False
# Initialize weights and apply final processing
# self.post_init()
# self.freeze_relevant_params(config)
# def freeze_relevant_params(self, config=None):
# if config is None:
# config = self.config
# if config.freeze_text_layers:
# self.freeze_text_layers(config.freeze_text_module_exceptions)
# if config.freeze_vision_layers:
# freeze_model(self.vision_model, module_exceptions=config.freeze_vision_module_exceptions)
# def freeze_text_layers(self, module_exceptions=[]):
# for module in [self.layers, self.norm]:
# freeze_model(module, module_exceptions=module_exceptions)
# def freeze_vision_layers(self, module_exceptions=[]):
# freeze_model(self.vision_model, module_exceptions=module_exceptions)
# def get_input_embeddings(self):
# return self.embed_tokens
# def set_input_embeddings(self, value):
# self.embed_tokens = value
# Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask
def _prepare_decoder_attention_mask(
self, attention_mask, input_shape, inputs_embeds, past_key_values_length
):
# create causal mask
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
combined_attention_mask = None
if input_shape[-1] > 1:
combined_attention_mask = _make_causal_mask(
input_shape,
inputs_embeds.dtype,
device=inputs_embeds.device,
past_key_values_length=past_key_values_length,
)
if attention_mask is not None:
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
expanded_attn_mask = _expand_mask(
attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]
).to(inputs_embeds.device)
combined_attention_mask = (
expanded_attn_mask
if combined_attention_mask is None
else expanded_attn_mask + combined_attention_mask
)
return combined_attention_mask
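    # Hedged shape sketch (illustrative): for input_shape = (bsz, 3) and no
    # cache, _make_causal_mask above yields a (bsz, 1, 3, 3) lower-triangular
    # additive mask (0 where attention is allowed, a large negative value
    # elsewhere), and _expand_mask lifts the (bsz, 3) padding mask to the same
    # shape; the two are summed, so a position is attended to only if both the
    # causal mask and the padding mask allow it.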
# @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING)
def forward(
self,
input_ids: torch.LongTensor = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[List[torch.FloatTensor]] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
pixel_values: Optional[torch.FloatTensor] = None,
image_hidden_states: Optional[torch.FloatTensor] = None,
image_embeddings: Optional[torch.FloatTensor] = None,
image_attention_mask: Optional[torch.Tensor] = None,
use_cache: Optional[bool] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, BaseModelOutputWithPastImage]:
device = input_ids.device if input_ids is not None else inputs_embeds.device
output_attentions = (
output_attentions
if output_attentions is not None
else self.config.output_attentions
)
output_hidden_states = (
output_hidden_states
if output_hidden_states is not None
else self.config.output_hidden_states
)
use_cache = use_cache if use_cache is not None else self.config.use_cache
return_dict = (
return_dict if return_dict is not None else self.config.use_return_dict
)
# retrieve input_ids and inputs_embeds
if input_ids is not None and inputs_embeds is not None:
raise ValueError(
"You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time"
)
elif input_ids is not None:
batch_size, seq_length = input_ids.shape
elif inputs_embeds is not None:
batch_size, seq_length, _ = inputs_embeds.shape
else:
raise ValueError(
"You have to specify either decoder_input_ids or decoder_inputs_embeds"
)
seq_length_with_past = seq_length
past_key_values_length = 0
if past_key_values is not None:
past_key_values_length = past_key_values[0][0].shape[2]
seq_length_with_past = seq_length_with_past + past_key_values_length
if attention_mask is not None and position_ids is None:
# create position_ids on the fly for batch generation
position_ids = attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(attention_mask == 0, 1)
elif position_ids is None:
device = input_ids.device if input_ids is not None else inputs_embeds.device
position_ids = torch.arange(
past_key_values_length,
seq_length + past_key_values_length,
dtype=torch.long,
device=device,
)
position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
else:
position_ids = position_ids.view(-1, seq_length).long()
no_images = False
if image_hidden_states is None:
if pixel_values is None and image_embeddings is None:
raise ValueError(
"Either pixel_values and image_embeddings have to be not-None."
)
elif pixel_values is not None and image_embeddings is not None:
raise ValueError(
"You cannot specify both pixel_values and image_embeddings at the same time"
)
elif pixel_values is not None:
no_images = len(torch.nonzero(pixel_values)) == 0
pixel_values = pixel_values.to(
dtype=self.dtype, device=device
) # fp16 compatibility
batch_size, num_images = pixel_values.shape[:2]
pixel_values = pixel_values.contiguous().view(
batch_size * num_images, *pixel_values.shape[2:]
)
# Get sequence from the vision encoder
image_hidden_states = self.vision_model(
pixel_values=pixel_values
).last_hidden_state
elif image_embeddings is not None:
(
batch_size,
num_images,
image_seq_len,
image_hidden_size,
) = image_embeddings.size()
image_hidden_states = image_embeddings.to(
dtype=self.dtype, device=input_ids.device
)
image_hidden_states = image_hidden_states.view(
batch_size * num_images, image_seq_len, image_hidden_size
)
if self.config.use_resampler:
image_hidden_states = self.perceiver_resampler(image_hidden_states)
            image_seq_len = image_hidden_states.size(1)
            image_hidden_size = image_hidden_states.size(2)
image_hidden_states = image_hidden_states.view(
batch_size, num_images * image_seq_len, image_hidden_size
)
else:
no_images = False
num_images = pixel_values.shape[1]
image_seq_len = image_hidden_states.shape[1] // num_images
# # Hack to use the model in full language modeling mode
# image_attention_mask = torch.zeros(batch_size, seq_length, 1, dtype=torch.long, device=image_hidden_states.device)
# Make image_attention_mask compatible with hidden states
text_seq_len = image_attention_mask.size(1)
image_attention_mask = image_attention_mask.unsqueeze(-1)
image_attention_mask = image_attention_mask.repeat(1, 1, 1, image_seq_len)
image_attention_mask = image_attention_mask.view(
batch_size, text_seq_len, num_images * image_seq_len
)
image_batch_size, image_sequence_length, _ = image_hidden_states.size()
image_hidden_shape = (image_batch_size, image_sequence_length)
if image_attention_mask is None:
image_attention_mask = torch.ones(image_hidden_shape, device=device)
image_attention_mask = self.invert_attention_mask(image_attention_mask)
# if list(image_attention_mask.shape) != [4, 1, 1024, 64]:
# raise ValueError(f"Image hidden_states {image_hidden_states.shape} - mask {image_attention_mask.shape} {num_images} {image_seq_len} {text_seq_len}")
# if image_hidden_states is not None:
# else:
# image_attention_mask = None
if inputs_embeds is None:
inputs_embeds = self.embed_tokens(input_ids)
# embed positions
if attention_mask is None:
attention_mask = torch.ones(
(batch_size, seq_length_with_past),
dtype=torch.bool,
device=inputs_embeds.device,
)
attention_mask = self._prepare_decoder_attention_mask(
attention_mask,
(batch_size, seq_length),
inputs_embeds,
past_key_values_length,
)
hidden_states = inputs_embeds
# if self.gradient_checkpointing and self.training:
# if use_cache:
# logger.warning_once(
# "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
# )
# use_cache = False
# decoder layers
all_hidden_states = () if output_hidden_states else None
all_self_attns = () if output_attentions else None
next_decoder_cache = () if use_cache else None
for idx, decoder_layer in enumerate(self.layers):
if output_hidden_states:
all_hidden_states += (hidden_states,)
past_key_value = (
past_key_values[idx] if past_key_values is not None else None
)
def vblock(
main_block,
hidden_states,
attention_mask,
position_ids,
past_key_value,
image_hidden_states,
image_attention_mask,
output_attentions,
use_cache,
no_images,
layer_idx,
cross_layer_interval,
gated_cross_attn_layers,
):
# TODO(ls): Add cross attention values to respective lists
if layer_idx % cross_layer_interval == 0:
xblock = gated_cross_attn_layers[layer_idx // cross_layer_interval]
outputs = xblock(
hidden_states,
attention_mask=attention_mask,
image_hidden_states=image_hidden_states,
image_attention_mask=image_attention_mask,
output_attentions=output_attentions,
use_cache=use_cache,
past_key_value=None, # not implemented
no_images=no_images,
)
hidden_states = outputs[0]
layer_outputs = main_block(
hidden_states,
attention_mask=attention_mask,
position_ids=position_ids,
past_key_value=past_key_value,
output_attentions=output_attentions,
use_cache=use_cache,
)
return layer_outputs
# if self.gradient_checkpointing and self.training:
# past_key_value = None
# if use_cache:
# logger.warning_once(
# "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
# )
# use_cache = False
# layer_outputs = torch.utils.checkpoint.checkpoint(
# vblock,
# decoder_layer,
# hidden_states,
# attention_mask,
# position_ids,
# past_key_value,
# image_hidden_states,
# image_attention_mask,
# output_attentions,
# use_cache,
# no_images,
# idx,
# self.cross_layer_interval,
# self.gated_cross_attn_layers,
# )
# else:
layer_outputs = vblock(
decoder_layer,
hidden_states,
attention_mask=attention_mask,
position_ids=position_ids,
past_key_value=past_key_value,
image_hidden_states=image_hidden_states,
image_attention_mask=image_attention_mask,
output_attentions=output_attentions,
use_cache=use_cache,
no_images=no_images,
layer_idx=idx,
cross_layer_interval=self.cross_layer_interval,
gated_cross_attn_layers=self.gated_cross_attn_layers,
)
hidden_states = layer_outputs[0]
if use_cache:
next_decoder_cache += (layer_outputs[2 if output_attentions else 1],)
if output_attentions:
all_self_attns += (layer_outputs[1],)
hidden_states = self.norm(hidden_states)
# add hidden states from the last decoder layer
if output_hidden_states:
all_hidden_states += (hidden_states,)
next_cache = next_decoder_cache if use_cache else None
if not return_dict:
return tuple(
v
for v in [hidden_states, next_cache, all_hidden_states, all_self_attns]
if v is not None
)
return BaseModelOutputWithPastImage(
last_hidden_state=hidden_states,
past_key_values=next_cache,
hidden_states=all_hidden_states,
attentions=all_self_attns,
image_hidden_states=image_hidden_states,
)
class IdeficsForVisionText2Text(IdeficsPreTrainedModel):
def __init__(
self,
config,
weights,
):
super().__init__(config)
self.model = IdeficsModel(
config=config,
weights=weights,
)
self.lm_head = IdeficsDecoupledTensorParallelLinear(
config=config,
weights=weights,
)
def forward(
self,
input_ids: torch.LongTensor = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[List[torch.FloatTensor]] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
pixel_values: Optional[torch.FloatTensor] = None,
image_embeddings: Optional[torch.FloatTensor] = None,
image_hidden_states: Optional[torch.FloatTensor] = None,
image_attention_mask: Optional[torch.Tensor] = None,
labels: Optional[torch.LongTensor] = None,
use_cache: Optional[bool] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple, CausalLMOutputWithPastImage]:
r"""
Args:
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
(masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
Returns:
Example:
```python
>>> from transformers import AutoTokenizer, LlamaForCausalLM
>>> model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
>>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
>>> prompt = "Hey, are you consciours? Can you talk to me?"
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> # Generate
>>> generate_ids = model.generate(inputs.input_ids, max_length=30)
>>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
"Hey, are you consciours? Can you talk to me?\nI'm not consciours, but I can talk to you."
```"""
output_attentions = (
output_attentions
if output_attentions is not None
else self.config.output_attentions
)
output_hidden_states = (
output_hidden_states
if output_hidden_states is not None
else self.config.output_hidden_states
)
return_dict = (
return_dict if return_dict is not None else self.config.use_return_dict
)
# decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
outputs = self.model(
input_ids=input_ids,
attention_mask=attention_mask,
position_ids=position_ids,
past_key_values=past_key_values,
inputs_embeds=inputs_embeds,
pixel_values=pixel_values,
image_embeddings=image_embeddings,
image_hidden_states=image_hidden_states,
image_attention_mask=image_attention_mask,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
hidden_states = outputs[0]
logits, speculative_logits = self.lm_head(hidden_states)
loss = None
return (
CausalLMOutputWithPastImage(
loss=loss,
logits=logits,
past_key_values=outputs.past_key_values,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
image_hidden_states=outputs.image_hidden_states,
),
speculative_logits,
)
def prepare_inputs_for_generation(self, input_ids, past=None, **kwargs):
inputs = prepare_inputs_for_generation(input_ids, past=past, **kwargs)
unwanted_kwargs = ["token_type_ids"]
for kwarg in unwanted_kwargs:
inputs.pop(kwarg, None)
return inputs
@staticmethod
def _expand_inputs_for_generation(
*args,
**model_kwargs,
):
return expand_inputs_for_generation(*args, **model_kwargs)
@staticmethod
def _update_model_kwargs_for_generation(
outputs, model_kwargs, is_encoder_decoder=False
):
return update_model_kwargs_for_generation(
outputs, model_kwargs, is_encoder_decoder=is_encoder_decoder
)
@staticmethod
def _reorder_cache(past, beam_idx):
reordered_past = ()
for layer_past in past:
reordered_past += (
tuple(
past_state.index_select(0, beam_idx) for past_state in layer_past
),
)
return reordered_past
| text-generation-inference/server/text_generation_server/models/custom_modeling/idefics_modeling.py/0 | {
"file_path": "text-generation-inference/server/text_generation_server/models/custom_modeling/idefics_modeling.py",
"repo_id": "text-generation-inference",
"token_count": 28598
} |
import re
import torch
import torch.distributed
from transformers import (
PreTrainedTokenizerBase,
)
from text_generation_server.models.causal_lm import CausalLMBatch
from text_generation_server.pb import generate_pb2
from text_generation_server.utils import (
NextTokenChooser,
StoppingCriteria,
)
from text_generation_server.utils.chunks import concat_text_chunks
# CREDIT: Papers with code => https://github.com/paperswithcode/galai/blob/main/galai/utils.py
# we split individual characters inside special tokens like [START_DNA]
CUSTOM_SEQ_RE = re.compile(r"(\[START_(DNA|SMILES|I_SMILES|AMINO)])(.*?)(\[END_\2])")
# A token added to implement custom sequence tokenization. This token is added at the
# corpus-cleaning step and removed in pretokenization. The digits are added to increase the chance
# that they do not occur in the corpus. The digits are escaped so that the token does not appear
# literally in the source code in case we ever include it in the training data.
SPLIT_MARKER = f"SPL{1}T-TH{1}S-Pl3A5E"
def _insert_split_marker(m: re.Match):
"""
Applies split marker based on a regex match of special tokens such as
[START_DNA].
Parameters
----------
    m : re.Match
        Regex match for a custom sequence such as [START_DNA]...[END_DNA]
Returns
----------
str - the text with the split token added
"""
start_token, _, sequence, end_token = m.groups()
sequence = re.sub(r"(.)", rf"{SPLIT_MARKER}\1", sequence, flags=re.DOTALL)
return f"{start_token}{sequence}{SPLIT_MARKER}{end_token}"
def escape_custom_split_sequence(text):
"""
    Applies custom splitting to the text for GALACTICA's tokenization
Parameters
----------
text : str
Input text to split
Returns
----------
str - the text with the split token added
"""
return CUSTOM_SEQ_RE.sub(_insert_split_marker, text)
# END CREDIT
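# Hedged usage sketch (illustrative, not part of the original file): escaping a
# prompt that contains a custom sequence. SPLIT_MARKER is inserted before every
# character of the inner sequence (and once before the end token) so that the
# tokenizer later splits the sequence into single characters:
#
#   >>> escape_custom_split_sequence("x [START_DNA]ACGT[END_DNA] y")
#   'x [START_DNA]SPL1T-TH1S-Pl3A5EASPL1T-TH1S-Pl3A5ECSPL1T-TH1S-Pl3A5EGSPL1T-TH1S-Pl3A5ETSPL1T-TH1S-Pl3A5E[END_DNA] y'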
class GalacticaCausalLMBatch(CausalLMBatch):
@classmethod
def from_pb(
cls,
pb: generate_pb2.Batch,
tokenizer: PreTrainedTokenizerBase,
dtype: torch.dtype,
device: torch.device,
) -> "GalacticaCausalLMBatch":
inputs = []
next_token_choosers = []
stopping_criterias = []
prefix_offsets = []
top_n_tokens = []
read_offsets = []
requests_idx_mapping = {}
# Parse batch
max_truncation = 0
padding_right_offset = 0
max_decode_tokens = 0
for i, r in enumerate(pb.requests):
requests_idx_mapping[r.id] = i
# Add escape_custom_split_sequence to the CausalLMBatch logic
inputs.append(
escape_custom_split_sequence(concat_text_chunks(r.input_chunks.chunks))
)
next_token_choosers.append(
NextTokenChooser.from_pb(r.parameters, device, tokenizer)
)
stopping_criteria = StoppingCriteria.from_pb(
r.stopping_parameters, tokenizer
)
stopping_criterias.append(stopping_criteria)
top_n_tokens.append(r.top_n_tokens)
max_truncation = max(max_truncation, r.truncate)
max_decode_tokens += stopping_criteria.max_new_tokens
padding_right_offset = max(
padding_right_offset, stopping_criteria.max_new_tokens
)
tokenized_inputs = tokenizer(
inputs,
return_tensors="pt",
padding=True,
return_token_type_ids=False,
truncation=True,
max_length=max_truncation,
).to(device)
        input_len = tokenized_inputs["input_ids"].shape[1]
        for _ in pb.requests:
            prefix_offsets.append(0)
            read_offsets.append(input_len)
input_lengths = tokenized_inputs["attention_mask"].sum(1)
max_input_length = input_lengths.max()
input_ids = tokenized_inputs["input_ids"]
# Allocate maximum attention_mask
attention_mask = input_ids.new_zeros(
(pb.size, max_input_length + padding_right_offset)
)
# Copy tokenizer attention_mask into fully allocated attention_mask
attention_mask[:, :max_input_length] = tokenized_inputs["attention_mask"]
position_ids = tokenized_inputs["attention_mask"].long().cumsum(-1) - 1
position_ids.masked_fill_(tokenized_inputs["attention_mask"] == 0, 1)
all_input_ids = tokenized_inputs["input_ids"].T.split(1, dim=1)
top_n_tokens_tensor = torch.tensor(
top_n_tokens, device=device, dtype=torch.int64
)
max_tokens = len(inputs) * max_input_length + max_decode_tokens
return cls(
batch_id=pb.id,
requests=pb.requests,
requests_idx_mapping=requests_idx_mapping,
input_ids=input_ids,
attention_mask=attention_mask,
position_ids=position_ids,
past_key_values=None,
all_input_ids=list(all_input_ids),
input_lengths=input_lengths.tolist(),
prefix_offsets=prefix_offsets,
read_offsets=read_offsets,
next_token_choosers=next_token_choosers,
stopping_criterias=stopping_criterias,
top_n_tokens=top_n_tokens,
top_n_tokens_tensor=top_n_tokens_tensor,
max_input_length=max_input_length.item(),
padding_right_offset=padding_right_offset,
max_tokens=max_tokens,
)
| text-generation-inference/server/text_generation_server/models/galactica.py/0 | {
"file_path": "text-generation-inference/server/text_generation_server/models/galactica.py",
"repo_id": "text-generation-inference",
"token_count": 2499
} |
# Origin: https://github.com/predibase/lorax
# Path: lorax/server/lorax_server/utils/adapter.py
# License: Apache License Version 2.0, January 2004
import warnings
import re
from dataclasses import dataclass
from functools import lru_cache
from typing import TYPE_CHECKING, Set, Tuple, Optional, List
from safetensors.torch import load_file
from transformers import AutoConfig, AutoTokenizer, PreTrainedTokenizer
from text_generation_server.utils.merges.strategies import merge_adapters
from text_generation_server.utils import hub
from text_generation_server.adapters.lora import LoraConfig
if TYPE_CHECKING:
from text_generation_server.adapters.config import AdapterConfig, ModuleMap
BASE_MODEL_ADAPTER_ID = "__base_model__"
@dataclass
class AdapterInfo:
id: str
path: Optional[str]
revision: Optional[str] = None
@dataclass
class AdapterParameters:
adapter_info: Tuple[AdapterInfo]
weights: Tuple[float]
merge_strategy: NotImplemented
density: float
majority_sign_method: NotImplemented
@dataclass
class AdapterSource:
adapter_id: str
model_id: str
revision: str
def parse_lora_adapters(lora_adapters: Optional[str]) -> List[AdapterInfo]:
if not lora_adapters:
return []
adapter_list = []
for adapter in lora_adapters.split(","):
adapter = adapter.strip()
if adapter.count("=") > 1 or adapter.count("@") > 1:
raise ValueError(f"Invalid LoRA adapter format: {adapter}")
match = re.match(r"^([^=@]+)(?:=([^@]+))?(?:@(.+))?$", adapter)
if match:
adapter_id, path, revision = match.groups()
adapter_list.append(
AdapterInfo(id=adapter_id, path=path, revision=revision)
)
else:
raise ValueError(f"Invalid LoRA adapter format: {adapter}")
return adapter_list
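# Hedged usage sketch (illustrative, not part of the original file): the
# accepted formats are "id", "id=path", "id@revision" and "id=path@revision":
#
#   >>> parse_lora_adapters("org/adapter-a,org/adapter-b=/data/b@main")
#   [AdapterInfo(id='org/adapter-a', path=None, revision=None),
#    AdapterInfo(id='org/adapter-b', path='/data/b', revision='main')]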
def load_and_merge_adapters(
model_id: str,
adapter_parameters: AdapterParameters,
adapter_index: int,
weight_names: Tuple[str],
trust_remote_code: bool = False,
) -> Tuple["ModuleMap", "AdapterConfig", Set[str], PreTrainedTokenizer]:
if len(adapter_parameters.adapter_info) == 1:
adapter = next(iter(adapter_parameters.adapter_info))
return load_module_map(
model_id,
adapter.revision,
adapter.id,
adapter.path,
weight_names,
trust_remote_code,
)
adapter_params = AdapterParametersContainer(adapter_parameters, adapter_index)
return _load_and_merge(
model_id,
adapter_params,
weight_names,
trust_remote_code,
)
@dataclass
class AdapterParametersContainer:
adapter_parameters: AdapterParameters
adapter_index: int
def __hash__(self) -> int:
return self.adapter_index
@lru_cache(maxsize=32)
def _load_and_merge(
model_id: str,
adapter_params: AdapterParametersContainer,
weight_names: Tuple[str],
trust_remote_code: bool = False,
) -> Tuple["ModuleMap", "AdapterConfig", Set[str], PreTrainedTokenizer]:
params = adapter_params.adapter_parameters
adapters_to_merge = []
merged_weight_names = set()
tokenizer = None
for adapter in params.adapter_info:
if adapter.id == BASE_MODEL_ADAPTER_ID:
raise ValueError("Base model adapter cannot be merged.")
(
module_map,
adapter_config,
adapter_weight_names,
adapter_tokenizer,
) = load_module_map(
model_id,
adapter.revision,
adapter.id,
adapter.path,
weight_names,
trust_remote_code,
)
adapters_to_merge.append((module_map, adapter_config))
merged_weight_names = merged_weight_names.union(adapter_weight_names)
if tokenizer is None:
tokenizer = adapter_tokenizer
if len(adapters_to_merge) == 0:
raise ValueError("No adapters to merge.")
module_map, adapter_config = merge_adapters(adapters_to_merge, params)
return module_map, adapter_config, merged_weight_names, tokenizer
def check_architectures(
model_id: str,
adapter_id: str,
adapter_config: "AdapterConfig",
trust_remote_code: bool = False,
):
try:
if not adapter_config.base_model_name_or_path:
# Avoid execution latency caused by the network connection retrying for AutoConfig.from_pretrained(None)
return
expected_config = AutoConfig.from_pretrained(
model_id, trust_remote_code=trust_remote_code
)
model_config = AutoConfig.from_pretrained(
adapter_config.base_model_name_or_path, trust_remote_code=trust_remote_code
)
except Exception as e:
warnings.warn(
f"Unable to check architecture compatibility for adapter '{adapter_id}' "
f"against model '{model_id}'. Assuming they are compatible. Error: {e}"
)
return
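    # Note (added for clarity): this point is only reached when the adapter's
    # base_model_name_or_path differs from model_id (see load_module_map), so
    # matching architectures still mean "trained on a different base model",
    # which warrants a warning; mismatching architectures are a hard error.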
if model_config.architectures == expected_config.architectures:
warnings.warn(
f"Adapter '{adapter_id}' was not trained on base model '{model_id}'. "
f"If you encounter issues, use --model-id '{adapter_config.base_model_name_or_path}' instead."
)
else:
        # TODO(travis): revisit this when we support classification heads which will not use CausalLM
raise ValueError(
f"Adapter '{adapter_id}' is not compatible with model '{model_id}'. "
f"Architectures differ: {model_config.architectures} != {expected_config.architectures}. "
f"Use --model-id '{adapter_config.base_model_name_or_path}' instead."
)
@lru_cache(maxsize=128)
def load_module_map(
model_id: str,
revision: str,
adapter_id: str,
adapter_path: Optional[str],
weight_names: Tuple[str],
trust_remote_code: bool = False,
) -> Tuple["ModuleMap", "AdapterConfig", Set[str], PreTrainedTokenizer]:
adapter_config = LoraConfig.load(adapter_path or adapter_id, None)
if not adapter_path and adapter_config.base_model_name_or_path != model_id:
check_architectures(model_id, adapter_id, adapter_config, trust_remote_code)
adapter_filenames = (
hub._weight_files_from_dir(adapter_path, extension=".safetensors")
if adapter_path
else hub._cached_weight_files(
adapter_id, revision=revision, extension=".safetensors"
)
)
# throw an error if no adapter weights are found
if not adapter_filenames:
raise FileNotFoundError(
f"No adapter weights found for adapter '{adapter_id}' and revision '{revision}'."
)
try:
adapter_tokenizer = AutoTokenizer.from_pretrained(
adapter_config.config_path,
trust_remote_code=trust_remote_code,
)
except Exception:
# Adapter does not have a tokenizer, so fallback to base model tokenizer
adapter_tokenizer = None
# load adapter weights from all shards (should have relatively small memory footprint)
adapter_weights = {}
for filename in adapter_filenames:
adapter_weights.update(load_file(filename))
# map the model weights to the relevant adapter weights (LoRA A and B matrices)
module_map, adapter_weight_names = adapter_config.map_weights_for_model(
adapter_weights, weight_names
)
return module_map, adapter_config, adapter_weight_names, adapter_tokenizer
def get_attn_weights(i, layer):
qkv = layer.self_attn.query_key_value
weights = {}
for k in ["q", "k", "v"]:
key = (i, f"{k}_proj")
value = (f"model.layers.{i}.self_attn.{k}_proj", qkv)
weights[key] = value
# also add the qkv_proj weight for the adapter
weights[(i, "qkv_proj")] = (
f"model.layers.{i}.self_attn.qkv_proj",
qkv,
)
weights[(i, "o_proj")] = (
f"model.layers.{i}.self_attn.o_proj",
layer.self_attn.o_proj,
)
return weights
def get_mlp_weights(i, layer):
weights = {}
if hasattr(layer, "mlp"):
mlp = layer.mlp
if hasattr(mlp, "gate_up_proj"):
# handle combined gate_up_proj (e.g., for some LLaMA variants)
weights.update(
{
(i, "gate_proj"): (
f"model.layers.{i}.mlp.gate_proj",
mlp.gate_up_proj,
),
(i, "up_proj"): (f"model.layers.{i}.mlp.up_proj", mlp.gate_up_proj),
}
)
else:
# handle separate gate_proj, up_proj, and down_proj (e.g., for Gemma)
if hasattr(mlp, "gate_proj"):
weights[(i, "gate_proj")] = (
f"model.layers.{i}.mlp.gate_proj",
mlp.gate_proj,
)
if hasattr(mlp, "up_proj"):
weights[(i, "up_proj")] = (f"model.layers.{i}.mlp.up_proj", mlp.up_proj)
if hasattr(mlp, "c_fc"):
weights[(i, "c_fc")] = (f"model.layers.{i}.mlp.c_fc", mlp.c_fc)
if hasattr(mlp, "c_proj"):
weights[(i, "c_proj")] = (f"model.layers.{i}.mlp.c_proj", mlp.c_proj)
if hasattr(mlp, "down_proj"):
weights[(i, "down_proj")] = (
f"model.layers.{i}.mlp.down_proj",
mlp.down_proj,
)
return weights
# build_layer_weight_lookup creates a mapping of model layers to their corresponding
# weight tensors and paths. It builds a dictionary that maps layer identifiers to tuples
# containing the weight tensor path and the actual layer object. This mapping is needed
# for the lora adapter to know which weights to update when applying the adapter.
def build_layer_weight_lookup(model):
if hasattr(model, "language_model"):
m = model.language_model.model
elif hasattr(model, "text_model"):
m = model.text_model.model
else:
m = model.model
layer_weights = {}
for i, layer in enumerate(m.layers):
attn_weights = get_attn_weights(i, layer)
mlp_weights = get_mlp_weights(i, layer)
layer_weights.update(attn_weights)
layer_weights.update(mlp_weights)
lm_head = None
if hasattr(m, "lm_head"):
lm_head = m.lm_head
elif hasattr(model, "lm_head"):
lm_head = model.lm_head
if lm_head:
layer_weights[(0, "lm_head")] = ("lm_head", lm_head)
return layer_weights
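# Hedged illustration (not part of the original file): for a LLaMA-style model
# the lookup maps (layer_index, weight_name) keys to (weight path, module)
# tuples, e.g.
#   (0, "q_proj")    -> ("model.layers.0.self_attn.q_proj", <fused qkv module>)
#   (0, "down_proj") -> ("model.layers.0.mlp.down_proj", <down_proj module>)
#   (0, "lm_head")   -> ("lm_head", <lm_head module>)
# Note that the q/k/v entries all point at the same fused query_key_value
# module, with the path identifying which checkpoint weight they correspond to.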
| text-generation-inference/server/text_generation_server/utils/adapter.py/0 | {
"file_path": "text-generation-inference/server/text_generation_server/utils/adapter.py",
"repo_id": "text-generation-inference",
"token_count": 4641
} |
import re
from typing import List, Optional, Tuple, Set, Union
import torch
from text_generation_server.pb import generate_pb2
from text_generation_server.pb.generate_pb2 import FinishReason, GrammarType
from text_generation_server.utils.logits_process import (
FrequencyPenaltyLogitsProcessor,
GrammarLogitProcessor,
HeterogeneousProcessorWrapper,
HeterogeneousRepetitionPenaltyLogitsProcessor,
HeterogeneousFrequencyPenaltyLogitsProcessor,
HeterogeneousTemperatureLogitsWarper,
HeterogeneousTopKLogitsWarper,
HeterogeneousTopPLogitsWarper,
HeterogeneousTypicalLogitsWarper,
HeterogeneousGrammarLogitProcessor,
static_warper,
)
from text_generation_server.utils.watermark import WatermarkLogitsProcessor
from transformers import PreTrainedTokenizerBase, RepetitionPenaltyLogitsProcessor
class NextTokenChooser:
def __init__(
self,
watermark: bool = False,
temperature: float = 1.0,
repetition_penalty: float = 1.0,
frequency_penalty: float = 0.0,
top_k: Optional[int] = None,
top_p: Optional[float] = None,
typical_p: Optional[float] = None,
do_sample: bool = False,
seed: int = 0,
device: str = "cpu",
tokenizer: Optional[PreTrainedTokenizerBase] = None,
grammar: str = "",
grammar_type: GrammarType = GrammarType.GRAMMAR_TYPE_NONE,
fsm_grammar_state: int = 0,
):
self.watermark_processor = (
WatermarkLogitsProcessor(device=device) if watermark else None
)
self.repetition_processor = (
RepetitionPenaltyLogitsProcessor(penalty=repetition_penalty)
if repetition_penalty and repetition_penalty != 1.0
else None
)
self.frequency_processor = (
FrequencyPenaltyLogitsProcessor(penalty=frequency_penalty)
if frequency_penalty and frequency_penalty != 0.0
else None
)
self.grammar_processor = (
GrammarLogitProcessor(tokenizer, device, grammar, grammar_type)
if grammar != ""
else None
)
self.tokenizer = tokenizer
has_warpers = (
(temperature is not None and temperature != 1.0)
or (top_k is not None and top_k != 0)
or (top_p is not None and top_p < 1.0)
or (typical_p is not None and typical_p < 1.0)
)
if has_warpers:
self.static_warper = static_warper(
temperature=temperature, top_k=top_k, top_p=top_p, typical_p=typical_p
)
else:
self.static_warper = None
sampling = do_sample or has_warpers
self.choice = Sampling(seed, device) if sampling else Greedy()
self.fsm_grammar_state = fsm_grammar_state
self.grammar = grammar
def __call__(self, input_ids, scores):
if self.watermark_processor is not None:
scores = self.watermark_processor(input_ids, scores)
if self.repetition_processor is not None:
scores = self.repetition_processor(input_ids, scores)
if self.frequency_processor is not None:
scores = self.frequency_processor(input_ids, scores)
if self.grammar_processor is not None:
scores = self.grammar_processor(scores, self.fsm_grammar_state)
if self.static_warper is None:
next_logprob = torch.log_softmax(scores, -1)
else:
scores, next_logprob = self.static_warper(scores)
next_id = self.choice(scores[-1]).view(1, 1)
return next_id, next_logprob
def advance_grammar(self, next_id: int):
if self.grammar_processor is not None:
self.fsm_grammar_state = self.grammar_processor.advance(
next_id, self.fsm_grammar_state
)
return self
@classmethod
def from_pb(
cls,
pb: generate_pb2.NextTokenChooserParameters,
device: torch.device,
tokenizer: PreTrainedTokenizerBase,
) -> "NextTokenChooser":
return NextTokenChooser(
watermark=pb.watermark,
temperature=pb.temperature,
repetition_penalty=pb.repetition_penalty,
frequency_penalty=pb.frequency_penalty,
top_k=pb.top_k,
top_p=pb.top_p,
typical_p=pb.typical_p,
do_sample=pb.do_sample,
seed=pb.seed,
device=device,
tokenizer=tokenizer,
grammar=pb.grammar,
grammar_type=pb.grammar_type,
)
class StopSequenceCriteria:
def __init__(self, stop_sequence: str):
stop_sequence = re.escape(stop_sequence)
self.regex = re.compile(f"{stop_sequence}$")
def __call__(self, output: str) -> bool:
if self.regex.findall(output):
return True
return False
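# Hedged usage sketch (illustrative): the criteria only fires when the stop
# sequence *ends* the accumulated output, because of the trailing "$" anchor:
#
#   >>> StopSequenceCriteria("\n\nUser:")("... answer\n\nUser:")
#   True
#   >>> StopSequenceCriteria("\n\nUser:")("\n\nUser: hello")
#   False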
class StoppingCriteria:
def __init__(
self,
eos_token_ids: Optional[Union[Set[int], int]],
stop_sequence_criterias: List[StopSequenceCriteria],
max_new_tokens: int = 20,
ignore_eos_token: bool = False,
):
if eos_token_ids is None:
eos_token_ids = set()
elif isinstance(eos_token_ids, int):
eos_token_ids = set([eos_token_ids])
elif isinstance(eos_token_ids, set):
eos_token_ids = eos_token_ids
else:
raise RuntimeError(
f"eos_token_ids is of invalid type {type(eos_token_ids)}, expected int, None or set[int]"
)
self.eos_token_ids = eos_token_ids
self.stop_sequence_criterias = stop_sequence_criterias
self.max_new_tokens = max_new_tokens
self.current_tokens = 0
self.current_output = ""
self.ignore_eos_token = ignore_eos_token
    def __call__(self, last_token: int, last_output: str) -> Tuple[bool, Optional[FinishReason]]:
self.current_tokens += 1
if self.current_tokens >= self.max_new_tokens:
return True, FinishReason.FINISH_REASON_LENGTH
if isinstance(last_token, torch.Tensor):
last_token = last_token.item()
if not self.ignore_eos_token and last_token in self.eos_token_ids:
return True, FinishReason.FINISH_REASON_EOS_TOKEN
if self.stop_sequence_criterias:
self.current_output += last_output
# There is no need to keep an output that is too long
if len(self.current_output) > 300:
# Slice to -200 to avoid doing it all the time
self.current_output = self.current_output[-200:]
for stop_sequence_criteria in self.stop_sequence_criterias:
if stop_sequence_criteria(self.current_output):
return True, FinishReason.FINISH_REASON_STOP_SEQUENCE
return False, None
@classmethod
def from_pb(
cls,
pb: generate_pb2.StoppingCriteriaParameters,
tokenizer: PreTrainedTokenizerBase,
) -> "StoppingCriteria":
stop_sequence_criterias = [
StopSequenceCriteria(sequence) for sequence in pb.stop_sequences
]
# TODO Hack because eos_token_id cannot be what we want.
eos_token_id = getattr(tokenizer, "_eos_token_ids", tokenizer.eos_token_id)
return StoppingCriteria(
eos_token_id,
stop_sequence_criterias,
pb.max_new_tokens,
pb.ignore_eos_token,
)
def create_n_gram_speculation(
input_ids: torch.Tensor,
next_ids: torch.Tensor,
accepted_ids: torch.Tensor,
speculate: int,
verbose: bool,
):
# Very trivial approach, find first match in the string.
# This is much less refined than actual n-gram but seems to work
# relatively OK in grounded mode and is by far much faster with
# much less worst case complexity as everything happens on device.
B = accepted_ids.shape[0]
device = input_ids.device
seeds = next_ids[accepted_ids.cumsum(dim=-1) - 1]
indices = (input_ids == seeds.unsqueeze(-1)).max(dim=1).indices + 1
all_indices = indices.unsqueeze(-1).expand(B, speculate) + torch.arange(
speculate, device=device
)
all_indices = torch.clamp(all_indices, max=input_ids.shape[1] - 1)
speculative_ids = input_ids.gather(dim=-1, index=all_indices)
return speculative_ids
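# Hedged worked example (illustrative): for a single sequence with
#   input_ids = [[5, 9, 7, 5, 2]], one accepted token next_ids = [5],
#   speculate = 2
# the first occurrence of 5 in input_ids is at index 0, so indices = 1 and the
# gather returns input_ids[:, 1:3] = [[9, 7]] - i.e. "the last time this token
# appeared, these tokens followed it" becomes the speculation window.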
class HeterogeneousNextTokenChooser:
def __init__(
self,
dtype: torch.dtype,
device: torch.device,
watermark: List[bool],
temperature: List[float],
repetition_penalty: List[float],
frequency_penalty: List[float],
top_k: List[int],
top_p: List[float],
typical_p: List[float],
do_sample: List[bool],
seeds: List[int],
tokenizer: PreTrainedTokenizerBase,
grammars: List[str],
grammar_types: List[int],
fsm_grammar_states=List[int],
):
warpers = []
self.watermark_processor = (
HeterogeneousProcessorWrapper(
{
i: WatermarkLogitsProcessor(device=device)
for i, do_watermark in enumerate(watermark)
if do_watermark
}
)
if any(watermark)
else None
)
self.repetition_processor = (
HeterogeneousRepetitionPenaltyLogitsProcessor(
repetition_penalty, dtype, device
)
if any([x != 1.0 for x in repetition_penalty])
else None
)
self.frequency_processor = (
HeterogeneousFrequencyPenaltyLogitsProcessor(
frequency_penalty, dtype, device
)
if any([x != 0.0 for x in frequency_penalty])
else None
)
self.grammar_processor = (
HeterogeneousGrammarLogitProcessor(
tokenizer, device, grammars, grammar_types
)
if any([grammar != "" for grammar in grammars])
else None
)
if any(x != 1.0 for x in temperature):
do_sample = [
sample or x != 1.0 for x, sample in zip(temperature, do_sample)
]
warpers.append(
HeterogeneousTemperatureLogitsWarper(temperature, dtype, device)
)
if any(x != 0 for x in top_k):
do_sample = [sample or x != 0 for x, sample in zip(top_k, do_sample)]
warpers.append(HeterogeneousTopKLogitsWarper(top_k, device))
if any(x < 1.0 for x in top_p):
do_sample = [sample or x < 1.0 for x, sample in zip(top_p, do_sample)]
warpers.append(HeterogeneousTopPLogitsWarper(top_p, dtype, device))
if any(x < 1.0 for x in typical_p):
do_sample = [sample or x < 1.0 for x, sample in zip(typical_p, do_sample)]
warpers.append(HeterogeneousTypicalLogitsWarper(typical_p, dtype, device))
self.warpers = warpers
if any(do_sample):
self.choice = HeterogeneousSampling(do_sample, seeds, device)
else:
self.choice = Greedy()
self.seeds = seeds
self.do_sample = do_sample
self.dtype = dtype
self.device = device
self.tokenizer = tokenizer
self.fsm_grammar_states = fsm_grammar_states
self.grammars = grammars
self.grammar_types = grammar_types
def __call__(
self,
input_ids: torch.Tensor,
scores: torch.Tensor,
speculate: int,
speculated_ids: Optional[torch.Tensor] = None,
speculative_scores: Optional[torch.Tensor] = None,
verbose=False,
):
if speculated_ids is not None:
B = scores.shape[0] // (speculated_ids.shape[1] + 1)
S = speculated_ids.shape[1] + 1
scores = scores.view(B, S, -1)
else:
B = scores.shape[0]
S = 1
scores = scores.view(B, S, -1)
next_ids = torch.zeros((B, S), device=scores.device, dtype=torch.long)
for j in range(S):
_scores = scores[:, j]
if self.watermark_processor is not None:
_scores = self.watermark_processor(input_ids, _scores)
if self.repetition_processor is not None:
_scores = self.repetition_processor(input_ids, _scores)
if self.frequency_processor is not None:
_scores = self.frequency_processor(input_ids, _scores)
if self.grammar_processor is not None:
_scores = self.grammar_processor(_scores, self.fsm_grammar_states)
for warper in self.warpers:
_scores = warper(input_ids, _scores)
_next_ids = self.choice(_scores)
scores[:, j] = _scores
next_ids[:, j] = _next_ids
next_ids = next_ids.view(B * S)
allscores = scores.view(B * S, -1)
alllogprobs = torch.log_softmax(allscores, -1)
if speculated_ids is not None:
accepted_ids = []
B = next_ids.shape[0] // (speculated_ids.shape[1] + 1)
S = speculated_ids.shape[1] + 1
indices = []
for i in range(B):
_next_ids = next_ids[i * S : (i + 1) * S]
_speculated_ids = speculated_ids[i]
validate_speculative = _next_ids[:-1] == _speculated_ids
index = i * S
accepted = 1
# First is always valid
indices.append(index)
for valid in validate_speculative.tolist():
if valid:
index += 1
accepted += 1
indices.append(index)
else:
break
accepted_ids.append(accepted)
accepted_ids = torch.tensor(
accepted_ids, device=input_ids.device, dtype=input_ids.dtype
)
next_ids = next_ids[indices]
logprobs = alllogprobs[indices]
indices = torch.arange(B, device=input_ids.device) * S
if speculative_scores is not None:
speculative_scores = speculative_scores[indices + accepted_ids - 1]
else:
accepted_ids = torch.ones_like(next_ids)
logprobs = alllogprobs
next_logprobs = torch.gather(logprobs, 1, next_ids.view(-1, 1)).view(-1)
if speculate > 0:
if speculative_scores is not None:
# Medusa provided some scores
speculative_ids = Greedy()(speculative_scores)
else:
# n-gram
speculative_ids = create_n_gram_speculation(
input_ids, next_ids, accepted_ids, speculate, verbose
)
else:
speculative_ids = None
return next_ids, next_logprobs, alllogprobs, accepted_ids, speculative_ids
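    # Hedged worked example of the acceptance logic above (illustrative): with
    # 3 speculated tokens (S = 4), speculated_ids[i] = [a, b, c] and freshly
    # sampled _next_ids = [a, b, x, y]: the first two speculated tokens match,
    # so accepted = 3 tokens are kept (a, b and the correction x), while y is
    # discarded because it was conditioned on the rejected c.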
def advance_grammar(self, next_ids: List[int]):
if self.grammar_processor is not None:
other_new_states = self.grammar_processor.advance_batch(
next_ids, self.fsm_grammar_states
)
self.fsm_grammar_states = other_new_states
return self
def advance_grammar_single(self, grammar_state_index: int, next_id: int):
if self.grammar_processor is not None:
self.fsm_grammar_states[grammar_state_index] = (
self.grammar_processor.advance_at_index(
next_id,
self.fsm_grammar_states[grammar_state_index],
grammar_state_index,
)
)
return self
def filter(self, indices):
if self.watermark_processor is not None:
self.watermark_processor = self.watermark_processor.filter(indices)
if self.repetition_processor is not None:
self.repetition_processor = self.repetition_processor.filter(indices)
if self.frequency_processor is not None:
self.frequency_processor = self.frequency_processor.filter(indices)
if self.grammar_processor is not None:
self.grammar_processor = self.grammar_processor.filter(indices)
filtered_warpers = []
for warper in self.warpers:
filtered_warper = warper.filter(indices)
if filtered_warper is not None:
filtered_warpers.append(filtered_warper)
self.warpers = filtered_warpers
self.seeds = [self.seeds[i] for i in indices]
self.do_sample = [self.do_sample[i] for i in indices]
new_grammars = []
new_fsm_grammar_states = []
new_grammar_types = []
for i in indices:
new_grammars.append(self.grammars[i])
new_fsm_grammar_states.append(self.fsm_grammar_states[i])
new_grammar_types.append(self.grammar_types[i])
self.grammars = new_grammars
self.fsm_grammar_states = new_fsm_grammar_states
self.grammar_types = new_grammar_types
if any(self.do_sample):
self.choice.filter(indices)
else:
self.choice = Greedy()
return self
@classmethod
def from_pb(
cls,
pb: List[generate_pb2.NextTokenChooserParameters],
dtype: torch.dtype,
device: torch.device,
tokenizer: PreTrainedTokenizerBase,
fsm_grammar_states: Optional[List[int]] = None,
) -> "HeterogeneousNextTokenChooser":
return HeterogeneousNextTokenChooser(
watermark=[pb_.watermark for pb_ in pb],
temperature=[pb_.temperature for pb_ in pb],
repetition_penalty=[pb_.repetition_penalty for pb_ in pb],
frequency_penalty=[pb_.frequency_penalty for pb_ in pb],
top_k=[pb_.top_k for pb_ in pb],
top_p=[pb_.top_p for pb_ in pb],
typical_p=[pb_.typical_p for pb_ in pb],
do_sample=[pb_.do_sample for pb_ in pb],
seeds=[pb_.seed for pb_ in pb],
device=device,
dtype=dtype,
tokenizer=tokenizer,
grammars=[pb_.grammar for pb_ in pb],
grammar_types=[pb_.grammar_type for pb_ in pb],
fsm_grammar_states=(
fsm_grammar_states if fsm_grammar_states else [0] * len(pb)
),
)
class Sampling:
def __init__(self, seed: int, device: str = "cpu"):
self.generator = torch.Generator(device)
self.generator.manual_seed(seed)
self.seed = seed
def __call__(self, logits):
probs = torch.nn.functional.softmax(logits, -1)
# Avoid GPU<->CPU sync done by torch multinomial
# See: https://github.com/pytorch/pytorch/blob/925a3788ec5c06db62ca732a0e9425a26a00916f/aten/src/ATen/native/Distributions.cpp#L631-L637
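        # Added note: argmax_i(probs_i / q_i) with q_i ~ Exponential(1) is the
        # classic "exponential race" (Gumbel-max style) identity:
        # P(argmax_i probs_i / q_i = k) = probs_k, so this draws one token from
        # the categorical distribution entirely on-device.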
q = torch.empty_like(probs).exponential_(1, generator=self.generator)
return probs.div_(q).argmax()
class Greedy:
def __call__(self, logits):
return logits.argmax(dim=-1)
class HeterogeneousSampling:
r"""
Mixed greedy and probabilistic sampling. Compute both and pick the right one for each sample.
"""
def __init__(self, do_sample: List[bool], seeds: List[int], device: torch.device):
self.seeds = seeds
self.greedy_indices = []
self.sampling_mapping = {}
for i, (sample, seed) in enumerate(zip(do_sample, seeds)):
if sample:
self.sampling_mapping[i] = Sampling(seed, device)
else:
self.greedy_indices.append(i)
self.greedy = Greedy()
def __call__(self, logits):
out = torch.empty(logits.shape[0], dtype=torch.int64, device=logits.device)
if self.greedy_indices:
# Computing for all indices is faster than slicing
torch.argmax(logits, -1, out=out)
for i, sampling in self.sampling_mapping.items():
out[i] = sampling(logits[i])
return out
def filter(self, indices):
new_greedy_indices = []
new_sampling_mapping = {}
for i, idx in enumerate(indices):
if idx in self.sampling_mapping:
new_sampling_mapping[i] = self.sampling_mapping[idx]
else:
new_greedy_indices.append(i)
self.greedy_indices = new_greedy_indices
self.sampling_mapping = new_sampling_mapping
return self
def batch_top_tokens(
top_n_tokens: List[int],
top_n_tokens_tensor: torch.Tensor,
logprobs: torch.Tensor,
accepted_ids: torch.Tensor,
) -> Tuple[List[List[List[int]]], List[List[List[float]]]]:
"""Find the top n most likely tokens for a batch of generations.
When multiple tokens have equal probabilities and they don't all fit, the
remaining tokens are also returned.
"""
max_top_n = max(top_n_tokens)
# Early exit when top_n_tokens is not used
if max_top_n == 0:
return [[[]]] * len(top_n_tokens), [[[]]] * len(top_n_tokens)
batch_size = accepted_ids.shape[0]
speculate_size = logprobs.shape[0] // batch_size
top_n_tokens_tensor = top_n_tokens_tensor.repeat_interleave(speculate_size)
# Ensure top_n doesn't exceed vocab size
top_n_tokens = [
min(tok, logprobs.size(-1))
for tok in top_n_tokens
for _ in range(speculate_size)
]
# Parallel kthvalue adapted from https://discuss.pytorch.org/t/how-to-efficiently-get-the-k-th-largest-values-in-parallel/160529/2
# Sorted topk is faster than torch.sort() since we only need a small subset
sorted_top_k = torch.topk(logprobs, k=max_top_n, dim=-1, sorted=True).values
nth_highest = torch.gather(
sorted_top_k, 1, (top_n_tokens_tensor - 1).clip(min=0).unsqueeze(1)
)
nth_highest[nth_highest == -float("inf")] = torch.finfo(logprobs.dtype).min
# Find the new "fuzzy" top n values
top_n_indices = (logprobs >= nth_highest).nonzero()
_, top_n_ishes = torch.unique_consecutive(top_n_indices[:, 0], return_counts=True)
k = 1 if top_n_ishes.numel() == 0 else top_n_ishes.max()
# Take a new topk for these new max n values
top_k = torch.topk(logprobs, k=k, dim=1, sorted=True)
top_n_ishes = top_n_ishes.tolist()
top_indices = top_k.indices.tolist()
top_values = top_k.values.tolist()
batch_top_token_ids = []
batch_top_token_logprobs = []
accepted_ids_list = accepted_ids.tolist()
for i, n_accepted_ids in enumerate(accepted_ids_list):
start = speculate_size * i
stop = speculate_size * (i + 1)
_top_indices = top_indices[start:stop]
_top_values = top_values[start:stop]
_top_n_ishes = top_n_ishes[start:stop]
_top_n_tokens = top_n_tokens[start:stop]
_top_indices = _top_indices[:n_accepted_ids]
_top_values = _top_values[:n_accepted_ids]
_top_n_ishes = _top_n_ishes[:n_accepted_ids]
_top_n_tokens = _top_n_tokens[:n_accepted_ids]
row_top_token_ids = []
row_top_token_logprobs = []
for idxs, vals, n, req_n in zip(
_top_indices, _top_values, _top_n_ishes, _top_n_tokens
):
indices = idxs[:n] if req_n > 0 else []
values = vals[:n] if req_n > 0 else []
row_top_token_ids.append(indices)
row_top_token_logprobs.append(values)
batch_top_token_ids.append(row_top_token_ids)
batch_top_token_logprobs.append(row_top_token_logprobs)
return batch_top_token_ids, batch_top_token_logprobs
| text-generation-inference/server/text_generation_server/utils/tokens.py/0 | {
"file_path": "text-generation-inference/server/text_generation_server/utils/tokens.py",
"repo_id": "text-generation-inference",
"token_count": 11317
} |
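The mixed chooser above avoids per-row branching by running the cheap greedy path over the whole batch and then patching only the rows that requested sampling. A minimal sketch of that pattern, assuming only `torch` is installed (variable names are illustrative, not part of the server's API):

```py
import torch

do_sample = [False, True, False, True]       # per-request sampling flags
logits = torch.randn(len(do_sample), 32000)  # (batch, vocab_size) scores

# Greedy for every row in a single call...
out = torch.argmax(logits, dim=-1)

# ...then overwrite only the rows that asked for sampling.
for i, sample in enumerate(do_sample):
    if sample:
        probs = torch.softmax(logits[i], dim=-1)
        out[i] = torch.multinomial(probs, num_samples=1)

print(out.shape)  # torch.Size([4])
```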
import {
bpeDecoder,
byteFallbackDecoder,
ctcDecoder,
fuseDecoder,
metaspaceDecoder,
replaceDecoder,
sequenceDecoder,
stripDecoder,
wordPieceDecoder,
} from '../../'
describe('wordPieceDecoder', () => {
it('accepts `undefined` as first parameter', () => {
expect(wordPieceDecoder(undefined)).toBeDefined()
})
it('accepts `undefined` as second parameter', () => {
expect(wordPieceDecoder('test', undefined)).toBeDefined()
})
it('can decode arrays of strings', () => {
expect(wordPieceDecoder().decode(['Hel', '##lo', 'there', 'my', 'fr', '##iend'])).toEqual('Hello there my friend')
})
})
describe('byteFallbackDecoder', () => {
it('accepts `undefined` as first parameter', () => {
expect(byteFallbackDecoder()).toBeDefined()
})
it('can decode arrays of strings', () => {
expect(byteFallbackDecoder().decode(['Hel', 'lo'])).toEqual('Hello')
    expect(byteFallbackDecoder().decode(['<0x61>'])).toEqual('a')
    expect(byteFallbackDecoder().decode(['My', ' na', 'me'])).toEqual('My name')
expect(byteFallbackDecoder().decode(['<0xE5>'])).toEqual('�')
expect(byteFallbackDecoder().decode(['<0xE5>', '<0x8f>'])).toEqual('��')
expect(byteFallbackDecoder().decode(['<0xE5>', '<0x8f>', '<0xab>'])).toEqual('叫')
expect(byteFallbackDecoder().decode(['<0xE5>', '<0x8f>', 'a'])).toEqual('��a')
expect(byteFallbackDecoder().decode(['<0xE5>', '<0x8f>', '<0xab>', 'a'])).toEqual('叫a')
})
})
describe('replaceDecoder', () => {
it('can decode arrays of strings', () => {
expect(replaceDecoder('_', ' ').decode(['Hello', '_Hello'])).toEqual('Hello Hello')
})
})
describe('fuseDecoder', () => {
it('accepts `undefined` as first parameter', () => {
expect(fuseDecoder()).toBeDefined()
})
it('can decode arrays of strings', () => {
expect(fuseDecoder().decode(['Hel', 'lo'])).toEqual('Hello')
})
})
describe('stripDecoder', () => {
it('accepts `undefined` as first parameter', () => {
expect(stripDecoder('_', 0, 0)).toBeDefined()
})
it('can decode arrays of strings', () => {
expect(stripDecoder('_', 1, 0).decode(['_Hel', 'lo', '__there'])).toEqual('Hello_there')
})
})
describe('metaspaceDecoder', () => {
it('accepts `undefined` as first parameter', () => {
expect(metaspaceDecoder(undefined)).toBeDefined()
})
it('accepts `undefined` as second parameter', () => {
expect(metaspaceDecoder('t', undefined)).toBeDefined()
})
  it('decodes metaspace-prefixed tokens', () => {
expect(metaspaceDecoder().decode(['▁Hello'])).toEqual('Hello')
})
})
describe('bpeDecoder', () => {
it('accepts `undefined` as parameter', () => {
expect(bpeDecoder(undefined)).toBeDefined()
})
})
describe('ctcDecoder', () => {
it('accepts `undefined` as parameter', () => {
expect(ctcDecoder(undefined)).toBeDefined()
})
  it('decodes correctly', () => {
expect(ctcDecoder().decode(['<pad>', 'h', 'h', 'e', 'e', 'l', 'l', '<pad>', 'l', 'l', 'o'])).toEqual('hello')
})
})
describe('sequenceDecoder', () => {
it('accepts `empty list` as parameter', () => {
expect(sequenceDecoder([])).toBeDefined()
})
  it('decodes correctly', () => {
expect(
sequenceDecoder([ctcDecoder(), metaspaceDecoder()]).decode(['▁', '▁', 'H', 'H', 'i', 'i', '▁', 'y', 'o', 'u']),
).toEqual('Hi you')
})
})
| tokenizers/bindings/node/lib/bindings/decoders.test.ts/0 | {
"file_path": "tokenizers/bindings/node/lib/bindings/decoders.test.ts",
"repo_id": "tokenizers",
"token_count": 1393
} |
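The byte-fallback expectations above follow directly from UTF-8: `叫` is the three-byte sequence `E5 8F AB`, so the byte tokens only decode once all three are present. A plain-Python sketch of the idea (no tokenizers dependency; note the real decoder's replacement-character count for partial sequences differs slightly from Python's `errors="replace"`):

```py
import re

def byte_fallback_decode(tokens):
    # Fold <0xNN> tokens back into raw bytes, pass other tokens through,
    # then decode the buffer as UTF-8 with U+FFFD for invalid sequences.
    buf = bytearray()
    for tok in tokens:
        m = re.fullmatch(r"<0x([0-9A-Fa-f]{2})>", tok)
        if m:
            buf.append(int(m.group(1), 16))
        else:
            buf.extend(tok.encode("utf-8"))
    return buf.decode("utf-8", errors="replace")

print(byte_fallback_decode(["<0xE5>", "<0x8f>", "<0xab>"]))  # 叫
print(byte_fallback_decode(["<0xE5>"]))                      # �
```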
use crate::models::Model;
use napi_derive::napi;
use std::sync::{Arc, RwLock};
use tokenizers as tk;
use tokenizers::models::TrainerWrapper;
#[napi]
pub struct Trainer {
trainer: Option<Arc<RwLock<TrainerWrapper>>>,
}
impl From<TrainerWrapper> for Trainer {
fn from(trainer: TrainerWrapper) -> Self {
Self {
trainer: Some(Arc::new(RwLock::new(trainer))),
}
}
}
impl tk::Trainer for Trainer {
type Model = Model;
fn should_show_progress(&self) -> bool {
self
.trainer
.as_ref()
.expect("Uninitialized Trainer")
.read()
.unwrap()
.should_show_progress()
}
fn train(&self, model: &mut Self::Model) -> tk::Result<Vec<tk::AddedToken>> {
let special_tokens = self
.trainer
.as_ref()
.ok_or("Uninitialized Trainer")?
.read()
.unwrap()
.train(
&mut model
.model
.as_ref()
.ok_or("Uninitialized Model")?
.write()
.unwrap(),
)?;
Ok(special_tokens)
}
fn feed<I, S, F>(&mut self, iterator: I, process: F) -> tk::Result<()>
where
I: Iterator<Item = S> + Send,
S: AsRef<str> + Send,
F: Fn(&str) -> tk::Result<Vec<String>> + Sync,
{
self
.trainer
.as_ref()
.ok_or("Uninitialized Trainer")?
.write()
.unwrap()
.feed(iterator, process)
}
}
| tokenizers/bindings/node/src/trainers.rs/0 | {
"file_path": "tokenizers/bindings/node/src/trainers.rs",
"repo_id": "tokenizers",
"token_count": 641
} |
import argparse
import glob
from tokenizers import BertWordPieceTokenizer
parser = argparse.ArgumentParser()
parser.add_argument(
"--files",
default=None,
metavar="path",
type=str,
required=True,
help="The files to use as training; accept '**/*.txt' type of patterns \
if enclosed in quotes",
)
parser.add_argument(
"--out",
default="./",
type=str,
help="Path to the output directory, where the files will be saved",
)
parser.add_argument("--name", default="bert-wordpiece", type=str, help="The name of the output vocab files")
args = parser.parse_args()
files = glob.glob(args.files)
if not files:
print(f"File does not exist: {args.files}")
exit(1)
# Initialize an empty tokenizer
tokenizer = BertWordPieceTokenizer(
clean_text=True,
handle_chinese_chars=True,
strip_accents=True,
lowercase=True,
)
# And then train
tokenizer.train(
files,
vocab_size=10000,
min_frequency=2,
show_progress=True,
special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
limit_alphabet=1000,
wordpieces_prefix="##",
)
# Save the files
tokenizer.save_model(args.out, args.name)
| tokenizers/bindings/python/examples/train_bert_wordpiece.py/0 | {
"file_path": "tokenizers/bindings/python/examples/train_bert_wordpiece.py",
"repo_id": "tokenizers",
"token_count": 472
} |
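Assuming the script above is run as, say, `python train_bert_wordpiece.py --files "data/*.txt" --out ./out --name bert-wordpiece`, `save_model` should leave a `bert-wordpiece-vocab.txt` in the output directory (the `<name>-vocab.txt` naming is the WordPiece convention). Reloading it might then look like this sketch:

```py
from tokenizers import BertWordPieceTokenizer

# Hypothetical output path from the invocation described above.
tokenizer = BertWordPieceTokenizer("./out/bert-wordpiece-vocab.txt", lowercase=True)
encoding = tokenizer.encode("Hello, tokenizers!")
print(encoding.tokens)  # e.g. ['[CLS]', 'hello', ',', ..., '[SEP]']
```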
# Generated content DO NOT EDIT
class Model:
"""
Base class for all models
The model represents the actual tokenization algorithm. This is the part that
will contain and manage the learned vocabulary.
This class cannot be constructed directly. Please use one of the concrete models.
"""
def get_trainer(self):
"""
Get the associated :class:`~tokenizers.trainers.Trainer`
Retrieve the :class:`~tokenizers.trainers.Trainer` associated to this
:class:`~tokenizers.models.Model`.
Returns:
:class:`~tokenizers.trainers.Trainer`: The Trainer used to train this model
"""
pass
def id_to_token(self, id):
"""
Get the token associated to an ID
Args:
id (:obj:`int`):
An ID to convert to a token
Returns:
:obj:`str`: The token associated to the ID
"""
pass
def save(self, folder, prefix):
"""
Save the current model
Save the current model in the given folder, using the given prefix for the various
files that will get created.
Any file with the same name that already exists in this folder will be overwritten.
Args:
folder (:obj:`str`):
The path to the target folder in which to save the various files
prefix (:obj:`str`, `optional`):
An optional prefix, used to prefix each file name
Returns:
:obj:`List[str]`: The list of saved files
"""
pass
def token_to_id(self, tokens):
"""
Get the ID associated to a token
Args:
token (:obj:`str`):
A token to convert to an ID
Returns:
:obj:`int`: The ID associated to the token
"""
pass
def tokenize(self, sequence):
"""
Tokenize a sequence
Args:
sequence (:obj:`str`):
A sequence to tokenize
Returns:
A :obj:`List` of :class:`~tokenizers.Token`: The generated tokens
"""
pass
class BPE(Model):
"""
An implementation of the BPE (Byte-Pair Encoding) algorithm
Args:
vocab (:obj:`Dict[str, int]`, `optional`):
A dictionary of string keys and their ids :obj:`{"am": 0,...}`
merges (:obj:`List[Tuple[str, str]]`, `optional`):
A list of pairs of tokens (:obj:`Tuple[str, str]`) :obj:`[("a", "b"),...]`
cache_capacity (:obj:`int`, `optional`):
The number of words that the BPE cache can contain. The cache allows
to speed-up the process by keeping the result of the merge operations
for a number of words.
dropout (:obj:`float`, `optional`):
A float between 0 and 1 that represents the BPE dropout to use.
unk_token (:obj:`str`, `optional`):
The unknown token to be used by the model.
continuing_subword_prefix (:obj:`str`, `optional`):
The prefix to attach to subword units that don't represent a beginning of word.
end_of_word_suffix (:obj:`str`, `optional`):
The suffix to attach to subword units that represent an end of word.
fuse_unk (:obj:`bool`, `optional`):
Whether to fuse any subsequent unknown tokens into a single one
byte_fallback (:obj:`bool`, `optional`):
Whether to use spm byte-fallback trick (defaults to False)
ignore_merges (:obj:`bool`, `optional`):
Whether or not to match tokens with the vocab before using merges.
"""
def __init__(
self,
vocab=None,
merges=None,
cache_capacity=None,
dropout=None,
unk_token=None,
continuing_subword_prefix=None,
end_of_word_suffix=None,
fuse_unk=None,
byte_fallback=False,
ignore_merges=False,
):
pass
@staticmethod
    def from_file(vocab, merges, **kwargs):
"""
Instantiate a BPE model from the given files.
This method is roughly equivalent to doing::
vocab, merges = BPE.read_file(vocab_filename, merges_filename)
bpe = BPE(vocab, merges)
If you don't need to keep the :obj:`vocab, merges` values lying around,
this method is more optimized than manually calling
:meth:`~tokenizers.models.BPE.read_file` to initialize a :class:`~tokenizers.models.BPE`
Args:
vocab (:obj:`str`):
The path to a :obj:`vocab.json` file
merges (:obj:`str`):
The path to a :obj:`merges.txt` file
Returns:
:class:`~tokenizers.models.BPE`: An instance of BPE loaded from these files
"""
pass
def get_trainer(self):
"""
Get the associated :class:`~tokenizers.trainers.Trainer`
Retrieve the :class:`~tokenizers.trainers.Trainer` associated to this
:class:`~tokenizers.models.Model`.
Returns:
:class:`~tokenizers.trainers.Trainer`: The Trainer used to train this model
"""
pass
def id_to_token(self, id):
"""
Get the token associated to an ID
Args:
id (:obj:`int`):
An ID to convert to a token
Returns:
:obj:`str`: The token associated to the ID
"""
pass
@staticmethod
    def read_file(vocab, merges):
"""
Read a :obj:`vocab.json` and a :obj:`merges.txt` files
This method provides a way to read and parse the content of these files,
returning the relevant data structures. If you want to instantiate some BPE models
from memory, this method gives you the expected input from the standard files.
Args:
vocab (:obj:`str`):
The path to a :obj:`vocab.json` file
merges (:obj:`str`):
The path to a :obj:`merges.txt` file
Returns:
A :obj:`Tuple` with the vocab and the merges:
The vocabulary and merges loaded into memory
"""
pass
def save(self, folder, prefix):
"""
Save the current model
Save the current model in the given folder, using the given prefix for the various
files that will get created.
Any file with the same name that already exists in this folder will be overwritten.
Args:
folder (:obj:`str`):
The path to the target folder in which to save the various files
prefix (:obj:`str`, `optional`):
An optional prefix, used to prefix each file name
Returns:
:obj:`List[str]`: The list of saved files
"""
pass
def token_to_id(self, tokens):
"""
Get the ID associated to a token
Args:
token (:obj:`str`):
A token to convert to an ID
Returns:
:obj:`int`: The ID associated to the token
"""
pass
def tokenize(self, sequence):
"""
Tokenize a sequence
Args:
sequence (:obj:`str`):
A sequence to tokenize
Returns:
A :obj:`List` of :class:`~tokenizers.Token`: The generated tokens
"""
pass
class Unigram(Model):
"""
An implementation of the Unigram algorithm
    Args:
        vocab (:obj:`List[Tuple[str, float]]`, `optional`):
            A list of vocabulary items and their relative score [("am", -0.2442),...]
        unk_id (:obj:`int`, `optional`):
            The id of the unknown token in the vocabulary
        byte_fallback (:obj:`bool`, `optional`):
            Whether to use the spm byte-fallback trick for unknown tokens (defaults to False)
    """
def __init__(self, vocab, unk_id, byte_fallback):
pass
def get_trainer(self):
"""
Get the associated :class:`~tokenizers.trainers.Trainer`
Retrieve the :class:`~tokenizers.trainers.Trainer` associated to this
:class:`~tokenizers.models.Model`.
Returns:
:class:`~tokenizers.trainers.Trainer`: The Trainer used to train this model
"""
pass
def id_to_token(self, id):
"""
Get the token associated to an ID
Args:
id (:obj:`int`):
An ID to convert to a token
Returns:
:obj:`str`: The token associated to the ID
"""
pass
def save(self, folder, prefix):
"""
Save the current model
Save the current model in the given folder, using the given prefix for the various
files that will get created.
Any file with the same name that already exists in this folder will be overwritten.
Args:
folder (:obj:`str`):
The path to the target folder in which to save the various files
prefix (:obj:`str`, `optional`):
An optional prefix, used to prefix each file name
Returns:
:obj:`List[str]`: The list of saved files
"""
pass
def token_to_id(self, tokens):
"""
Get the ID associated to a token
Args:
token (:obj:`str`):
A token to convert to an ID
Returns:
:obj:`int`: The ID associated to the token
"""
pass
def tokenize(self, sequence):
"""
Tokenize a sequence
Args:
sequence (:obj:`str`):
A sequence to tokenize
Returns:
A :obj:`List` of :class:`~tokenizers.Token`: The generated tokens
"""
pass
class WordLevel(Model):
"""
An implementation of the WordLevel algorithm
Most simple tokenizer model based on mapping tokens to their corresponding id.
Args:
vocab (:obj:`str`, `optional`):
A dictionary of string keys and their ids :obj:`{"am": 0,...}`
unk_token (:obj:`str`, `optional`):
The unknown token to be used by the model.
"""
def __init__(self, vocab, unk_token):
pass
@staticmethod
def from_file(vocab, unk_token):
"""
Instantiate a WordLevel model from the given file
This method is roughly equivalent to doing::
vocab = WordLevel.read_file(vocab_filename)
wordlevel = WordLevel(vocab)
If you don't need to keep the :obj:`vocab` values lying around, this method is
more optimized than manually calling :meth:`~tokenizers.models.WordLevel.read_file` to
initialize a :class:`~tokenizers.models.WordLevel`
Args:
vocab (:obj:`str`):
The path to a :obj:`vocab.json` file
Returns:
:class:`~tokenizers.models.WordLevel`: An instance of WordLevel loaded from file
"""
pass
def get_trainer(self):
"""
Get the associated :class:`~tokenizers.trainers.Trainer`
Retrieve the :class:`~tokenizers.trainers.Trainer` associated to this
:class:`~tokenizers.models.Model`.
Returns:
:class:`~tokenizers.trainers.Trainer`: The Trainer used to train this model
"""
pass
def id_to_token(self, id):
"""
Get the token associated to an ID
Args:
id (:obj:`int`):
An ID to convert to a token
Returns:
:obj:`str`: The token associated to the ID
"""
pass
@staticmethod
def read_file(vocab):
"""
Read a :obj:`vocab.json`
This method provides a way to read and parse the content of a vocabulary file,
returning the relevant data structures. If you want to instantiate some WordLevel models
from memory, this method gives you the expected input from the standard files.
Args:
vocab (:obj:`str`):
The path to a :obj:`vocab.json` file
Returns:
:obj:`Dict[str, int]`: The vocabulary as a :obj:`dict`
"""
pass
def save(self, folder, prefix):
"""
Save the current model
Save the current model in the given folder, using the given prefix for the various
files that will get created.
Any file with the same name that already exists in this folder will be overwritten.
Args:
folder (:obj:`str`):
The path to the target folder in which to save the various files
prefix (:obj:`str`, `optional`):
An optional prefix, used to prefix each file name
Returns:
:obj:`List[str]`: The list of saved files
"""
pass
def token_to_id(self, tokens):
"""
Get the ID associated to a token
Args:
token (:obj:`str`):
A token to convert to an ID
Returns:
:obj:`int`: The ID associated to the token
"""
pass
def tokenize(self, sequence):
"""
Tokenize a sequence
Args:
sequence (:obj:`str`):
A sequence to tokenize
Returns:
A :obj:`List` of :class:`~tokenizers.Token`: The generated tokens
"""
pass
class WordPiece(Model):
"""
An implementation of the WordPiece algorithm
Args:
vocab (:obj:`Dict[str, int]`, `optional`):
A dictionary of string keys and their ids :obj:`{"am": 0,...}`
unk_token (:obj:`str`, `optional`):
The unknown token to be used by the model.
max_input_chars_per_word (:obj:`int`, `optional`):
The maximum number of characters to authorize in a single word.
"""
def __init__(self, vocab, unk_token, max_input_chars_per_word):
pass
@staticmethod
def from_file(vocab, **kwargs):
"""
Instantiate a WordPiece model from the given file
This method is roughly equivalent to doing::
vocab = WordPiece.read_file(vocab_filename)
wordpiece = WordPiece(vocab)
If you don't need to keep the :obj:`vocab` values lying around, this method is
more optimized than manually calling :meth:`~tokenizers.models.WordPiece.read_file` to
initialize a :class:`~tokenizers.models.WordPiece`
Args:
vocab (:obj:`str`):
The path to a :obj:`vocab.txt` file
Returns:
:class:`~tokenizers.models.WordPiece`: An instance of WordPiece loaded from file
"""
pass
def get_trainer(self):
"""
Get the associated :class:`~tokenizers.trainers.Trainer`
Retrieve the :class:`~tokenizers.trainers.Trainer` associated to this
:class:`~tokenizers.models.Model`.
Returns:
:class:`~tokenizers.trainers.Trainer`: The Trainer used to train this model
"""
pass
def id_to_token(self, id):
"""
Get the token associated to an ID
Args:
id (:obj:`int`):
An ID to convert to a token
Returns:
:obj:`str`: The token associated to the ID
"""
pass
@staticmethod
def read_file(vocab):
"""
Read a :obj:`vocab.txt` file
This method provides a way to read and parse the content of a standard `vocab.txt`
file as used by the WordPiece Model, returning the relevant data structures. If you
want to instantiate some WordPiece models from memory, this method gives you the
expected input from the standard files.
Args:
vocab (:obj:`str`):
The path to a :obj:`vocab.txt` file
Returns:
:obj:`Dict[str, int]`: The vocabulary as a :obj:`dict`
"""
pass
def save(self, folder, prefix):
"""
Save the current model
Save the current model in the given folder, using the given prefix for the various
files that will get created.
Any file with the same name that already exists in this folder will be overwritten.
Args:
folder (:obj:`str`):
The path to the target folder in which to save the various files
prefix (:obj:`str`, `optional`):
An optional prefix, used to prefix each file name
Returns:
:obj:`List[str]`: The list of saved files
"""
pass
def token_to_id(self, tokens):
"""
Get the ID associated to a token
Args:
token (:obj:`str`):
A token to convert to an ID
Returns:
:obj:`int`: The ID associated to the token
"""
pass
def tokenize(self, sequence):
"""
Tokenize a sequence
Args:
sequence (:obj:`str`):
A sequence to tokenize
Returns:
A :obj:`List` of :class:`~tokenizers.Token`: The generated tokens
"""
pass
| tokenizers/bindings/python/py_src/tokenizers/models/__init__.pyi/0 | {
"file_path": "tokenizers/bindings/python/py_src/tokenizers/models/__init__.pyi",
"repo_id": "tokenizers",
"token_count": 7626
} |
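The stubs above describe the runtime classes; here is a short sketch of instantiating two of them in memory (toy vocabularies, purely illustrative):

```py
from tokenizers.models import BPE, WordLevel

# WordLevel: a direct token -> id mapping with an unknown token.
wl = WordLevel(vocab={"hello": 0, "world": 1, "[UNK]": 2}, unk_token="[UNK]")
print(wl.token_to_id("hello"))  # 0
print(wl.id_to_token(1))        # 'world'

# BPE: a vocab plus an ordered list of merges; every merge result must
# itself be present in the vocab.
bpe = BPE(
    vocab={"h": 0, "e": 1, "l": 2, "o": 3, "he": 4, "ll": 5, "hell": 6, "hello": 7},
    merges=[("h", "e"), ("l", "l"), ("he", "ll"), ("hell", "o")],
)
# Token objects carry id/value/offsets.
print([t.value for t in bpe.tokenize("hello")])  # ['hello'] once all merges apply
```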
import tokenizers
from argparse import ArgumentParser
import sentencepiece as spm
from collections import Counter
import json
import os
import datetime
try:
from termcolor import colored
has_color = True
except Exception:
has_color = False
def main():
parser = ArgumentParser("SentencePiece parity checker")
parser.add_argument(
"--input-file",
"-i",
type=str,
required=True,
help="Which files do you want to train from",
)
parser.add_argument(
"--model-file",
"-m",
type=str,
required=False,
default=None,
help="Use a pretrained token file",
)
parser.add_argument(
"--model-prefix",
type=str,
default="spm_parity",
help="Model prefix for spm_train",
)
parser.add_argument(
"--vocab-size",
"-v",
type=int,
default=8000,
help="Vocab size for spm_train",
)
parser.add_argument(
"--verbose",
action="store_true",
help="Verbosity",
)
parser.add_argument(
"--train",
action="store_true",
help="Instead of checking the encoder part, we check the trainer part",
)
parser.add_argument(
"--from-spm",
action="store_true",
help="Directly load the spm file with it's own normalizer",
)
args = parser.parse_args()
trained = False
if args.model_file is None:
spm.SentencePieceTrainer.Train(
f"--input={args.input_file} --model_prefix={args.model_prefix}"
f" --character_coverage=1.0"
f" --max_sentence_length=40000"
f" --num_threads=1"
f" --vocab_size={args.vocab_size}"
)
trained = True
args.model_file = f"{args.model_prefix}.model"
try:
if args.train:
check_train(args)
else:
check_encode(args)
finally:
if trained:
os.remove(f"{args.model_prefix}.model")
os.remove(f"{args.model_prefix}.vocab")
def check_train(args):
sp = spm.SentencePieceProcessor()
sp.Load(args.model_file)
tokenizer = tokenizers.SentencePieceUnigramTokenizer()
tokenizer.train(args.input_file, show_progress=False)
spm_tokens = 0
tokenizer_tokens = 0
with open(args.input_file, "r") as f:
for i, line in enumerate(f):
line = line.strip()
ids = sp.EncodeAsIds(line)
encoded = tokenizer.encode(line)
spm_tokens += len(ids)
tokenizer_tokens += len(encoded.ids)
vocab = [0 for i in range(args.vocab_size)]
spm_vocab = [0 for i in range(args.vocab_size)]
for token, index in tokenizer.get_vocab().items():
vocab[index] = token
for i in range(args.vocab_size):
spm_vocab[i] = sp.id_to_piece(i)
    # 0 is unk in tokenizers; 0, 1, 2 are unk, bos, eos in spm by default.
for i, (token, spm_token) in enumerate(zip(vocab[1:], spm_vocab[3:])):
if token != spm_token:
print(f"First different token is token {i} ({token} != {spm_token})")
break
print(f"Tokenizer used {tokenizer_tokens}, where spm used {spm_tokens}")
    assert tokenizer_tokens < spm_tokens, "Our trainer should be more efficient than the SPM one"
    print("Ok, our trainer is more efficient than the SPM one")
def check_diff(spm_diff, tok_diff, sp, tok):
if spm_diff == list(reversed(tok_diff)):
# AAA -> AA+A vs A+AA case.
return True
elif len(spm_diff) == len(tok_diff) and tok.decode(spm_diff) == tok.decode(tok_diff):
# Second order OK
# Barrich -> Barr + ich vs Bar + rich
return True
spm_reencoded = sp.encode(sp.decode(spm_diff))
tok_reencoded = tok.encode(tok.decode(spm_diff)).ids
if spm_reencoded != spm_diff and spm_reencoded == tok_reencoded:
# Type 3 error.
# Snehagatha ->
# Sne, h, aga, th, a
# Sne, ha, gat, ha
        # Re-encoding the decoded diff with spm does not even recover what spm
        # originally gave us; it does match the tokenizer output, however...
return True
return False
def check_details(line, spm_ids, tok_ids, sp, tok):
    # Encodings can differ yet produce the same result, e.g. AAA -> A + AA vs AA + A.
# We can check that we use at least exactly the same number of tokens.
for i, (spm_id, tok_id) in enumerate(zip(spm_ids, tok_ids)):
if spm_id != tok_id:
break
first = i
for i, (spm_id, tok_id) in enumerate(zip(reversed(spm_ids), reversed(tok_ids))):
if spm_id != tok_id:
break
last = len(spm_ids) - i
spm_diff = spm_ids[first:last]
tok_diff = tok_ids[first:last]
if check_diff(spm_diff, tok_diff, sp, tok):
return True
if last - first > 5:
        # The window may contain two independent problems; attempt to subdivide it into smaller ones
spms = Counter(spm_ids[first:last])
toks = Counter(tok_ids[first:last])
removable_tokens = {spm_ for (spm_, si) in spms.items() if toks.get(spm_, 0) == si}
min_width = 3
for i in range(last - first - min_width):
if all(spm_ids[first + i + j] in removable_tokens for j in range(min_width)):
possible_matches = [
k
for k in range(last - first - min_width)
if tok_ids[first + k : first + k + min_width] == spm_ids[first + i : first + i + min_width]
]
for j in possible_matches:
if check_diff(spm_ids[first : first + i], tok_ids[first : first + j], sp, tok) and check_details(
line,
spm_ids[first + i : last],
tok_ids[first + j : last],
sp,
tok,
):
return True
print(f"Spm: {[tok.decode([spm_ids[i]]) for i in range(first, last)]}")
try:
print(f"Tok: {[tok.decode([tok_ids[i]]) for i in range(first, last)]}")
except Exception:
pass
ok_start = tok.decode(spm_ids[:first])
ok_end = tok.decode(spm_ids[last:])
wrong = tok.decode(spm_ids[first:last])
print()
if has_color:
print(f"{colored(ok_start, 'grey')}{colored(wrong, 'red')}{colored(ok_end, 'grey')}")
else:
print(wrong)
return False
def check_encode(args):
sp = spm.SentencePieceProcessor()
sp.Load(args.model_file)
if args.from_spm:
tok = tokenizers.SentencePieceUnigramTokenizer.from_spm(args.model_file)
else:
vocab = [(sp.id_to_piece(i), sp.get_score(i)) for i in range(sp.piece_size())]
unk_id = sp.unk_id()
tok = tokenizers.SentencePieceUnigramTokenizer(vocab, unk_id)
perfect = 0
imperfect = 0
wrong = 0
now = datetime.datetime.now
spm_total_time = datetime.timedelta(seconds=0)
tok_total_time = datetime.timedelta(seconds=0)
with open(args.input_file, "r", encoding="utf-8-sig") as f:
for i, line in enumerate(f):
line = line.strip()
start = now()
ids = sp.EncodeAsIds(line)
spm_time = now()
encoded = tok.encode(line)
tok_time = now()
spm_total_time += spm_time - start
tok_total_time += tok_time - spm_time
if args.verbose:
if i % 10000 == 0:
print(f"({perfect} / {imperfect} / {wrong} ----- {perfect + imperfect + wrong})")
print(f"SPM: {spm_total_time} - TOK: {tok_total_time}")
if ids != encoded.ids:
if check_details(line, ids, encoded.ids, sp, tok):
imperfect += 1
continue
else:
wrong += 1
else:
perfect += 1
assert (
ids == encoded.ids
), f"line {i}: {line} : \n\n{ids}\n{encoded.ids}\n{list(zip(encoded.ids, encoded.tokens))}"
print(f"({perfect} / {imperfect} / {wrong} ----- {perfect + imperfect + wrong})")
total = perfect + imperfect + wrong
print(f"Accuracy {perfect * 100 / total:.2f} Slowdown : {tok_total_time/ spm_total_time:.2f}")
if __name__ == "__main__":
main()
| tokenizers/bindings/python/scripts/spm_parity_check.py/0 | {
"file_path": "tokenizers/bindings/python/scripts/spm_parity_check.py",
"repo_id": "tokenizers",
"token_count": 4110
} |
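`check_details` first narrows a mismatch to the smallest differing window by trimming the common prefix and suffix of the two id sequences. The core of that step, extracted as a standalone sketch (like the original, it slices both sequences with the same bounds, so the window is approximate when the lengths differ):

```py
def diff_window(spm_ids, tok_ids):
    # Count matching ids from the left.
    first = 0
    for a, b in zip(spm_ids, tok_ids):
        if a != b:
            break
        first += 1
    # Count matching ids from the right.
    last = len(spm_ids)
    for a, b in zip(reversed(spm_ids), reversed(tok_ids)):
        if a != b:
            break
        last -= 1
    return first, last

print(diff_window([1, 5, 6, 9], [1, 7, 8, 9]))  # (1, 3): ids 5,6 vs 7,8 disagree
```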
use tokenizers as tk;
use pyo3::exceptions;
use pyo3::prelude::*;
use pyo3::types::*;
use super::{
DestroyPtr, PyNormalizedString, PyNormalizedStringRefMut, RefMutContainer, RefMutGuard,
};
use crate::encoding::PyEncoding;
use crate::error::ToPyResult;
use crate::token::PyToken;
use tk::{OffsetReferential, OffsetType, Offsets, PreTokenizedString, Token};
fn split(pretok: &mut PreTokenizedString, func: &Bound<'_, PyAny>) -> PyResult<()> {
if !func.is_callable() {
Err(exceptions::PyTypeError::new_err(
"`split` expect a callable with the signature: \
`fn(index: int, normalized: NormalizedString) -> List[NormalizedString]`",
))
} else {
ToPyResult(pretok.split(|i, normalized| {
let output = func.call((i, PyNormalizedString::from(normalized)), None)?;
Ok(output
.extract::<Vec<PyNormalizedString>>()?
.into_iter()
.map(tk::NormalizedString::from))
}))
.into()
}
}
fn normalize(pretok: &mut PreTokenizedString, func: &Bound<'_, PyAny>) -> PyResult<()> {
if !func.is_callable() {
Err(exceptions::PyTypeError::new_err(
"`normalize` expect a callable with the signature: \
`fn(normalized: NormalizedString)`",
))
} else {
ToPyResult(pretok.normalize(|normalized| {
let norm = PyNormalizedStringRefMut::new(normalized);
func.call((norm.get().clone(),), None)?;
Ok(())
}))
.into()
}
}
fn tokenize(pretok: &mut PreTokenizedString, func: &Bound<'_, PyAny>) -> PyResult<()> {
if !func.is_callable() {
Err(exceptions::PyTypeError::new_err(
"`tokenize` expect a callable with the signature: \
`fn(str) -> List[Token]`",
))
} else {
ToPyResult(pretok.tokenize(|normalized| {
let output = func.call((normalized.get(),), None)?;
Ok(output
.extract::<Bound<PyList>>()?
.into_iter()
.map(|obj| Ok(Token::from(obj.extract::<PyToken>()?)))
.collect::<PyResult<Vec<_>>>()?)
}))
.into()
}
}
/// Wrapper around `OffsetReferential`, parsed from the Python strings `"original"` or `"normalized"`
#[derive(Clone)]
pub struct PyOffsetReferential(OffsetReferential);
impl FromPyObject<'_> for PyOffsetReferential {
fn extract_bound(obj: &Bound<'_, PyAny>) -> PyResult<Self> {
let s = obj.extract::<String>()?;
Ok(Self(match s.as_ref() {
"original" => Ok(OffsetReferential::Original),
"normalized" => Ok(OffsetReferential::Normalized),
_ => Err(exceptions::PyValueError::new_err(
"Wrong value for OffsetReferential, expected one of `original, normalized`",
)),
}?))
}
}
#[derive(Clone)]
pub struct PyOffsetType(OffsetType);
impl FromPyObject<'_> for PyOffsetType {
fn extract_bound(obj: &Bound<'_, PyAny>) -> PyResult<Self> {
let s = obj.extract::<String>()?;
Ok(Self(match s.as_ref() {
"byte" => Ok(OffsetType::Byte),
"char" => Ok(OffsetType::Char),
_ => Err(exceptions::PyValueError::new_err(
"Wrong value for OffsetType, expected one of `byte, char`",
)),
}?))
}
}
type PySplit = (String, Offsets, Option<Vec<PyToken>>);
fn get_splits(
pretok: &PreTokenizedString,
offset_referential: PyOffsetReferential,
offset_type: PyOffsetType,
) -> Vec<PySplit> {
pretok
.get_splits(offset_referential.0, offset_type.0)
.into_iter()
.map(|(s, o, t)| {
(
s.to_owned(),
o,
t.as_ref()
.map(|tokens| tokens.iter().map(|t| t.clone().into()).collect()),
)
})
.collect()
}
fn to_encoding(
pretok: &PreTokenizedString,
type_id: u32,
word_idx: Option<u32>,
) -> PyResult<PyEncoding> {
Ok(ToPyResult(
pretok
.clone()
.into_encoding(word_idx, type_id, tk::OffsetType::Char),
)
.into_py()?
.into())
}
/// PreTokenizedString
///
/// Wrapper over a string, that provides a way to normalize, pre-tokenize, tokenize the
/// underlying string, while keeping track of the alignment information (offsets).
///
/// The PreTokenizedString manages what we call `splits`. Each split represents a substring
/// which is a subpart of the original string, with the relevant offsets and tokens.
///
/// When calling one of the methods used to modify the PreTokenizedString (namely one of
/// `split`, `normalize` or `tokenize`), only the `splits` that don't have any associated
/// tokens will get modified.
///
/// Args:
/// sequence: str:
/// The string sequence used to initialize this PreTokenizedString
#[pyclass(module = "tokenizers", name = "PreTokenizedString")]
pub struct PyPreTokenizedString {
pub(crate) pretok: tk::PreTokenizedString,
}
impl From<PreTokenizedString> for PyPreTokenizedString {
fn from(pretok: PreTokenizedString) -> Self {
Self { pretok }
}
}
impl From<PyPreTokenizedString> for PreTokenizedString {
fn from(pretok: PyPreTokenizedString) -> Self {
pretok.pretok
}
}
#[pymethods]
impl PyPreTokenizedString {
#[new]
#[pyo3(text_signature = "(self, sequence)")]
fn new(s: &str) -> Self {
PreTokenizedString::from(s).into()
}
/// Split the PreTokenizedString using the given `func`
///
/// Args:
/// func: Callable[[index, NormalizedString], List[NormalizedString]]:
/// The function used to split each underlying split.
/// It is expected to return a list of `NormalizedString`, that represent the new
/// splits. If the given `NormalizedString` does not need any splitting, we can
/// just return it directly.
/// In order for the offsets to be tracked accurately, any returned `NormalizedString`
/// should come from calling either `.split` or `.slice` on the received one.
#[pyo3(text_signature = "(self, func)")]
fn split(&mut self, func: &Bound<'_, PyAny>) -> PyResult<()> {
split(&mut self.pretok, func)
}
/// Normalize each split of the `PreTokenizedString` using the given `func`
///
/// Args:
/// func: Callable[[NormalizedString], None]:
/// The function used to normalize each underlying split. This function
/// does not need to return anything, just calling the methods on the provided
/// NormalizedString allow its modification.
#[pyo3(text_signature = "(self, func)")]
fn normalize(&mut self, func: &Bound<'_, PyAny>) -> PyResult<()> {
normalize(&mut self.pretok, func)
}
/// Tokenize each split of the `PreTokenizedString` using the given `func`
///
/// Args:
/// func: Callable[[str], List[Token]]:
/// The function used to tokenize each underlying split. This function must return
/// a list of Token generated from the input str.
#[pyo3(text_signature = "(self, func)")]
fn tokenize(&mut self, func: &Bound<'_, PyAny>) -> PyResult<()> {
tokenize(&mut self.pretok, func)
}
/// Return an Encoding generated from this PreTokenizedString
///
/// Args:
/// type_id: int = 0:
/// The type_id to be used on the generated Encoding.
///
/// word_idx: Optional[int] = None:
/// An optional word index to be used for each token of this Encoding. If provided,
/// all the word indices in the generated Encoding will use this value, instead
/// of the one automatically tracked during pre-tokenization.
///
/// Returns:
/// An Encoding
#[pyo3(signature = (type_id = 0, word_idx = None))]
#[pyo3(text_signature = "(self, type_id=0, word_idx=None)")]
fn to_encoding(&self, type_id: u32, word_idx: Option<u32>) -> PyResult<PyEncoding> {
to_encoding(&self.pretok, type_id, word_idx)
}
/// Get the splits currently managed by the PreTokenizedString
///
/// Args:
/// offset_referential: :obj:`str`
/// Whether the returned splits should have offsets expressed relative
/// to the original string, or the normalized one. choices: "original", "normalized".
///
/// offset_type: :obj:`str`
/// Whether the returned splits should have offsets expressed in bytes or chars.
/// When slicing an str, we usually want to use chars, which is the default value.
/// Now in some cases it might be interesting to get these offsets expressed in bytes,
/// so it is possible to change this here.
    ///             choices: "char", "byte"
///
/// Returns
/// A list of splits
#[pyo3(signature = (
offset_referential = PyOffsetReferential(OffsetReferential::Original),
offset_type = PyOffsetType(OffsetType::Char)
))]
#[pyo3(text_signature = "(self, offset_referential=\"original\", offset_type=\"char\")")]
fn get_splits(
&self,
offset_referential: PyOffsetReferential,
offset_type: PyOffsetType,
) -> Vec<PySplit> {
get_splits(&self.pretok, offset_referential, offset_type)
}
}
#[pyclass(module = "tokenizers", name = "PreTokenizedString")]
#[derive(Clone)]
pub struct PyPreTokenizedStringRefMut {
inner: RefMutContainer<PreTokenizedString>,
}
impl DestroyPtr for PyPreTokenizedStringRefMut {
fn destroy(&mut self) {
self.inner.destroy();
}
}
impl PyPreTokenizedStringRefMut {
pub fn new(pretok: &mut tk::PreTokenizedString) -> RefMutGuard<'_, Self> {
// SAFETY: This is safe because we return a RefMutGuard here.
// The compiler will make sure the &mut stays valid as necessary.
RefMutGuard::new(Self {
inner: RefMutContainer::new(pretok),
})
}
pub fn destroyed_error() -> PyErr {
exceptions::PyException::new_err(
"Cannot use a PreTokenizedStringRefMut outside `pre_tokenize`",
)
}
}
#[pymethods]
impl PyPreTokenizedStringRefMut {
fn split(&mut self, func: &Bound<'_, PyAny>) -> PyResult<()> {
self.inner
.map_mut(|pretok| split(pretok, func))
.ok_or_else(PyPreTokenizedStringRefMut::destroyed_error)?
}
fn normalize(&mut self, func: &Bound<'_, PyAny>) -> PyResult<()> {
self.inner
.map_mut(|pretok| normalize(pretok, func))
.ok_or_else(PyPreTokenizedStringRefMut::destroyed_error)?
}
fn tokenize(&mut self, func: &Bound<'_, PyAny>) -> PyResult<()> {
self.inner
.map_mut(|pretok| tokenize(pretok, func))
.ok_or_else(PyPreTokenizedStringRefMut::destroyed_error)?
}
#[pyo3(signature = (type_id = 0, word_idx = None))]
fn to_encoding(&self, type_id: u32, word_idx: Option<u32>) -> PyResult<PyEncoding> {
self.inner
.map(|pretok| to_encoding(pretok, type_id, word_idx))
.ok_or_else(PyPreTokenizedStringRefMut::destroyed_error)?
}
#[pyo3(signature = (
offset_referential = PyOffsetReferential(OffsetReferential::Original),
offset_type = PyOffsetType(OffsetType::Char)
))]
fn get_splits(
&self,
offset_referential: PyOffsetReferential,
offset_type: PyOffsetType,
) -> PyResult<Vec<PySplit>> {
self.inner
.map(|pretok| get_splits(pretok, offset_referential, offset_type))
.ok_or_else(PyPreTokenizedStringRefMut::destroyed_error)
}
}
| tokenizers/bindings/python/src/utils/pretokenization.rs/0 | {
"file_path": "tokenizers/bindings/python/src/utils/pretokenization.rs",
"repo_id": "tokenizers",
"token_count": 4958
} |
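The same `PreTokenizedString` is exposed to Python, where it backs custom pre-tokenizers written in pure Python. A minimal sketch of driving it from the bindings (exact binding semantics may vary by version); the split callback derives pieces by slicing the received `NormalizedString`, which is what keeps offset tracking intact:

```py
from tokenizers import PreTokenizedString

def whitespace_split(i, normalized):
    # Slice the incoming NormalizedString rather than building new strings,
    # so the original offsets are preserved.
    text = str(normalized)
    pieces, start = [], 0
    for j, ch in enumerate(text):
        if ch == " ":
            pieces.append(normalized[start:j])
            start = j + 1
    pieces.append(normalized[start:len(text)])
    return pieces

pretok = PreTokenizedString("Hello world")
print(pretok.get_splits())  # [('Hello world', (0, 11), None)]
pretok.split(whitespace_split)
print([s for s, _, _ in pretok.get_splits()])  # ['Hello', 'world']
```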
from tokenizers import Tokenizer
from ..utils import data_dir, doc_pipeline_bert_tokenizer, doc_wiki_tokenizer
disable_printing = True
original_print = print
def print(*args, **kwargs):
if not disable_printing:
original_print(*args, **kwargs)
class TestPipeline:
def test_pipeline(self, doc_wiki_tokenizer):
try:
# START reload_tokenizer
from tokenizers import Tokenizer
tokenizer = Tokenizer.from_file("data/tokenizer-wiki.json")
# END reload_tokenizer
except Exception:
tokenizer = Tokenizer.from_file(doc_wiki_tokenizer)
# START setup_normalizer
from tokenizers import normalizers
from tokenizers.normalizers import NFD, StripAccents
normalizer = normalizers.Sequence([NFD(), StripAccents()])
# END setup_normalizer
# START test_normalizer
normalizer.normalize_str("Héllò hôw are ü?")
# "Hello how are u?"
# END test_normalizer
assert normalizer.normalize_str("Héllò hôw are ü?") == "Hello how are u?"
# START replace_normalizer
tokenizer.normalizer = normalizer
# END replace_normalizer
# START setup_pre_tokenizer
from tokenizers.pre_tokenizers import Whitespace
pre_tokenizer = Whitespace()
pre_tokenizer.pre_tokenize_str("Hello! How are you? I'm fine, thank you.")
# [("Hello", (0, 5)), ("!", (5, 6)), ("How", (7, 10)), ("are", (11, 14)), ("you", (15, 18)),
# ("?", (18, 19)), ("I", (20, 21)), ("'", (21, 22)), ('m', (22, 23)), ("fine", (24, 28)),
# (",", (28, 29)), ("thank", (30, 35)), ("you", (36, 39)), (".", (39, 40))]
# END setup_pre_tokenizer
assert pre_tokenizer.pre_tokenize_str("Hello! How are you? I'm fine, thank you.") == [
("Hello", (0, 5)),
("!", (5, 6)),
("How", (7, 10)),
("are", (11, 14)),
("you", (15, 18)),
("?", (18, 19)),
("I", (20, 21)),
("'", (21, 22)),
("m", (22, 23)),
("fine", (24, 28)),
(",", (28, 29)),
("thank", (30, 35)),
("you", (36, 39)),
(".", (39, 40)),
]
# START combine_pre_tokenizer
from tokenizers import pre_tokenizers
from tokenizers.pre_tokenizers import Digits
pre_tokenizer = pre_tokenizers.Sequence([Whitespace(), Digits(individual_digits=True)])
pre_tokenizer.pre_tokenize_str("Call 911!")
# [("Call", (0, 4)), ("9", (5, 6)), ("1", (6, 7)), ("1", (7, 8)), ("!", (8, 9))]
# END combine_pre_tokenizer
assert pre_tokenizer.pre_tokenize_str("Call 911!") == [
("Call", (0, 4)),
("9", (5, 6)),
("1", (6, 7)),
("1", (7, 8)),
("!", (8, 9)),
]
# START replace_pre_tokenizer
tokenizer.pre_tokenizer = pre_tokenizer
# END replace_pre_tokenizer
# START setup_processor
from tokenizers.processors import TemplateProcessing
tokenizer.post_processor = TemplateProcessing(
single="[CLS] $A [SEP]",
pair="[CLS] $A [SEP] $B:1 [SEP]:1",
special_tokens=[("[CLS]", 1), ("[SEP]", 2)],
)
# END setup_processor
# START test_decoding
output = tokenizer.encode("Hello, y'all! How are you 😁 ?")
print(output.ids)
# [1, 27253, 16, 93, 11, 5097, 5, 7961, 5112, 6218, 0, 35, 2]
tokenizer.decode([1, 27253, 16, 93, 11, 5097, 5, 7961, 5112, 6218, 0, 35, 2])
# "Hello , y ' all ! How are you ?"
# END test_decoding
assert output.ids == [1, 27253, 16, 93, 11, 5097, 5, 7961, 5112, 6218, 0, 35, 2]
assert (
tokenizer.decode([1, 27253, 16, 93, 11, 5097, 5, 7961, 5112, 6218, 0, 35, 2])
== "Hello , y ' all ! How are you ?"
)
@staticmethod
def slow_train():
# START bert_setup_tokenizer
from tokenizers import Tokenizer
from tokenizers.models import WordPiece
bert_tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))
# END bert_setup_tokenizer
# START bert_setup_normalizer
from tokenizers import normalizers
from tokenizers.normalizers import NFD, Lowercase, StripAccents
bert_tokenizer.normalizer = normalizers.Sequence([NFD(), Lowercase(), StripAccents()])
# END bert_setup_normalizer
# START bert_setup_pre_tokenizer
from tokenizers.pre_tokenizers import Whitespace
bert_tokenizer.pre_tokenizer = Whitespace()
# END bert_setup_pre_tokenizer
# START bert_setup_processor
from tokenizers.processors import TemplateProcessing
bert_tokenizer.post_processor = TemplateProcessing(
single="[CLS] $A [SEP]",
pair="[CLS] $A [SEP] $B:1 [SEP]:1",
special_tokens=[
("[CLS]", 1),
("[SEP]", 2),
],
)
# END bert_setup_processor
# START bert_train_tokenizer
from tokenizers.trainers import WordPieceTrainer
trainer = WordPieceTrainer(vocab_size=30522, special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
files = [f"data/wikitext-103-raw/wiki.{split}.raw" for split in ["test", "train", "valid"]]
bert_tokenizer.train(files, trainer)
bert_tokenizer.save("data/bert-wiki.json")
# END bert_train_tokenizer
def test_bert_example(self, doc_pipeline_bert_tokenizer):
try:
bert_tokenizer = Tokenizer.from_file("data/bert-wiki.json")
except Exception:
bert_tokenizer = Tokenizer.from_file(doc_pipeline_bert_tokenizer)
# START bert_test_decoding
output = bert_tokenizer.encode("Welcome to the 🤗 Tokenizers library.")
print(output.tokens)
# ["[CLS]", "welcome", "to", "the", "[UNK]", "tok", "##eni", "##zer", "##s", "library", ".", "[SEP]"]
bert_tokenizer.decode(output.ids)
# "welcome to the tok ##eni ##zer ##s library ."
# END bert_test_decoding
assert bert_tokenizer.decode(output.ids) == "welcome to the tok ##eni ##zer ##s library ."
# START bert_proper_decoding
from tokenizers import decoders
bert_tokenizer.decoder = decoders.WordPiece()
bert_tokenizer.decode(output.ids)
# "welcome to the tokenizers library."
# END bert_proper_decoding
assert bert_tokenizer.decode(output.ids) == "welcome to the tokenizers library."
if __name__ == "__main__":
import os
from urllib import request
from zipfile import ZipFile
disable_printing = False
if not os.path.isdir("data/wikitext-103-raw"):
print("Downloading wikitext-103...")
wiki_text, _ = request.urlretrieve(
"https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip"
)
with ZipFile(wiki_text, "r") as z:
print("Unzipping in data...")
z.extractall("data")
print("Now training...")
TestPipeline.slow_train()
| tokenizers/bindings/python/tests/documentation/test_pipeline.py/0 | {
"file_path": "tokenizers/bindings/python/tests/documentation/test_pipeline.py",
"repo_id": "tokenizers",
"token_count": 3351
} |
# Encode Inputs
<tokenizerslangcontent>
<python>
These types represent all the different kinds of input that a [`~tokenizers.Tokenizer`] accepts
when using [`~tokenizers.Tokenizer.encode_batch`].
## TextEncodeInput[[[[tokenizers.TextEncodeInput]]]]
<code>tokenizers.TextEncodeInput</code>
Represents a textual input for encoding. Can be either:
- A single sequence: [TextInputSequence](/docs/tokenizers/api/input-sequences#tokenizers.TextInputSequence)
- A pair of sequences:
- A Tuple of [TextInputSequence](/docs/tokenizers/api/input-sequences#tokenizers.TextInputSequence)
- Or a List of [TextInputSequence](/docs/tokenizers/api/input-sequences#tokenizers.TextInputSequence) of size 2
alias of `Union[str, Tuple[str, str], List[str]]`.
## PreTokenizedEncodeInput[[[[tokenizers.PreTokenizedEncodeInput]]]]
<code>tokenizers.PreTokenizedEncodeInput</code>
Represents a pre-tokenized input for encoding. Can be either:
- A single sequence: [PreTokenizedInputSequence](/docs/tokenizers/api/input-sequences#tokenizers.PreTokenizedInputSequence)
- A pair of sequences:
- A Tuple of [PreTokenizedInputSequence](/docs/tokenizers/api/input-sequences#tokenizers.PreTokenizedInputSequence)
- Or a List of [PreTokenizedInputSequence](/docs/tokenizers/api/input-sequences#tokenizers.PreTokenizedInputSequence) of size 2
alias of `Union[List[str], Tuple[str], Tuple[Union[List[str], Tuple[str]], Union[List[str], Tuple[str]]], List[Union[List[str], Tuple[str]]]]`.
## EncodeInput[[[[tokenizers.EncodeInput]]]]
<code>tokenizers.EncodeInput</code>
Represents all the possible types of input for encoding. Can be:
- When `is_pretokenized=False`: [TextEncodeInput](#tokenizers.TextEncodeInput)
- When `is_pretokenized=True`: [PreTokenizedEncodeInput](#tokenizers.PreTokenizedEncodeInput)
alias of `Union[str, Tuple[str, str], List[str], Tuple[str], Tuple[Union[List[str], Tuple[str]], Union[List[str], Tuple[str]]], List[Union[List[str], Tuple[str]]]]`.
</python>
<rust>
The Rust API Reference is available directly on the [Docs.rs](https://docs.rs/tokenizers/latest/tokenizers/) website.
</rust>
<node>
The node API has not been documented yet.
</node>
</tokenizerslangcontent> | tokenizers/docs/source-doc-builder/api/encode-inputs.mdx/0 | {
"file_path": "tokenizers/docs/source-doc-builder/api/encode-inputs.mdx",
"repo_id": "tokenizers",
"token_count": 716
} |
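In practice these aliases mean `encode` and `encode_batch` accept the following shapes (a trained `Tokenizer` instance named `tokenizer` is assumed):

```py
# TextEncodeInput: single sequences and pairs
tokenizer.encode("A single sequence")
tokenizer.encode("A sequence", "And its pair")
tokenizer.encode_batch(["first", ("a", "b"), ["c", "d"]])  # ["c", "d"] is a pair

# PreTokenizedEncodeInput: the words are already split
tokenizer.encode(["A", "pre", "tokenized", "sequence"], is_pretokenized=True)
tokenizer.encode_batch(
    [["A", "first"], (["a", "pair"], ["of", "sequences"])],
    is_pretokenized=True,
)
```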
from collections import defaultdict, abc
from typing import cast
from docutils import nodes
from docutils.parsers.rst import Directive
import sphinx
from sphinx.locale import _
from sphinx.util.docutils import SphinxDirective
from sphinx.errors import ExtensionError, NoUri
from conf import languages as LANGUAGES
logger = sphinx.util.logging.getLogger(__name__)
GLOBALNAME = "$GLOBAL$"
def update(d, u):
for k, v in u.items():
if isinstance(v, abc.Mapping):
d[k] = update(d.get(k, {}), v)
else:
d[k] = v
return d
class EntityNode(nodes.General, nodes.Element):
pass
class EntitiesNode(nodes.General, nodes.Element):
pass
class AllEntities:
def __init__(self):
self.entities = defaultdict(dict)
@classmethod
def install(cls, env):
if not hasattr(env, "entity_all_entities"):
entities = cls()
env.entity_all_entities = entities
return env.entity_all_entities
def merge(self, other):
self.entities.update(other.entities)
def purge(self, docname):
for env_docname in [GLOBALNAME, docname]:
self.entities[env_docname] = dict(
[
(name, entity)
for name, entity in self.entities[env_docname].items()
if entity["docname"] != docname
]
)
    def _extract_options(self, nodes):
        pass
def _add_entities(self, entities, language, is_global, docname):
scope = GLOBALNAME if is_global else docname
for entity in entities:
name = f'{language}-{entity["name"]}'
content = entity["content"]
if name in self.entities[scope]:
logger.warning(
f'Entity "{name}" has already been defined{" globally" if is_global else ""}',
location=docname,
)
self.entities[scope][name] = {"docname": docname, "content": content}
def _extract_global(self, nodes):
for node in nodes:
if node.tagname != "field":
raise Exception(f"Expected a field, found {node.tagname}")
name, _ = node.children
if name.tagname != "field_name":
raise Exception(f"Expected a field name here, found {name_node.tagname}")
if str(name.children[0]) == "global":
return True
def _extract_entities(self, nodes):
entities = []
for node in nodes:
if node.tagname != "definition_list_item":
raise Exception(f"Expected a list item here, found {node.tagname}")
name_node, content_node = node.children
if name_node.tagname != "term":
raise Exception(f"Expected a term here, found {name_node.tagname}")
if content_node.tagname != "definition":
raise Exception(f"Expected a definition here, found {content_node.tagname}")
name = str(name_node.children[0])
if len(content_node.children) == 1 and content_node.children[0].tagname == "paragraph":
content = content_node.children[0].children[0]
else:
content = content_node
entities.append({"name": name, "content": content})
return entities
def extract(self, node, docname):
is_global = False
entities = []
language = None
for node in node.children:
if language is None and node.tagname != "paragraph":
raise Exception(f"Expected language name:\n.. entities:: <LANGUAGE>")
elif language is None and node.tagname == "paragraph":
language = str(node.children[0])
if language not in LANGUAGES:
raise Exception(
f'Unknown language "{language}. Might be missing a newline after language"'
)
elif node.tagname == "field_list":
is_global = self._extract_global(node.children)
elif node.tagname == "definition_list":
entities.extend(self._extract_entities(node.children))
else:
raise Exception(f"Expected a list of terms/options, found {node.tagname}")
self._add_entities(entities, language, is_global, docname)
def resolve_pendings(self, app):
env = app.builder.env
updates = defaultdict(dict)
for env_docname in self.entities.keys():
for name, entity in self.entities[env_docname].items():
docname = entity["docname"]
node = entity["content"]
for node in node.traverse(sphinx.addnodes.pending_xref):
contnode = cast(nodes.TextElement, node[0].deepcopy())
newnode = None
typ = node["reftype"]
target = node["reftarget"]
refdoc = node.get("refdoc", docname)
domain = None
try:
if "refdomain" in node and node["refdomain"]:
# let the domain try to resolve the reference
try:
domain = env.domains[node["refdomain"]]
except KeyError as exc:
raise NoUri(target, typ) from exc
newnode = domain.resolve_xref(
env, refdoc, app.builder, typ, target, node, contnode
)
except NoUri:
newnode = contnode
updates[env_docname][name] = {
"docname": docname,
"content": newnode or contnode,
}
update(self.entities, updates)
def get(self, language, name, docname):
name = f"{language}-{name}"
if name in self.entities[docname]:
return self.entities[docname][name]
elif name in self.entities[GLOBALNAME]:
return self.entities[GLOBALNAME][name]
else:
return None
class EntitiesDirective(SphinxDirective):
has_content = True
def run(self):
content = nodes.definition_list()
self.state.nested_parse(self.content, self.content_offset, content)
try:
entities = AllEntities.install(self.env)
entities.extract(content, self.env.docname)
except Exception as err:
raise self.error(f'Malformed directive "entities": {err}')
return []
def entity_role(name, rawtext, text, lineno, inliner, options={}, content=[]):
node = EntityNode()
node.entity = text
return [node], []
def process_entity_nodes(app, doctree, docname):
""" Replace all the entities by their content """
env = app.builder.env
entities = AllEntities.install(env)
entities.resolve_pendings(app)
language = None
try:
language = next(l for l in LANGUAGES if l in app.tags)
except Exception:
logger.warning(f"No language tag specified, not resolving entities in {docname}")
for node in doctree.traverse(EntityNode):
if language is None:
node.replace_self(nodes.Text(_(node.entity), _(node.entity)))
else:
entity = entities.get(language, node.entity, docname)
if entity is None:
node.replace_self(nodes.Text(_(node.entity), _(node.entity)))
logger.warning(f'Entity "{node.entity}" has not been defined', location=node)
else:
node.replace_self(entity["content"])
def purge_entities(app, env, docname):
""" Purge any entity that comes from the given docname """
entities = AllEntities.install(env)
entities.purge(docname)
def merge_entities(app, env, docnames, other):
""" Merge multiple environment entities """
entities = AllEntities.install(env)
other_entities = AllEntities.install(other)
entities.merge(other_entities)
def setup(app):
app.add_node(EntityNode)
app.add_node(EntitiesNode)
app.add_directive("entities", EntitiesDirective)
app.add_role("entity", entity_role)
app.connect("doctree-resolved", process_entity_nodes)
app.connect("env-merge-info", merge_entities)
app.connect("env-purge-doc", purge_entities)
return {
"version": "0.1",
"parallel_read_safe": True,
"parallel_write_safe": True,
}
| tokenizers/docs/source/_ext/entities.py/0 | {
"file_path": "tokenizers/docs/source/_ext/entities.py",
"repo_id": "tokenizers",
"token_count": 4032
} |
.. entities:: python
:global:
class
class
classmethod
class method
Tokenizer
:class:`~tokenizers.Tokenizer`
Tokenizer.train
:meth:`~tokenizers.Tokenizer.train`
Tokenizer.save
:meth:`~tokenizers.Tokenizer.save`
Tokenizer.from_file
:meth:`~tokenizers.Tokenizer.from_file`
Tokenizer.encode
:meth:`~tokenizers.Tokenizer.encode`
Tokenizer.encode_batch
:meth:`~tokenizers.Tokenizer.encode_batch`
Tokenizer.decode
:meth:`~tokenizers.Tokenizer.decode`
Tokenizer.decode_batch
:meth:`~tokenizers.Tokenizer.decode_batch`
Tokenizer.token_to_id
:meth:`~tokenizers.Tokenizer.token_to_id`
Tokenizer.enable_padding
:meth:`~tokenizers.Tokenizer.enable_padding`
Encoding
:class:`~tokenizers.Encoding`
TemplateProcessing
:class:`~tokenizers.processors.TemplateProcessing`
Normalizer
:class:`~tokenizers.normalizers.Normalizer`
normalizers.Sequence
:class:`~tokenizers.normalizers.Sequence`
pre_tokenizers.Whitespace
:class:`~tokenizers.pre_tokenizers.Whitespace`
PreTokenizer
:class:`~tokenizers.pre_tokenizers.PreTokenizer`
models.BPE
:class:`~tokenizers.models.BPE`
models.Unigram
:class:`~tokenizers.models.Unigram`
models.WordLevel
:class:`~tokenizers.models.WordLevel`
models.WordPiece
:class:`~tokenizers.models.WordPiece`
Decoder
:class:`~tokenizers.decoders.Decoder`
.. entities:: rust
:global:
class
struct
classmethod
static method
Tokenizer
:rust_struct:`~tokenizers::tokenizer::Tokenizer`
Tokenizer.train
:rust_meth:`~tokenizers::tokenizer::Tokenizer::train`
Tokenizer.save
:rust_meth:`~tokenizers::tokenizer::Tokenizer::save`
Tokenizer.from_file
:rust_meth:`~tokenizers::tokenizer::Tokenizer::from_file`
Tokenizer.encode
:rust_meth:`~tokenizers::tokenizer::Tokenizer::encode`
Tokenizer.encode_batch
:rust_meth:`~tokenizers::tokenizer::Tokenizer::encode_batch`
Tokenizer.decode
:rust_meth:`~tokenizers::tokenizer::Tokenizer::decode`
Tokenizer.decode_batch
:rust_meth:`~tokenizers::tokenizer::Tokenizer::decode_batch`
Tokenizer.token_to_id
:rust_meth:`~tokenizers::tokenizer::Tokenizer::token_to_id`
Tokenizer.enable_padding
:rust_meth:`~tokenizers::tokenizer::Tokenizer::enable_padding`
Encoding
:rust_struct:`~tokenizers::tokenizer::Encoding`
TemplateProcessing
:rust_struct:`~tokenizers::processors::template::TemplateProcessing`
Normalizer
:rust_trait:`~tokenizers::tokenizer::Normalizer`
normalizers.Sequence
:rust_struct:`~tokenizers::normalizers::utils::Sequence`
pre_tokenizers.Whitespace
        :rust_struct:`~tokenizers::pre_tokenizers::whitespace::Whitespace`
PreTokenizer
:rust_trait:`~tokenizers::tokenizer::PreTokenizer`
models.BPE
:rust_struct:`~tokenizers::models::bpe::BPE`
models.Unigram
:rust_struct:`~tokenizers::models::unigram::Unigram`
models.WordLevel
:rust_struct:`~tokenizers::models::wordlevel::WordLevel`
models.WordPiece
:rust_struct:`~tokenizers::models::wordpiece::WordPiece`
Decoder
:rust_trait:`~tokenizers::tokenizer::Decoder`
.. entities:: node
:global:
class
class
classmethod
static method
Tokenizer
:obj:`Tokenizer`
Tokenizer.train
:obj:`Tokenizer.train()`
Tokenizer.save
:obj:`Tokenizer.save()`
Tokenizer.from_file
:obj:`Tokenizer.fromFile()`
Tokenizer.encode
:obj:`Tokenizer.encode()`
Tokenizer.encode_batch
:obj:`Tokenizer.encodeBatch()`
Tokenizer.decode
:obj:`Tokenizer.decode()`
Tokenizer.decode_batch
:obj:`Tokenizer.decodeBatch()`
Tokenizer.token_to_id
:obj:`Tokenizer.tokenToId()`
Tokenizer.enable_padding
:obj:`Tokenizer.setPadding()`
Encoding
:obj:`Encoding`
TemplateProcessing
:obj:`TemplateProcessing`
Normalizer
:obj:`Normalizer`
normalizers.Sequence
:obj:`Sequence`
pre_tokenizers.Whitespace
:obj:`Whitespace`
PreTokenizer
:obj:`PreTokenizer`
models.BPE
:obj:`BPE`
models.Unigram
:obj:`Unigram`
models.WordLevel
:obj:`WordLevel`
models.WordPiece
:obj:`WordPiece`
Decoder
:obj:`Decoder`
| tokenizers/docs/source/entities.inc/0 | {
"file_path": "tokenizers/docs/source/entities.inc",
"repo_id": "tokenizers",
"token_count": 2078
} |
#[macro_use]
extern crate criterion;
mod common;
use std::fs::File;
use std::io::{BufRead, BufReader};
use std::path::Path;
use criterion::Criterion;
use tokenizers::models::bpe::{BpeTrainerBuilder, BPE};
use tokenizers::models::TrainerWrapper;
use tokenizers::pre_tokenizers::byte_level::ByteLevel;
use tokenizers::pre_tokenizers::whitespace::Whitespace;
use tokenizers::tokenizer::{AddedToken, EncodeInput};
use tokenizers::Tokenizer;
use common::{iter_bench_encode, iter_bench_encode_batch, iter_bench_train};
use std::ops::Deref;
static BATCH_SIZE: usize = 1_000;
fn create_gpt2_tokenizer(bpe: BPE) -> Tokenizer {
let mut tokenizer = Tokenizer::new(bpe);
tokenizer.with_pre_tokenizer(Some(ByteLevel::default()));
tokenizer.with_decoder(Some(ByteLevel::default()));
tokenizer.add_tokens(&[AddedToken::from("ing", false).single_word(false)]);
tokenizer.add_special_tokens(&[AddedToken::from("[ENT]", true).single_word(true)]);
tokenizer
}
fn bench_gpt2(c: &mut Criterion) {
let bpe = BPE::from_file("data/gpt2-vocab.json", "data/gpt2-merges.txt")
.build()
.unwrap();
let tokenizer = create_gpt2_tokenizer(bpe);
let mut lines: Vec<EncodeInput> = vec![];
let mut batches: Vec<Vec<EncodeInput>> = vec![vec![]];
for line in BufReader::new(File::open(Path::new("data/big.txt")).unwrap()).lines() {
let line: EncodeInput = line.unwrap().into();
lines.push(line.clone());
if batches.last().unwrap().len() >= BATCH_SIZE {
batches.push(vec![]);
}
batches.last_mut().unwrap().push(line);
}
c.bench_function("BPE GPT2 encode", |b| {
b.iter_custom(|iters| iter_bench_encode(iters, tokenizer.deref(), &lines))
});
c.bench_function("BPE GPT2 encode batch", |b| {
b.iter_custom(|iters| iter_bench_encode_batch(iters, tokenizer.deref(), &batches))
});
let bpe = BPE::from_file("data/gpt2-vocab.json", "data/gpt2-merges.txt")
.cache_capacity(0)
.build()
.unwrap();
let tokenizer = create_gpt2_tokenizer(bpe);
c.bench_function("BPE GPT2 encode, no cache", |b| {
b.iter_custom(|iters| iter_bench_encode(iters, &tokenizer, &lines))
});
c.bench_function("BPE GPT2 encode batch, no cache", |b| {
b.iter_custom(|iters| iter_bench_encode_batch(iters, &tokenizer, &batches))
});
}
fn bench_train(c: &mut Criterion) {
let mut trainer: TrainerWrapper = BpeTrainerBuilder::default()
.show_progress(false)
.build()
.into();
let mut tokenizer = Tokenizer::new(BPE::default()).into_inner();
tokenizer.with_pre_tokenizer(Some(Whitespace {}));
c.bench_function("BPE Train vocabulary (small)", |b| {
b.iter_custom(|iters| {
iter_bench_train(
iters,
&mut tokenizer,
&mut trainer,
vec!["data/small.txt".to_string()],
)
})
});
let mut tokenizer = Tokenizer::new(BPE::default()).into_inner();
tokenizer.with_pre_tokenizer(Some(Whitespace {}));
c.bench_function("BPE Train vocabulary (big)", |b| {
b.iter_custom(|iters| {
iter_bench_train(
iters,
&mut tokenizer,
&mut trainer,
vec!["data/big.txt".to_string()],
)
})
});
}
criterion_group! {
name = benches;
config = Criterion::default().sample_size(20);
targets = bench_gpt2
}
criterion_group! {
name = benches_train;
config = Criterion::default().sample_size(10);
targets = bench_train
}
criterion_main!(benches, benches_train);
| tokenizers/tokenizers/benches/bpe_benchmark.rs/0 | {
"file_path": "tokenizers/tokenizers/benches/bpe_benchmark.rs",
"repo_id": "tokenizers",
"token_count": 1631
} |
use crate::tokenizer::{Decoder, Result};
use serde::{Deserialize, Serialize};
#[derive(Deserialize, Clone, Debug, Serialize, Default)]
/// Strip is a simple decoder that removes up to `start` occurrences of
/// `content` from the beginning of each token, and up to `stop` occurrences
/// from its end
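///
/// For example (a sketch mirroring the unit tests below):
///
/// ```ignore
/// let decoder = Strip::new('H', 1, 0);
/// let out = decoder.decode_chain(vec!["Hey".into(), "HHH".into()]).unwrap();
/// assert_eq!(out, vec!["ey", "HH"]);
/// ```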
#[serde(tag = "type")]
#[non_exhaustive]
pub struct Strip {
pub content: char,
pub start: usize,
pub stop: usize,
}
impl Strip {
pub fn new(content: char, start: usize, stop: usize) -> Self {
Self {
content,
start,
stop,
}
}
}
impl Decoder for Strip {
fn decode_chain(&self, tokens: Vec<String>) -> Result<Vec<String>> {
Ok(tokens
.into_iter()
.map(|token| {
let chars: Vec<char> = token.chars().collect();
let mut start_cut = 0;
for (i, &c) in chars.iter().enumerate().take(self.start) {
if c == self.content {
start_cut = i + 1;
continue;
} else {
break;
}
}
let mut stop_cut = chars.len();
for i in 0..self.stop {
let index = chars.len() - i - 1;
if chars[index] == self.content {
stop_cut = index;
continue;
} else {
break;
}
}
let new_token: String = chars[start_cut..stop_cut].iter().collect();
new_token
})
.collect())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn decode() {
let decoder = Strip::new('H', 1, 0);
let res = decoder
.decode_chain(vec!["Hey".into(), " friend!".into(), "HHH".into()])
.unwrap();
assert_eq!(res, vec!["ey", " friend!", "HH"]);
let decoder = Strip::new('y', 0, 1);
let res = decoder
.decode_chain(vec!["Hey".into(), " friend!".into()])
.unwrap();
assert_eq!(res, vec!["He", " friend!"]);
}
}
| tokenizers/tokenizers/src/decoders/strip.rs/0 | {
"file_path": "tokenizers/tokenizers/src/decoders/strip.rs",
"repo_id": "tokenizers",
"token_count": 1217
} |
use super::{super::OrderedVocabIter, WordLevel, WordLevelBuilder};
use serde::{
de::{MapAccess, Visitor},
ser::SerializeStruct,
Deserialize, Deserializer, Serialize, Serializer,
};
use std::collections::HashSet;
impl Serialize for WordLevel {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let mut model = serializer.serialize_struct("WordLevel", 3)?;
let ordered_vocab = OrderedVocabIter::new(&self.vocab_r);
model.serialize_field("type", "WordLevel")?;
model.serialize_field("vocab", &ordered_vocab)?;
model.serialize_field("unk_token", &self.unk_token)?;
model.end()
}
}
impl<'de> Deserialize<'de> for WordLevel {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
deserializer.deserialize_struct(
"WordLevel",
&["type", "vocab", "unk_token"],
WordLevelVisitor,
)
}
}
struct WordLevelVisitor;
impl<'de> Visitor<'de> for WordLevelVisitor {
type Value = WordLevel;
fn expecting(&self, fmt: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(fmt, "struct WordLevel")
}
fn visit_map<V>(self, mut map: V) -> std::result::Result<Self::Value, V::Error>
where
V: MapAccess<'de>,
{
let mut builder = WordLevelBuilder::new();
let mut missing_fields = vec![
// for backward compatibility, the "type" field is not mandatory
"unk_token",
"vocab",
]
.into_iter()
.collect::<HashSet<_>>();
while let Some(key) = map.next_key::<String>()? {
match key.as_ref() {
"vocab" => builder = builder.vocab(map.next_value()?),
"unk_token" => builder = builder.unk_token(map.next_value()?),
"type" => match map.next_value()? {
"WordLevel" => {}
u => {
return Err(serde::de::Error::invalid_value(
serde::de::Unexpected::Str(u),
&"WordLevel",
))
}
},
_ => {}
}
missing_fields.remove::<str>(&key);
}
if !missing_fields.is_empty() {
Err(serde::de::Error::missing_field(
missing_fields.iter().next().unwrap(),
))
} else {
Ok(builder.build().map_err(serde::de::Error::custom)?)
}
}
}
#[cfg(test)]
mod tests {
use crate::models::wordlevel::{Vocab, WordLevel, WordLevelBuilder};
#[test]
fn serde() {
let wl = WordLevel::default();
let wl_s = r#"{"type":"WordLevel","vocab":{},"unk_token":"<unk>"}"#;
assert_eq!(serde_json::to_string(&wl).unwrap(), wl_s);
assert_eq!(serde_json::from_str::<WordLevel>(wl_s).unwrap(), wl);
}
#[test]
fn incomplete_vocab() {
let vocab: Vocab = [("<unk>".into(), 0), ("b".into(), 2)]
.iter()
.cloned()
.collect();
let wordlevel = WordLevelBuilder::default()
.vocab(vocab)
.unk_token("<unk>".to_string())
.build()
.unwrap();
let wl_s = r#"{"type":"WordLevel","vocab":{"<unk>":0,"b":2},"unk_token":"<unk>"}"#;
assert_eq!(serde_json::to_string(&wordlevel).unwrap(), wl_s);
assert_eq!(serde_json::from_str::<WordLevel>(wl_s).unwrap(), wordlevel);
}
#[test]
fn deserialization_should_fail() {
let missing_unk = r#"{"type":"WordLevel","vocab":{}}"#;
assert!(serde_json::from_str::<WordLevel>(missing_unk)
.unwrap_err()
.to_string()
.starts_with("missing field `unk_token`"));
let wrong_type = r#"{"type":"WordPiece","vocab":{}}"#;
assert!(serde_json::from_str::<WordLevel>(wrong_type)
.unwrap_err()
.to_string()
.starts_with("invalid value: string \"WordPiece\", expected WordLevel"));
}
}
| tokenizers/tokenizers/src/models/wordlevel/serialization.rs/0 | {
"file_path": "tokenizers/tokenizers/src/models/wordlevel/serialization.rs",
"repo_id": "tokenizers",
"token_count": 2084
} |
use serde::{Deserialize, Serialize};
use crate::tokenizer::{PreTokenizedString, PreTokenizer, Result, SplitDelimiterBehavior};
use crate::utils::macro_rules_attribute;
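/// Splits the input on a single character delimiter, discarding the
/// delimiter itself. For example (a sketch): with delimiter `'-'`, the
/// input `"the-final-countdown"` pre-tokenizes into
/// `["the", "final", "countdown"]`.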
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
#[non_exhaustive]
#[macro_rules_attribute(impl_serde_type!)]
pub struct CharDelimiterSplit {
pub delimiter: char,
}
impl CharDelimiterSplit {
pub fn new(delimiter: char) -> Self {
Self { delimiter }
}
}
impl PreTokenizer for CharDelimiterSplit {
fn pre_tokenize(&self, pretokenized: &mut PreTokenizedString) -> Result<()> {
// TODO: Maybe add the option to specify the behavior
pretokenized.split(|_, normalized| {
normalized.split(self.delimiter, SplitDelimiterBehavior::Removed)
})
}
}
| tokenizers/tokenizers/src/pre_tokenizers/delimiter.rs/0 | {
"file_path": "tokenizers/tokenizers/src/pre_tokenizers/delimiter.rs",
"repo_id": "tokenizers",
"token_count": 296
} |
use super::{
normalizer::Range, Model, NormalizedString, Normalizer, Offsets, PreTokenizedString, Token,
};
use aho_corasick::{AhoCorasick, AhoCorasickBuilder, MatchKind};
use regex::Regex;
use serde::{ser::SerializeSeq, Deserialize, Serialize, Serializer};
use std::collections::{HashMap, HashSet};
/// Represent a token added by the user on top of the existing Model vocabulary.
/// AddedToken can be configured to specify the behavior they should have in various situations
/// like:
/// - Whether they should only match single words
/// - Whether to include any whitespace on its left or right
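///
/// For example (a sketch using the builder methods defined below):
///
/// ```ignore
/// let token = AddedToken::from("[ENT]", true)
///     .single_word(true)
///     .lstrip(true)
///     .rstrip(true);
/// ```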
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub struct AddedToken {
/// The content of the added token
pub content: String,
/// Whether this token must be a single word or can break words
pub single_word: bool,
/// Whether this token should strip whitespaces on its left
pub lstrip: bool,
/// Whether this token should strip whitespaces on its right
pub rstrip: bool,
/// Whether this token should be normalized
pub normalized: bool,
/// Whether this token is special
pub special: bool,
}
impl AddedToken {
/// Build this token from the given content, specifying if it is intended to be a
/// special token. Special tokens are not normalized by default.
pub fn from<S: Into<String>>(content: S, special: bool) -> Self {
Self {
content: content.into(),
normalized: !special,
special,
..Default::default()
}
}
/// Specify whether this token should only match on whole single words, and never
/// part of a word.
#[must_use]
pub fn single_word(mut self, single_word: bool) -> Self {
self.single_word = single_word;
self
}
/// Specify whether this token should include all the whitespaces on its left, in
/// order to strip them out.
#[must_use]
pub fn lstrip(mut self, lstrip: bool) -> Self {
self.lstrip = lstrip;
self
}
/// Specify whether this token should include all the whitespaces on its right, in
/// order to strip them out.
#[must_use]
pub fn rstrip(mut self, rstrip: bool) -> Self {
self.rstrip = rstrip;
self
}
/// Specify whether this token should be normalized and match against its normalized
/// version in the input text.
#[must_use]
pub fn normalized(mut self, normalized: bool) -> Self {
self.normalized = normalized;
self
}
/// Specify whether this token is special, meaning it should be skipped when decoding
#[must_use]
pub fn special(mut self, special: bool) -> Self {
self.special = special;
self
}
}
impl Default for AddedToken {
fn default() -> Self {
Self {
content: String::new(),
single_word: false,
lstrip: false,
rstrip: false,
normalized: true,
special: false,
}
}
}
// An AddedToken hashes on its content only, so an existing entry can be updated when other fields change
impl std::hash::Hash for AddedToken {
fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
self.content.hash(state);
}
}
type MatchingSet = (AhoCorasick, Vec<u32>);
lazy_static! {
static ref STARTS_WITH_WORD: Regex = Regex::new(r"^\w").unwrap();
static ref ENDS_WITH_WORD: Regex = Regex::new(r"\w$").unwrap();
static ref RIGHTMOST_SPACE_AT_START: Regex = Regex::new(r"^\s*").unwrap();
static ref LEFTMOST_SPACE_AT_END: Regex = Regex::new(r"\s*$").unwrap();
}
fn ends_with_word(sentence: &str) -> bool {
ENDS_WITH_WORD.is_match(sentence)
}
fn starts_with_word(sentence: &str) -> bool {
STARTS_WITH_WORD.is_match(sentence)
}
fn space_leftmost_at_end(sentence: &str) -> usize {
if let Some(match_) = LEFTMOST_SPACE_AT_END.find(sentence) {
match_.start()
} else {
sentence.len()
}
}
fn space_rightmost_at_start(sentence: &str) -> usize {
if let Some(match_) = RIGHTMOST_SPACE_AT_START.find(sentence) {
match_.end()
} else {
0
}
}
///
/// A vocabulary built on top of the Model
///
/// This provides a way to add new vocabulary to a Tokenizer that has already been trained,
/// possibly in a previous process or by someone else. This is especially useful for
/// fine-tuning, where we want to adapt a model while adding new functionality through new
/// special tokens, or add extra tokens to handle otherwise-unknown tokens, etc.
///
/// One of the reasons we need to handle these tokens outside of the model is simply that
/// for many models, it is not possible to add new tokens after the training process. For example,
/// with BPE, the training process generates merge pairs along with the vocabulary, and any token
/// in the vocabulary can be decomposed into other tokens, down to the original alphabet. If we
/// were to add new tokens after this training process, we couldn't make sure the required merge
/// pairs exist.
///
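/// For example (a sketch mirroring the unit tests below; `model` and
/// `normalizer` are assumed to be in scope):
///
/// ```ignore
/// let mut vocab = AddedVocabulary::new();
/// vocab.add_tokens(&[AddedToken::from("my", false)], &model, normalizer);
/// let pretokenized = vocab.extract_and_normalize(normalizer, "my name");
/// ```
///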
#[derive(Clone, Debug)]
pub struct AddedVocabulary {
/// Contains the mapping from String (token content) to ID. This map contains both special
/// tokens and classic added tokens that were added to this vocabulary.
added_tokens_map: HashMap<String, u32>,
/// Contains the mapping from ID to AddedToken for all the added tokens, both special
/// and classic.
added_tokens_map_r: HashMap<u32, AddedToken>,
/// Contains only the classic AddedToken, in the specific order the user gave them.
added_tokens: Vec<AddedToken>,
/// Contains only the special AddedToken, in the specific order the user gave them.
special_tokens: Vec<AddedToken>,
/// A Set, containing all the special tokens for easy access while decoding. This lets
/// us remove them easily with O(1) complexity.
special_tokens_set: HashSet<String>,
/// A MatchingSet (Aho-Corasick automaton plus ids) with all the non-normalized patterns used to split on AddedTokens
split_trie: MatchingSet,
/// A MatchingSet (Aho-Corasick automaton plus ids) with all the normalized patterns used to split on AddedTokens
split_normalized_trie: MatchingSet,
/// Whether or not special tokens should be split when encoding. This is equivalent to ignoring them
encode_special_tokens: bool,
}
impl AddedVocabulary {
pub fn new() -> Self {
let trie = AhoCorasickBuilder::new()
.match_kind(MatchKind::LeftmostLongest)
.build::<_, &&[u8]>([])
.expect("The trie should build correctly");
let normalized_trie = AhoCorasickBuilder::new()
.match_kind(MatchKind::LeftmostLongest)
.build::<_, &&[u8]>([])
.expect("The normalized trie should build correctly");
Self {
added_tokens_map: HashMap::new(),
added_tokens_map_r: HashMap::new(),
added_tokens: vec![],
special_tokens: vec![],
special_tokens_set: HashSet::new(),
split_trie: (trie, vec![]),
split_normalized_trie: (normalized_trie, vec![]),
encode_special_tokens: false,
}
}
/// Size of the additional vocabulary
#[allow(dead_code)] // Suppress the "method is never used" warning
pub fn len(&self) -> usize {
self.added_tokens_map.len()
}
/// Whether or not this vocabulary is empty
pub fn is_empty(&self) -> bool {
self.added_tokens_map.is_empty()
}
/// Get the additional vocabulary
pub fn get_vocab(&self) -> &HashMap<String, u32> {
&self.added_tokens_map
}
/// Get the additional vocabulary with the AddedTokens
pub fn get_added_tokens_decoder(&self) -> &HashMap<u32, AddedToken> {
&self.added_tokens_map_r
}
/// Get the id matching one of our token if it exists
pub fn token_to_id(&self, token: &str, model: &impl Model) -> Option<u32> {
self.added_tokens_map
.get(token)
.copied()
.or_else(|| model.token_to_id(token))
}
/// Get the token matching the given id if it exists
#[deprecated(
since = "0.19.0",
note = "please use `added_vocabulary.simple_id_to_token(id).or_else(|| model.id_to_token(id))` instead"
)]
pub fn id_to_token(&self, id: u32, model: &impl Model) -> Option<String> {
self.added_tokens_map_r
.get(&id)
.map(|t| t.content.clone())
.or_else(|| model.id_to_token(id))
}
pub fn simple_id_to_token(&self, id: u32) -> Option<String> {
self.added_tokens_map_r.get(&id).map(|t| t.content.clone())
}
//
pub fn set_encode_special_tokens(&mut self, value: bool) {
self.encode_special_tokens = value;
}
pub fn get_encode_special_tokens(&self) -> bool {
self.encode_special_tokens
}
/// Check if a token is a special token
pub fn is_special_token(&self, token: &str) -> bool {
self.special_tokens_set.contains(token)
}
/// Add some special tokens to the vocabulary
pub fn add_special_tokens<N: Normalizer>(
&mut self,
tokens: &[AddedToken],
model: &impl Model,
normalizer: Option<&N>,
) -> usize {
self.add_tokens(tokens, model, normalizer)
}
/// Add some tokens to the vocabulary
pub fn add_tokens<N: Normalizer>(
&mut self,
tokens: &[AddedToken],
model: &impl Model,
normalizer: Option<&N>,
) -> usize {
// Handle special tokens (if any)
for token in tokens {
if token.special
&& !token.content.is_empty()
&& !self.special_tokens_set.contains(&token.content)
{
self.special_tokens.push(token.to_owned());
self.special_tokens_set.insert(token.content.clone());
}
}
// Then we delegate to `add_tokens`, that will take care of refreshing added tokens too.
let mut ignored = 0;
for token in tokens {
if token.content.is_empty() || self.added_tokens_map_r.values().any(|val| val == token)
{
ignored += 1;
continue;
}
// If a token is already part of the vocabulary, we mark it as added
let new_id = if let Some(new_id) = self.token_to_id(&token.content, model) {
new_id
} else {
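// Otherwise, assign a fresh id: the model's vocab size by default, or one
// past the highest already-assigned added-token id when that id is already
// at or beyond the model's range.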
self.added_tokens_map.values().cloned().max().map_or(
model.get_vocab_size() as u32,
|max| {
if (max >= model.get_vocab_size() as u32) || model.get_vocab_size() == 0 {
max + 1
} else {
model.get_vocab_size() as u32
}
},
)
};
// Make sure we modify the previous entry
self.added_tokens_map
.entry(token.content.clone())
.and_modify(|old_id| *old_id = new_id)
.or_insert_with(|| new_id);
// Update the current revert operation
self.added_tokens_map_r
.entry(new_id)
.and_modify(|t| *t = token.clone())
.or_insert_with(|| token.clone());
// Finally, add the token to the classic set if it is not special
if !self.special_tokens_set.contains(&token.content) {
self.added_tokens.push(token.clone());
}
}
self.refresh_added_tokens(model, normalizer);
// Return the number of added tokens
tokens.len() - ignored
}
/// Reconstruct our internal Aho-Corasick tries when new tokens are added to the vocabulary.
///
/// We keep two different tries, one that will take care of matching against the
/// non-normalized string, and one matching against the normalized one.
fn refresh_added_tokens<N: Normalizer>(&mut self, model: &impl Model, normalizer: Option<&N>) {
type TupleTokenId<'a> = (&'a AddedToken, u32);
let (normalized, non_normalized): (Vec<TupleTokenId>, Vec<TupleTokenId>) = self
.special_tokens
.iter()
.chain(self.added_tokens.iter())
.map(|token| {
(
token,
self.token_to_id(&token.content, model)
.expect("Missing additional token"),
)
})
.partition(|(token, _)| token.normalized);
let (tokens, ids): (Vec<&AddedToken>, Vec<u32>) = non_normalized.into_iter().unzip();
let trie = AhoCorasickBuilder::new()
.match_kind(MatchKind::LeftmostLongest)
.build(tokens.iter().map(|token| &token.content))
.expect("Failed to build tried when refreshing tokens");
self.split_trie = (trie, ids);
let (ntokens, nids): (Vec<&AddedToken>, Vec<u32>) = normalized.into_iter().unzip();
let patterns: Vec<_> = ntokens
.iter()
.map(|token| {
let mut content = NormalizedString::from(token.content.as_ref());
if let Some(n) = normalizer {
n.normalize(&mut content).unwrap();
}
content
})
.collect();
let normalized_trie = AhoCorasickBuilder::new()
.match_kind(MatchKind::LeftmostLongest)
.build(patterns.iter().map(|content| content.get()))
.expect("Failed to build tried when refreshing tokens (normalized)");
self.split_normalized_trie = (normalized_trie, nids);
}
/// Find any AddedToken in the given sentence, using the provided MatchingSet.
/// This method returns a list of "splits", each being a pair of Offsets
/// and an optional ID if it is an AddedToken.
/// The list of splits covers the entire input string.
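///
/// For example (a sketch): with the token `"my"` registered at id 0,
/// `find_matches("a my b", ...)` would yield
/// `[(None, (0, 2)), (Some(0), (2, 4)), (None, (4, 6))]`.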
fn find_matches(&self, sentence: &str, split_re: &MatchingSet) -> Vec<(Option<u32>, Offsets)> {
if sentence.is_empty() {
return vec![(None, (0, 0))];
}
let mut start_offset = 0;
let mut splits = vec![];
for mat in split_re.0.find_iter(sentence) {
let mut start = mat.start();
let mut stop = mat.end();
let aho_id = mat.pattern();
let id = split_re.1[aho_id];
let added_token = &self.added_tokens_map_r.get(&id).unwrap();
if self.encode_special_tokens && self.special_tokens_set.contains(&added_token.content)
{
continue;
}
if added_token.single_word {
let start_space = start == 0 || !ends_with_word(&sentence[..start]);
let stop_space = stop == sentence.len() || !starts_with_word(&sentence[stop..]);
if !stop_space || !start_space {
// Discard: not a single word
continue;
}
}
if added_token.lstrip {
// This will be strictly less than `start`, and a valid offset within the sentence
let newstart = space_leftmost_at_end(&sentence[..start]);
// The previous match could have already matched those spaces
// Ignore them if it's already matched
start = std::cmp::max(newstart, start_offset);
}
if added_token.rstrip {
// The whitespace match is computed on the suffix starting at `stop`, so we
// extend the previous `stop` value by its length
stop += space_rightmost_at_start(&sentence[stop..])
}
if start_offset < start {
splits.push((None, (start_offset, start)));
}
splits.push((Some(id), (start, stop)));
start_offset = stop;
}
let total_byte_len = sentence.len();
if start_offset != total_byte_len {
splits.push((None, (start_offset, total_byte_len)));
}
splits
}
/// Split the input sentence to extract anything we found from the `MatchingSet`, as well as
/// the list of corresponding IDs.
/// The list of IDs has exactly the same number of elements as the iterator.
fn split_with_indices(
&self,
sentence: NormalizedString,
split_re: &MatchingSet,
) -> Vec<(NormalizedString, Option<Vec<Token>>)> {
self.find_matches(sentence.get(), split_re)
.into_iter()
.map(|(id, byte_offsets)| {
let slice = sentence
.slice(Range::Normalized(byte_offsets.0..byte_offsets.1))
.expect("AddedVocabulary bad split");
if let Some(id) = id {
let value = slice.get().to_owned();
let len = value.len();
(slice, Some(vec![Token::new(id, value, (0, len))]))
} else {
(slice, None)
}
})
.collect()
}
/// Extract the additional vocabulary from the given sentence, normalizing it along the way.
///
/// Some tokens should match against their normalized representation, as well as the
/// non-normalized one. For example, when we expect to extract the token `yesterday` in the
/// input sentence `I read a book Yesterday`, if the normalizer is supposed to lowercase
/// everything, we expect a match.
pub fn extract_and_normalize<N: Normalizer>(
&self,
normalizer: Option<&N>,
sequence: &str,
) -> PreTokenizedString {
let mut pretokenized: PreTokenizedString = sequence.into();
// 1. We extract all the non-normalized tokens from the non-normalized string
pretokenized
.split(|_, sequence| Ok(self.split_with_indices(sequence, &self.split_trie)))
.expect("AddedVocabulary bad split");
// <s> normalized = False
// "I read a book <s>Hey" -> "I read a book", " <s>", "Hey"
// </s> normalized = True -> "▁</s>"
// "I read a book</s>Hey" -> "I read a book</s>Hey"
// Day normalized = True -> "Day"
// "I read a book monday" -> "I read a book monday"
// [DAY] normalized = False -> "Day"
// "I read a [DAY] monday" -> "I read a " "[DAY]", "book monday"
// 2. Then extract the normalized tokens from the normalized pieces of the string
pretokenized
.split(|_, mut sequence| {
normalizer.map(|n| n.normalize(&mut sequence));
Ok(self.split_with_indices(sequence, &self.split_normalized_trie))
})
.expect("AddedVocabulary bad split");
// ["I read a book", " <s>", "Hey"] -> ["▁I read a book", "▁ <s>", "▁Hey"]
// ["▁I read a book", "▁ <s>", "▁Hey"] -> [.., "▁ ", "<s>", "▁Hey"]
// </s> normalized = True -> "▁</s>"
// "I read a book</s>Hey" -> ["▁I read a book", "<","/","s",">", "Hey"]
// "I read a " "[DAY]", "book monday" -> "i read a " "[day]", "book monday"
pretokenized
}
}
impl Default for AddedVocabulary {
fn default() -> Self {
Self::new()
}
}
#[derive(Debug, Serialize, Deserialize)]
pub(super) struct AddedTokenWithId {
/// The id assigned to this token
pub id: u32,
#[serde(flatten)]
/// The target AddedToken
pub token: AddedToken,
}
impl Serialize for AddedVocabulary {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let mut added_tokens = self
.added_tokens_map_r
.iter()
.map(|(id, token)| AddedTokenWithId {
id: *id,
token: token.clone(),
})
.collect::<Vec<_>>();
// We need to have these added tokens ordered by ascending ID
added_tokens.sort_unstable_by_key(|o| o.id);
let mut vocabulary = serializer.serialize_seq(Some(added_tokens.len()))?;
for token in added_tokens {
vocabulary.serialize_element(&token)?;
}
vocabulary.end()
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::normalizers::byte_level::ByteLevel as ByteLevelNormalizer;
use crate::normalizers::utils::Lowercase;
use crate::normalizers::NormalizerWrapper;
use crate::{OffsetReferential, OffsetType, Result, Token, Trainer};
use std::path::{Path, PathBuf};
#[derive(Serialize, Deserialize)]
struct ModelMock {
vocab: HashMap<String, u32>,
vocab_r: HashMap<u32, String>,
}
impl ModelMock {
pub fn new<I>(iter: I) -> Self
where
I: IntoIterator<Item = &'static (&'static str, u32)>,
{
let vocab: HashMap<String, u32> = iter
.into_iter()
.map(|&(tok, id)| (tok.to_string(), id))
.collect();
Self {
vocab_r: vocab
.iter()
.map(|(tok, id)| (*id, tok.to_owned()))
.collect(),
vocab,
}
}
}
fn simplify_output(result: &'_ PreTokenizedString) -> Vec<(&'_ str, Option<Vec<u32>>)> {
result
.get_splits(OffsetReferential::Original, OffsetType::Byte)
.into_iter()
.map(|(s, _, tokens)| {
(
s,
tokens
.as_ref()
.map(|t| t.iter().map(|t| t.id).collect::<Vec<_>>()),
)
})
.collect::<Vec<_>>()
}
struct TrainerMock;
impl Trainer for TrainerMock {
type Model = ModelMock;
fn should_show_progress(&self) -> bool {
true
}
fn train(&self, _model: &mut ModelMock) -> Result<Vec<AddedToken>> {
unimplemented!()
}
fn feed<I, S, F>(&mut self, _iterator: I, _process: F) -> Result<()>
where
I: Iterator<Item = S> + Send,
S: AsRef<str> + Send,
F: Fn(&str) -> Result<Vec<String>> + Sync,
{
unimplemented!()
}
}
impl Model for ModelMock {
type Trainer = TrainerMock;
fn tokenize(&self, _sequence: &str) -> Result<Vec<Token>> {
unimplemented!()
}
fn token_to_id(&self, token: &str) -> Option<u32> {
self.vocab.get(token).copied()
}
fn id_to_token(&self, id: u32) -> Option<String> {
self.vocab_r.get(&id).cloned()
}
fn get_vocab(&self) -> HashMap<String, u32> {
self.vocab.clone()
}
fn get_vocab_size(&self) -> usize {
self.vocab.len()
}
fn save(&self, _folder: &Path, _name: Option<&str>) -> Result<Vec<PathBuf>> {
unimplemented!()
}
fn get_trainer(&self) -> Self::Trainer {
TrainerMock
}
}
#[test]
fn can_add_tokens() {
let model = ModelMock::new(&[("test", 0), ("tost", 1)]);
let mut vocab = AddedVocabulary::new();
let normalizer: Option<&NormalizerWrapper> = None;
// Add tokens normally
assert_eq!(
vocab.add_tokens(
&[AddedToken::from("added_token_1", false)],
&model,
normalizer
),
1
);
let vocab_len: usize = vocab.len();
assert_eq!(vocab_len, 1);
// Does not add the same token multiple times
assert_eq!(
vocab.add_tokens(
&[
AddedToken::from("added_token_2", false),
AddedToken::from("added_token_2", false)
],
&model,
normalizer
),
1
);
assert_eq!(vocab.len(), 2);
// Also adds tokens already covered by the model
let added_token = AddedToken::from("test", false);
assert_eq!(
vocab.add_tokens(&[added_token.clone()], &model, normalizer),
1
);
assert_eq!(vocab.len(), 3);
assert_eq!(vocab.get_added_tokens_decoder()[&0], added_token);
}
#[test]
fn can_add_special_tokens() {
let model = ModelMock::new(&[("test", 0), ("tost", 1)]);
let mut vocab = AddedVocabulary::new();
let normalizer: Option<&NormalizerWrapper> = None;
// Add tokens normally
assert_eq!(
vocab.add_special_tokens(
&[AddedToken::from("added_token_1", true)],
&model,
normalizer
),
1
);
assert_eq!(vocab.len(), 1);
// Does not add the same token multiple times
assert_eq!(
vocab.add_special_tokens(
&[
AddedToken::from("added_token_2", true),
AddedToken::from("added_token_2", true)
],
&model,
normalizer
),
1
);
assert_eq!(vocab.len(), 2);
// Can add tokens already covered by the model
assert_eq!(
vocab.add_special_tokens(&[AddedToken::from("test", true)], &model, normalizer),
1
);
assert_eq!(vocab.len(), 3); // New token was added
assert!(vocab.is_special_token("test"));
assert_eq!(
*vocab.get_added_tokens_decoder(),
HashMap::from([
(0, AddedToken::from("test", true)),
(2, AddedToken::from("added_token_1", true)),
(3, AddedToken::from("added_token_2", true)),
])
);
assert!(vocab.added_tokens_map.contains_key("test"));
assert!(vocab.added_tokens_map_r.contains_key(&0));
vocab.add_tokens(
&[
AddedToken::from("tost", true),
AddedToken::from("another_two", false),
],
&model,
normalizer,
);
assert_eq!(vocab.len(), 5); // New token was added
assert_eq!(vocab.get_vocab()["another_two"], 4); // New token was added, but the index is not the length of the vocab
// Let's add an already added token again
assert_eq!(
vocab.add_special_tokens(&[AddedToken::from("another_two", true)], &model, normalizer),
1
);
assert_eq!(vocab.len(), 5); // Token was already there
assert_eq!(vocab.get_vocab()["another_two"], 4); // Token idx not changed
// Just checking that we can set the content of the string in rust
let mut token: AddedToken = AddedToken::from("Hey", false);
token.content = "hey".to_string();
assert_eq!(token.content, "hey"); // Content was updated
token.special = true;
assert!(token.special); // Special flag was updated
}
#[test]
fn can_extract_added_tokens() {
// Is able to extract both normal and special tokens
let model = ModelMock::new(&[]);
let mut vocab = AddedVocabulary::new();
let normalizer: Option<&NormalizerWrapper> = None;
vocab.add_tokens(
&[
AddedToken::from("my", false),
AddedToken::from("name", false),
],
&model,
normalizer,
);
vocab.add_special_tokens(
&[
AddedToken::from("[CLS]", true),
AddedToken::from("[SEP]", true),
],
&model,
normalizer,
);
let result = vocab.extract_and_normalize(normalizer, "[CLS] My name is Anthony [SEP]");
assert_eq!(
result
.get_splits(OffsetReferential::Original, OffsetType::Byte)
.into_iter()
.map(|(s, _, tokens)| (
s,
tokens
.as_ref()
.map(|t| t.iter().map(|t| t.id).collect::<Vec<_>>())
))
.collect::<Vec<_>>(),
vec![
("[CLS]", Some(vec![2])),
(" My ", None),
("name", Some(vec![1])),
(" is Anthony ", None),
("[SEP]", Some(vec![3]))
]
);
}
#[test]
fn options_use_cases() {
// Is able to extract both normal and special tokens, with various options (lstrip, rstrip,
// single_word, normalized)
let model = ModelMock::new(&[]);
let normalizer = Lowercase;
let mut vocab = AddedVocabulary::new();
vocab.add_tokens(
&[
AddedToken::from("my", false).lstrip(true).rstrip(true),
AddedToken::from("name", false),
AddedToken::from("ony", false).single_word(true),
],
&model,
Some(&normalizer),
);
vocab.add_special_tokens(
&[
AddedToken::from("[CLS]", true),
AddedToken::from("[SEP]", true),
],
&model,
Some(&normalizer),
);
let result =
vocab.extract_and_normalize(Some(&normalizer), "[CLS] My name is Anthony [SEP]");
assert_eq!(
simplify_output(&result),
vec![
("[CLS]", Some(vec![3])),
// This one includes both spaces because of the lstrip & rstrip
// And it matches because normalized == true
(" my ", Some(vec![0])),
("name", Some(vec![1])),
// `ony` is not extracted here thanks to single_word
(" is anthony ", None),
("[SEP]", Some(vec![4])),
]
);
}
#[test]
fn empty_matches() {
let vocab = AddedVocabulary::new();
let matches = vocab.find_matches("", &vocab.split_trie);
assert_eq!(matches, vec![(None, (0, 0))]);
}
#[test]
fn test_single_word_is_correct() {
// Is able to extract both normal and special tokens, with various options (lstrip, rstrip,
// single_word, normalized)
let model = ModelMock::new(&[]);
let mut vocab = AddedVocabulary::new();
let normalizer = Lowercase;
vocab.add_tokens(
&[AddedToken::from("<mask>", false).single_word(true)],
&model,
Some(&normalizer),
);
// Left, in the middle, non single word left, non single word right, end of sentence valid
let result = vocab.extract_and_normalize(
Some(&normalizer),
"<mask> My name <mask> A<mask> <mask>ony <mask>",
);
assert_eq!(
simplify_output(&result),
vec![
("<mask>", Some(vec![0])),
(" my name ", None),
("<mask>", Some(vec![0])),
(" a<mask> <mask>ony ", None),
("<mask>", Some(vec![0]))
]
);
}
#[test]
fn test_single_word_is_unicode_correct() {
let model = ModelMock::new(&[]);
let mut vocab = AddedVocabulary::new();
let normalizer = Lowercase;
assert_eq!(vocab.len(), 0);
vocab.add_tokens(
&[AddedToken::from("<mask>", false).single_word(true)],
&model,
Some(&normalizer),
);
let result = vocab.extract_and_normalize(Some(&normalizer), "<mask>, <mask>- ◌̰<mask>");
assert_eq!(
simplify_output(&result),
vec![
// Punctuation is not word
("<mask>", Some(vec![0])),
(", ", None),
// dash is not word
("<mask>", Some(vec![0])),
// This is a Unicode combining mark character, which counts as a word character: https://en.wikipedia.org/wiki/Combining_Diacritical_Marks
("- ◌̰<mask>", None),
]
);
}
#[test]
fn test_lstrip_unicode_space() {
let model = ModelMock::new(&[]);
let mut vocab = AddedVocabulary::new();
let normalizer = Lowercase;
vocab.add_tokens(
&[AddedToken::from("<mask>", false)
.lstrip(true)
.rstrip(true)
.single_word(true)],
&model,
Some(&normalizer),
);
let result = vocab
.extract_and_normalize(Some(&normalizer), "Hi <mask> there\t<mask>\t<mask>\u{2000}");
assert_eq!(
simplify_output(&result),
vec![
("hi", None),
// Regular space
(" <mask> ", Some(vec![0])),
("there", None),
// \t is a spacing character
("\t<mask>\t", Some(vec![0])),
// Non overlapping
// \u{2000} is EN QUAD, a Unicode space character: https://jkorpela.fi/chars/spaces.html
("<mask>\u{2000}", Some(vec![0])),
]
);
}
#[test]
fn test_encode_special_tokens() {
let model = ModelMock::new(&[]);
let mut vocab = AddedVocabulary::new();
let normalizer = Lowercase;
vocab.add_tokens(
&[
AddedToken::from("<mask>", true)
.lstrip(true)
.rstrip(true)
.single_word(true),
AddedToken::from("ask>", false),
AddedToken::from("<pad>", true),
],
&model,
Some(&normalizer),
);
vocab.set_encode_special_tokens(true);
let result = vocab.extract_and_normalize(
Some(&normalizer),
"Hi <mask> there\t<mask>\t<mask>\u{2000} <pad> <mask><pad><pad>",
);
assert_eq!(
simplify_output(&result),
vec![
("hi <m", None),
("ask>", Some(vec![1])),
(" there\t<m", None),
("ask>", Some(vec![1])),
("\t<m", None),
("ask>", Some(vec![1])),
("\u{2000} <pad> <m", None),
("ask>", Some(vec![1])),
("<pad><pad>", None)
]
);
vocab.set_encode_special_tokens(false);
let result = vocab.extract_and_normalize(
Some(&normalizer),
"Hi <mask> there\t<mask>\t<mask>\u{2000} <pad> <mask><pad><pad>",
);
assert_eq!(
simplify_output(&result),
vec![
("hi", None),
(" <mask> ", Some(vec![0])),
("there", None),
("\t<mask>\t", Some(vec![0])),
("<mask>\u{2000} ", Some(vec![0])),
("<pad>", Some(vec![2])),
(" <mask>", Some(vec![0])),
("<pad>", Some(vec![2])),
("<pad>", Some(vec![2]))
]
);
}
#[test]
fn byte_level_normalizer() {
// Is able to extract both normal and special tokens
let model = ModelMock::new(&[]);
let mut vocab = AddedVocabulary::new();
let from = NormalizerWrapper::from(ByteLevelNormalizer::new());
let normalizer: Option<&NormalizerWrapper> = Some(&from);
vocab.add_tokens(
&[AddedToken::from("my", false), AddedToken::from("今", false)],
&model,
normalizer,
);
let result = vocab.extract_and_normalize(normalizer, "my今");
assert_eq!(
result
.get_splits(OffsetReferential::Original, OffsetType::Byte)
.into_iter()
.map(|(s, _, tokens)| (
s,
tokens
.as_ref()
.map(|t| t.iter().map(|t| t.id).collect::<Vec<_>>())
))
.collect::<Vec<_>>(),
vec![("my", Some(vec![0])), ("ä»Ĭ", Some(vec![1])),]
);
}
}
| tokenizers/tokenizers/src/tokenizer/added_vocabulary.rs/0 | {
"file_path": "tokenizers/tokenizers/src/tokenizer/added_vocabulary.rs",
"repo_id": "tokenizers",
"token_count": 17733
} |
use crate::tokenizer::{Encoding, Result};
use serde::{Deserialize, Serialize};
use std::cmp;
use std::mem;
#[derive(Debug, Clone, Copy, PartialEq, Serialize, Deserialize, Eq, Default)]
pub enum TruncationDirection {
Left,
#[default]
Right,
}
impl std::convert::AsRef<str> for TruncationDirection {
fn as_ref(&self) -> &str {
match self {
TruncationDirection::Left => "left",
TruncationDirection::Right => "right",
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TruncationParams {
#[serde(default)]
pub direction: TruncationDirection,
pub max_length: usize,
pub strategy: TruncationStrategy,
pub stride: usize,
}
impl Default for TruncationParams {
fn default() -> Self {
Self {
max_length: 512,
strategy: TruncationStrategy::default(),
stride: 0,
direction: TruncationDirection::default(),
}
}
}
#[derive(thiserror::Error, Debug)]
pub enum TruncationError {
/// We are supposed to truncate the pair sequence, but it has not been provided.
#[error("Truncation error: Second sequence not provided")]
SecondSequenceNotProvided,
/// We cannot truncate the target sequence enough to respect the provided max length.
#[error("Truncation error: Sequence to truncate too short to respect the provided max_length")]
SequenceTooShort,
}
#[derive(Debug, Clone, Copy, PartialEq, Serialize, Deserialize, Eq)]
pub enum TruncationStrategy {
LongestFirst,
OnlyFirst,
OnlySecond,
}
impl Default for TruncationStrategy {
fn default() -> Self {
Self::LongestFirst
}
}
impl std::convert::AsRef<str> for TruncationStrategy {
fn as_ref(&self) -> &str {
match self {
Self::LongestFirst => "longest_first",
Self::OnlyFirst => "only_first",
Self::OnlySecond => "only_second",
}
}
}
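/// Truncate the given encodings so that their combined length fits within
/// `params.max_length`, following `params.strategy`. For example (a sketch
/// matching the tests below): with `max_length = 7` and `LongestFirst`, a
/// pair of encodings with lengths (8, 4) is truncated to (4, 3).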
pub fn truncate_encodings(
mut encoding: Encoding,
mut pair_encoding: Option<Encoding>,
params: &TruncationParams,
) -> Result<(Encoding, Option<Encoding>)> {
if params.max_length == 0 {
encoding.truncate(0, params.stride, params.direction);
if let Some(other_encoding) = pair_encoding.as_mut() {
other_encoding.truncate(0, params.stride, params.direction);
}
return Ok((encoding, pair_encoding));
}
let total_length = encoding.get_ids().len()
+ pair_encoding
.as_ref()
.map(|e| e.get_ids().len())
.unwrap_or(0);
let to_remove = if total_length > params.max_length {
total_length - params.max_length
} else {
return Ok((encoding, pair_encoding));
};
match params.strategy {
TruncationStrategy::LongestFirst => {
if let Some(other_encoding) = pair_encoding.as_mut() {
// Assuming n1 <= n2, there are 3 cases
// Case 1:
// No truncation needs to be performed.
// This scenario is handled before the match.
// Case 2:
// Only the longer input needs to be truncated.
// n1 = n1
// n2 = max_length - n1
// Case 3:
// Both inputs must be truncated.
// n1 = max_length / 2
// n2 = n1 + max_length % 2
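// Worked example (a sketch matching the tests below): with
// max_length = 7, n1 = 4, n2 = 8, case 2 would give
// n2 = max(4, 7 - 4) = 4, so n1 + n2 = 8 > 7, and case 3 applies:
// n1 = 7 / 2 = 3, n2 = 3 + 7 % 2 = 4.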
let mut n1 = encoding.get_ids().len();
let mut n2 = other_encoding.get_ids().len();
let mut swap = false;
// Ensure n1 is the length of the shortest input
if n1 > n2 {
swap = true;
mem::swap(&mut n1, &mut n2);
}
if n1 > params.max_length {
// This needs to be a special case
// to avoid max_length - n1 < 0
// since n1 and n2 are unsigned
n2 = n1;
} else {
n2 = cmp::max(n1, params.max_length - n1);
}
if n1 + n2 > params.max_length {
n1 = params.max_length / 2;
n2 = n1 + params.max_length % 2;
}
// Swap lengths if we swapped previously
if swap {
mem::swap(&mut n1, &mut n2);
}
encoding.truncate(n1, params.stride, params.direction);
other_encoding.truncate(n2, params.stride, params.direction);
} else {
encoding.truncate(total_length - to_remove, params.stride, params.direction);
}
}
TruncationStrategy::OnlyFirst | TruncationStrategy::OnlySecond => {
let target = if params.strategy == TruncationStrategy::OnlyFirst {
Ok(&mut encoding)
} else if let Some(encoding) = pair_encoding.as_mut() {
Ok(encoding)
} else {
Err(Box::new(TruncationError::SecondSequenceNotProvided))
}?;
let target_len = target.get_ids().len();
if target_len > to_remove {
target.truncate(target_len - to_remove, params.stride, params.direction);
} else {
return Err(Box::new(TruncationError::SequenceTooShort));
}
}
}
Ok((encoding, pair_encoding))
}
#[cfg(test)]
mod tests {
use super::*;
use crate::tokenizer::Encoding;
use std::collections::HashMap;
fn get_empty() -> Encoding {
Encoding::new(
vec![],
vec![],
vec![],
vec![],
vec![],
vec![],
vec![],
vec![],
HashMap::new(),
)
}
fn get_short() -> Encoding {
Encoding::new(
vec![1, 2],
vec![0, 0],
vec![String::from("a"), String::from("b")],
vec![Some(0), Some(1)],
vec![(0, 1), (1, 2)],
vec![0, 0],
vec![1, 1],
vec![],
HashMap::new(),
)
}
fn get_medium() -> Encoding {
Encoding::new(
vec![3, 4, 5, 6],
vec![0, 0, 0, 0],
vec![
String::from("d"),
String::from("e"),
String::from("f"),
String::from("g"),
],
vec![Some(0), Some(1), Some(2), Some(3)],
vec![(0, 1), (1, 2), (2, 3), (3, 4)],
vec![0, 0, 0, 0],
vec![1, 1, 1, 1],
vec![],
HashMap::new(),
)
}
fn get_long() -> Encoding {
Encoding::new(
vec![7, 8, 9, 10, 11, 12, 13, 14],
vec![0, 0, 0, 0, 0, 0, 0, 0],
vec![
String::from("h"),
String::from("i"),
String::from("j"),
String::from("k"),
String::from("l"),
String::from("m"),
String::from("n"),
String::from("o"),
],
vec![
Some(0),
Some(1),
Some(2),
Some(3),
Some(4),
Some(5),
Some(6),
Some(7),
],
vec![
(0, 1),
(1, 2),
(2, 3),
(3, 4),
(4, 5),
(5, 6),
(6, 7),
(6, 8),
],
vec![0, 0, 0, 0, 0, 0, 0, 0],
vec![1, 1, 1, 1, 1, 1, 1, 1],
vec![],
HashMap::new(),
)
}
fn truncate_and_assert(
encoding1: Encoding,
encoding2: Encoding,
params: &TruncationParams,
n1: usize,
n2: usize,
) {
match truncate_encodings(encoding1, Some(encoding2), params) {
Ok((e1, Some(e2))) => {
assert!(e1.get_ids().len() == n1);
assert!(e2.get_ids().len() == n2);
}
_ => panic!(),
};
}
#[test]
fn truncate_encodings_longest_first() {
let params = TruncationParams {
max_length: 7,
strategy: TruncationStrategy::LongestFirst,
stride: 0,
direction: TruncationDirection::Right,
};
truncate_and_assert(get_empty(), get_empty(), ¶ms, 0, 0);
truncate_and_assert(get_empty(), get_short(), ¶ms, 0, 2);
truncate_and_assert(get_empty(), get_medium(), ¶ms, 0, 4);
truncate_and_assert(get_empty(), get_long(), ¶ms, 0, 7);
truncate_and_assert(get_short(), get_empty(), ¶ms, 2, 0);
truncate_and_assert(get_short(), get_short(), ¶ms, 2, 2);
truncate_and_assert(get_short(), get_medium(), ¶ms, 2, 4);
truncate_and_assert(get_short(), get_long(), ¶ms, 2, 5);
truncate_and_assert(get_medium(), get_empty(), ¶ms, 4, 0);
truncate_and_assert(get_medium(), get_short(), ¶ms, 4, 2);
truncate_and_assert(get_medium(), get_medium(), ¶ms, 3, 4);
truncate_and_assert(get_medium(), get_long(), ¶ms, 3, 4);
truncate_and_assert(get_long(), get_empty(), ¶ms, 7, 0);
truncate_and_assert(get_long(), get_short(), ¶ms, 5, 2);
truncate_and_assert(get_long(), get_medium(), ¶ms, 4, 3);
truncate_and_assert(get_long(), get_long(), ¶ms, 3, 4);
}
#[test]
fn truncate_encodings_empty() {
let params = TruncationParams {
max_length: 0,
strategy: TruncationStrategy::LongestFirst,
stride: 0,
direction: TruncationDirection::Right,
};
truncate_and_assert(get_empty(), get_short(), ¶ms, 0, 0);
truncate_and_assert(get_medium(), get_medium(), ¶ms, 0, 0);
truncate_and_assert(get_long(), get_long(), ¶ms, 0, 0);
}
#[test]
fn test_deserialize_defaults() {
let old_truncation_params = r#"{"max_length":256,"strategy":"LongestFirst","stride":0}"#;
let params: TruncationParams = serde_json::from_str(old_truncation_params).unwrap();
assert_eq!(params.direction, TruncationDirection::Right);
}
}
| tokenizers/tokenizers/src/utils/truncation.rs/0 | {
"file_path": "tokenizers/tokenizers/src/utils/truncation.rs",
"repo_id": "tokenizers",
"token_count": 5471
} |
By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@huggingface/[email protected]/dist/), which should work out-of-the-box. You can customize this as follows:
### Settings
```javascript
import { env } from '@huggingface/transformers';
// Specify a custom location for models (defaults to '/models/').
env.localModelPath = '/path/to/models/';
// Disable the loading of remote models from the Hugging Face Hub:
env.allowRemoteModels = false;
// Set location of .wasm files. Defaults to use a CDN.
env.backends.onnx.wasm.wasmPaths = '/path/to/files/';
```
For a full list of available settings, check out the [API Reference](./api/env).
### Convert your models to ONNX
We recommend using our [conversion script](https://github.com/huggingface/transformers.js/blob/main/scripts/convert.py) to convert your PyTorch, TensorFlow, or JAX models to ONNX in a single command. Behind the scenes, it uses [🤗 Optimum](https://huggingface.co/docs/optimum) to perform conversion and quantization of your model.
```bash
python -m scripts.convert --quantize --model_id <model_name_or_path>
```
For example, convert and quantize [bert-base-uncased](https://huggingface.co/bert-base-uncased) using:
```bash
python -m scripts.convert --quantize --model_id bert-base-uncased
```
This will save the following files to `./models/`:
```
bert-base-uncased/
├── config.json
├── tokenizer.json
├── tokenizer_config.json
└── onnx/
├── model.onnx
└── model_quantized.onnx
```
For the full list of supported architectures, see the [Optimum documentation](https://huggingface.co/docs/optimum/main/en/exporters/onnx/overview).
| transformers.js/docs/snippets/4_custom-usage.snippet/0 | {
"file_path": "transformers.js/docs/snippets/4_custom-usage.snippet",
"repo_id": "transformers.js",
"token_count": 588
} |
# Server-side Inference in Node.js
Although Transformers.js was originally designed to be used in the browser, it's also able to run inference on the server. In this tutorial, we will design a simple Node.js API that uses Transformers.js for sentiment analysis.
We'll also show you how to use the library in both CommonJS and ECMAScript modules, so you can choose the module system that works best for your project:
- [ECMAScript modules (ESM)](#ecmascript-modules-esm) - The official standard format
to package JavaScript code for reuse. It's the default module system in modern
browsers, with modules imported using `import` and exported using `export`.
Fortunately, starting with version 13.2.0, Node.js has stable support for ES modules.
- [CommonJS](#commonjs) - The default module system in Node.js. In this system,
modules are imported using `require()` and exported using `module.exports`.
<Tip>
Although you can always use the [Python library](https://github.com/huggingface/transformers) for server-side inference, using Transformers.js means that you can write all of your code in JavaScript (instead of having to set up and communicate with a separate Python process).
</Tip>
**Useful links:**
- Source code ([ESM](https://github.com/huggingface/transformers.js/tree/main/examples/node/esm/app.js) or [CommonJS](https://github.com/huggingface/transformers.js/tree/main/examples/node/commonjs/app.js))
- [Documentation](https://huggingface.co/docs/transformers.js)
## Prerequisites
- [Node.js](https://nodejs.org/en/) version 18+
- [npm](https://www.npmjs.com/) version 9+
## Getting started
Let's start by creating a new Node.js project and installing Transformers.js via [NPM](https://www.npmjs.com/package/@huggingface/transformers):
```bash
npm init -y
npm i @huggingface/transformers
```
Next, create a new file called `app.js`, which will be the entry point for our application. Depending on whether you're using [ECMAScript modules](#ecmascript-modules-esm) or [CommonJS](#commonjs), you will need to do some things differently (see below).
We'll also create a helper class called `MyClassificationPipeline` to control the loading of the pipeline. It uses the [singleton pattern](https://en.wikipedia.org/wiki/Singleton_pattern) to lazily create a single instance of the pipeline when `getInstance` is first called, and uses this pipeline for all subsequent calls:
### ECMAScript modules (ESM)
To indicate that your project uses ECMAScript modules, you need to add `"type": "module"` to your `package.json`:
```json
{
...
"type": "module",
...
}
```
Next, you will need to add the following imports to the top of `app.js`:
```javascript
import http from 'http';
import querystring from 'querystring';
import url from 'url';
```
Following that, let's import Transformers.js and define the `MyClassificationPipeline` class.
```javascript
import { pipeline, env } from '@huggingface/transformers';
class MyClassificationPipeline {
static task = 'text-classification';
static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';
static instance = null;
static async getInstance(progress_callback = null) {
if (this.instance === null) {
// NOTE: Uncomment this to change the cache directory
// env.cacheDir = './.cache';
this.instance = pipeline(this.task, this.model, { progress_callback });
}
return this.instance;
}
}
```
### CommonJS
Start by adding the following imports to the top of `app.js`:
```javascript
const http = require('http');
const querystring = require('querystring');
const url = require('url');
```
Following that, let's import Transformers.js and define the `MyClassificationPipeline` class. Since Transformers.js is an ESM module, we will need to dynamically import the library using the [`import()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/import) function:
```javascript
class MyClassificationPipeline {
static task = 'text-classification';
static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';
static instance = null;
static async getInstance(progress_callback = null) {
if (this.instance === null) {
// Dynamically import the Transformers.js library
let { pipeline, env } = await import('@huggingface/transformers');
// NOTE: Uncomment this to change the cache directory
// env.cacheDir = './.cache';
this.instance = pipeline(this.task, this.model, { progress_callback });
}
return this.instance;
}
}
```
## Creating a basic HTTP server
Next, let's create a basic server with the built-in [HTTP](https://nodejs.org/api/http.html#http) module. We will listen for requests made to the server (using the `/classify` endpoint), extract the `text` query parameter, and run this through the pipeline.
```javascript
// Define the HTTP server
const server = http.createServer();
const hostname = '127.0.0.1';
const port = 3000;
// Listen for requests made to the server
server.on('request', async (req, res) => {
// Parse the request URL
const parsedUrl = url.parse(req.url);
// Extract the query parameters
const { text } = querystring.parse(parsedUrl.query);
// Set the response headers
res.setHeader('Content-Type', 'application/json');
let response;
if (parsedUrl.pathname === '/classify' && text) {
const classifier = await MyClassificationPipeline.getInstance();
response = await classifier(text);
res.statusCode = 200;
} else {
response = { 'error': 'Bad request' }
res.statusCode = 400;
}
// Send the JSON response
res.end(JSON.stringify(response));
});
server.listen(port, hostname, () => {
console.log(`Server running at http://${hostname}:${port}/`);
});
```
<Tip>
Since we use lazy loading, the first request made to the server will also be responsible for loading the pipeline. If you would like to begin loading the pipeline as soon as the server starts running, you can add the following line of code after defining `MyClassificationPipeline`:
```javascript
MyClassificationPipeline.getInstance();
```
</Tip>
To start the server, run the following command:
```bash
node app.js
```
The server should be live at http://127.0.0.1:3000/, which you can visit in your web browser. You should see the following message:
```json
{"error":"Bad request"}
```
This is because we aren't targeting the `/classify` endpoint with a valid `text` query parameter. Let's try again, this time with a valid request. For example, you can visit http://127.0.0.1:3000/classify?text=I%20love%20Transformers.js and you should see:
```json
[{"label":"POSITIVE","score":0.9996721148490906}]
```
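If you prefer the command line, the same request can be made with a tool like `curl` (assuming the server is running locally):
```bash
curl "http://127.0.0.1:3000/classify?text=I%20love%20Transformers.js"
```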
Great! We've successfully created a basic HTTP server that uses Transformers.js to classify text.
## (Optional) Customization
### Model caching
By default, the first time you run the application, it will download the model files and cache them on your file system (in `./node_modules/@huggingface/transformers/.cache/`). All subsequent requests will then use this model. You can change the location of the cache by setting `env.cacheDir`. For example, to cache the model in the `.cache` directory in the current working directory, you can add:
```javascript
env.cacheDir = './.cache';
```
### Use local models
If you want to use local model files, you can set `env.localModelPath` as follows:
```javascript
// Specify a custom location for models (defaults to '/models/').
env.localModelPath = '/path/to/models/';
```
You can also disable loading of remote models by setting `env.allowRemoteModels` to `false`:
```javascript
// Disable the loading of remote models from the Hugging Face Hub:
env.allowRemoteModels = false;
```
| transformers.js/docs/source/tutorials/node.md/0 | {
"file_path": "transformers.js/docs/source/tutorials/node.md",
"repo_id": "transformers.js",
"token_count": 2271
} |
module.exports = {
env: { browser: true, es2020: true, node: true },
extends: [
'eslint:recommended',
'plugin:react/recommended',
'plugin:react/jsx-runtime',
'plugin:react-hooks/recommended',
],
parserOptions: { ecmaVersion: 'latest', sourceType: 'module' },
settings: { react: { version: '18.2' } },
plugins: ['react-refresh'],
rules: {
'react-refresh/only-export-components': 'warn',
'react/prop-types': 'off',
},
}
| transformers.js/examples/code-completion/.eslintrc.cjs/0 | {
"file_path": "transformers.js/examples/code-completion/.eslintrc.cjs",
"repo_id": "transformers.js",
"token_count": 179
} |
@charset "UTF-8";
/*!
* Start Bootstrap - Business Frontpage v5.0.7 (https://startbootstrap.com/template/business-frontpage)
* Copyright 2013-2021 Start Bootstrap
* Licensed under MIT (https://github.com/StartBootstrap/startbootstrap-business-frontpage/blob/master/LICENSE)
*/
/*!
* Bootstrap v5.1.3 (https://getbootstrap.com/)
* Copyright 2011-2021 The Bootstrap Authors
* Copyright 2011-2021 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE)
*/
:root {
color: #fff;
background-color: #212529;
border-color: #212529;
}
.btn-check:checked+.btn-outline-dark:focus,
.btn-check:active+.btn-outline-dark:focus,
.btn-outline-dark:active:focus,
.btn-outline-dark.active:focus,
.btn-outline-dark.dropdown-toggle.show:focus {
box-shadow: 0 0 0 0.25rem rgba(33, 37, 41, 0.5);
}
.btn-outline-dark:disabled,
.btn-outline-dark.disabled {
color: #212529;
background-color: transparent;
}
.btn-link {
font-weight: 400;
color: #0d6efd;
text-decoration: underline;
}
.btn-link:hover {
color: #0a58ca;
}
.btn-link:disabled,
.btn-link.disabled {
color: #6c757d;
}
.btn-lg,
.btn-group-lg>.btn {
padding: 0.5rem 1rem;
font-size: 1.25rem;
border-radius: 0.3rem;
}
.btn-sm,
.btn-group-sm>.btn {
padding: 0.25rem 0.5rem;
font-size: 0.875rem;
border-radius: 0.2rem;
}
.fade {
transition: opacity 0.15s linear;
}
@media (prefers-reduced-motion: reduce) {
.fade {
transition: none;
}
}
.fade:not(.show) {
opacity: 0;
}
.collapse:not(.show) {
display: none;
}
.collapsing {
height: 0;
overflow: hidden;
transition: height 0.35s ease;
}
@media (prefers-reduced-motion: reduce) {
.collapsing {
transition: none;
}
}
.collapsing.collapse-horizontal {
width: 0;
height: auto;
transition: width 0.35s ease;
}
@media (prefers-reduced-motion: reduce) {
.collapsing.collapse-horizontal {
transition: none;
}
}
.dropup,
.dropend,
.dropdown,
.dropstart {
position: relative;
}
.dropdown-toggle {
white-space: nowrap;
}
.dropdown-toggle::after {
display: inline-block;
margin-left: 0.255em;
vertical-align: 0.255em;
content: "";
border-top: 0.3em solid;
border-right: 0.3em solid transparent;
border-bottom: 0;
border-left: 0.3em solid transparent;
}
.dropdown-toggle:empty::after {
margin-left: 0;
}
.dropdown-menu {
position: absolute;
z-index: 1000;
display: none;
min-width: 10rem;
padding: 0.5rem 0;
margin: 0;
font-size: 1rem;
color: #212529;
text-align: left;
list-style: none;
background-color: #fff;
background-clip: padding-box;
border: 1px solid rgba(0, 0, 0, 0.15);
border-radius: 0.25rem;
}
.dropdown-menu[data-bs-popper] {
top: 100%;
left: 0;
margin-top: 0.125rem;
}
.dropdown-menu-start {
--bs-position: start;
}
.dropdown-menu-start[data-bs-popper] {
right: auto;
left: 0;
}
.dropdown-menu-end {
--bs-position: end;
}
.dropdown-menu-end[data-bs-popper] {
right: 0;
left: auto;
}
@media (min-width: 576px) {
.dropdown-menu-sm-start {
--bs-position: start;
}
.dropdown-menu-sm-start[data-bs-popper] {
right: auto;
left: 0;
}
.dropdown-menu-sm-end {
--bs-position: end;
}
.dropdown-menu-sm-end[data-bs-popper] {
right: 0;
left: auto;
}
}
@media (min-width: 768px) {
.dropdown-menu-md-start {
--bs-position: start;
}
.dropdown-menu-md-start[data-bs-popper] {
right: auto;
left: 0;
}
.dropdown-menu-md-end {
--bs-position: end;
}
.dropdown-menu-md-end[data-bs-popper] {
right: 0;
left: auto;
}
}
@media (min-width: 992px) {
.dropdown-menu-lg-start {
--bs-position: start;
}
.dropdown-menu-lg-start[data-bs-popper] {
right: auto;
left: 0;
}
.dropdown-menu-lg-end {
--bs-position: end;
}
.dropdown-menu-lg-end[data-bs-popper] {
right: 0;
left: auto;
}
}
@media (min-width: 1200px) {
.dropdown-menu-xl-start {
--bs-position: start;
}
.dropdown-menu-xl-start[data-bs-popper] {
right: auto;
left: 0;
}
.dropdown-menu-xl-end {
--bs-position: end;
}
.dropdown-menu-xl-end[data-bs-popper] {
right: 0;
left: auto;
}
}
@media (min-width: 1400px) {
.dropdown-menu-xxl-start {
--bs-position: start;
}
.dropdown-menu-xxl-start[data-bs-popper] {
right: auto;
left: 0;
}
.dropdown-menu-xxl-end {
--bs-position: end;
}
.dropdown-menu-xxl-end[data-bs-popper] {
right: 0;
left: auto;
}
}
.dropup .dropdown-menu[data-bs-popper] {
top: auto;
bottom: 100%;
margin-top: 0;
margin-bottom: 0.125rem;
}
.dropup .dropdown-toggle::after {
display: inline-block;
margin-left: 0.255em;
vertical-align: 0.255em;
content: "";
border-top: 0;
border-right: 0.3em solid transparent;
border-bottom: 0.3em solid;
border-left: 0.3em solid transparent;
}
.dropup .dropdown-toggle:empty::after {
margin-left: 0;
}
.dropend .dropdown-menu[data-bs-popper] {
top: 0;
right: auto;
left: 100%;
margin-top: 0;
margin-left: 0.125rem;
}
.dropend .dropdown-toggle::after {
display: inline-block;
margin-left: 0.255em;
vertical-align: 0.255em;
content: "";
border-top: 0.3em solid transparent;
border-right: 0;
border-bottom: 0.3em solid transparent;
border-left: 0.3em solid;
}
.dropend .dropdown-toggle:empty::after {
margin-left: 0;
}
.dropend .dropdown-toggle::after {
vertical-align: 0;
}
.dropstart .dropdown-menu[data-bs-popper] {
top: 0;
right: 100%;
left: auto;
margin-top: 0;
margin-right: 0.125rem;
}
.dropstart .dropdown-toggle::after {
display: inline-block;
margin-left: 0.255em;
vertical-align: 0.255em;
content: "";
}
.dropstart .dropdown-toggle::after {
display: none;
}
.dropstart .dropdown-toggle::before {
display: inline-block;
margin-right: 0.255em;
vertical-align: 0.255em;
content: "";
border-top: 0.3em solid transparent;
border-right: 0.3em solid;
border-bottom: 0.3em solid transparent;
}
.dropstart .dropdown-toggle:empty::after {
margin-left: 0;
}
.dropstart .dropdown-toggle::before {
vertical-align: 0;
}
.dropdown-divider {
height: 0;
margin: 0.5rem 0;
overflow: hidden;
border-top: 1px solid rgba(0, 0, 0, 0.15);
}
.dropdown-item {
display: block;
width: 100%;
padding: 0.25rem 1rem;
clear: both;
font-weight: 400;
color: #212529;
text-align: inherit;
text-decoration: none;
white-space: nowrap;
background-color: transparent;
border: 0;
}
.dropdown-item:hover,
.dropdown-item:focus {
color: #1e2125;
background-color: #e9ecef;
}
.dropdown-item.active,
.dropdown-item:active {
color: #fff;
text-decoration: none;
background-color: #0d6efd;
}
.dropdown-item.disabled,
.dropdown-item:disabled {
color: #adb5bd;
pointer-events: none;
background-color: transparent;
}
.dropdown-menu.show {
display: block;
}
.dropdown-header {
display: block;
padding: 0.5rem 1rem;
margin-bottom: 0;
font-size: 0.875rem;
color: #6c757d;
white-space: nowrap;
}
.dropdown-item-text {
display: block;
padding: 0.25rem 1rem;
color: #212529;
}
.dropdown-menu-dark {
color: #dee2e6;
background-color: #343a40;
border-color: rgba(0, 0, 0, 0.15);
}
.dropdown-menu-dark .dropdown-item {
color: #dee2e6;
}
.dropdown-menu-dark .dropdown-item:hover,
.dropdown-menu-dark .dropdown-item:focus {
color: #fff;
background-color: rgba(255, 255, 255, 0.15);
}
.dropdown-menu-dark .dropdown-item.active,
.dropdown-menu-dark .dropdown-item:active {
color: #fff;
background-color: #0d6efd;
}
.dropdown-menu-dark .dropdown-item.disabled,
.dropdown-menu-dark .dropdown-item:disabled {
color: #adb5bd;
}
.dropdown-menu-dark .dropdown-divider {
border-color: rgba(0, 0, 0, 0.15);
}
.dropdown-menu-dark .dropdown-item-text {
color: #dee2e6;
}
.dropdown-menu-dark .dropdown-header {
color: #adb5bd;
}
.btn-group,
.btn-group-vertical {
position: relative;
display: inline-flex;
vertical-align: middle;
}
.btn-group>.btn,
.btn-group-vertical>.btn {
position: relative;
flex: 1 1 auto;
}
.btn-group>.btn-check:checked+.btn,
.btn-group>.btn-check:focus+.btn,
.btn-group>.btn:hover,
.btn-group>.btn:focus,
.btn-group>.btn:active,
.btn-group>.btn.active,
.btn-group-vertical>.btn-check:checked+.btn,
.btn-group-vertical>.btn-check:focus+.btn,
.btn-group-vertical>.btn:hover,
.btn-group-vertical>.btn:focus,
.btn-group-vertical>.btn:active,
.btn-group-vertical>.btn.active {
z-index: 1;
}
.btn-toolbar {
display: flex;
flex-wrap: wrap;
justify-content: flex-start;
}
.btn-toolbar .input-group {
width: auto;
}
.btn-group>.btn:not(:first-child),
.btn-group>.btn-group:not(:first-child) {
margin-left: -1px;
}
.btn-group>.btn:not(:last-child):not(.dropdown-toggle),
.btn-group>.btn-group:not(:last-child)>.btn {
border-top-right-radius: 0;
border-bottom-right-radius: 0;
}
.btn-group>.btn:nth-child(n+3),
.btn-group> :not(.btn-check)+.btn,
.btn-group>.btn-group:not(:first-child)>.btn {
border-top-left-radius: 0;
border-bottom-left-radius: 0;
}
.dropdown-toggle-split {
padding-right: 0.5625rem;
padding-left: 0.5625rem;
}
.dropdown-toggle-split::after,
.dropup .dropdown-toggle-split::after,
.dropend .dropdown-toggle-split::after {
margin-left: 0;
}
.dropstart .dropdown-toggle-split::before {
margin-right: 0;
}
.btn-sm+.dropdown-toggle-split,
.btn-group-sm>.btn+.dropdown-toggle-split {
padding-right: 0.375rem;
padding-left: 0.375rem;
}
.btn-lg+.dropdown-toggle-split,
.btn-group-lg>.btn+.dropdown-toggle-split {
padding-right: 0.75rem;
padding-left: 0.75rem;
}
.btn-group-vertical {
flex-direction: column;
align-items: flex-start;
justify-content: center;
}
.btn-group-vertical>.btn,
.btn-group-vertical>.btn-group {
width: 100%;
}
.btn-group-vertical>.btn:not(:first-child),
.btn-group-vertical>.btn-group:not(:first-child) {
margin-top: -1px;
}
.btn-group-vertical>.btn:not(:last-child):not(.dropdown-toggle),
.btn-group-vertical>.btn-group:not(:last-child)>.btn {
border-bottom-right-radius: 0;
border-bottom-left-radius: 0;
}
.btn-group-vertical>.btn~.btn,
.btn-group-vertical>.btn-group:not(:first-child)>.btn {
border-top-left-radius: 0;
border-top-right-radius: 0;
}
.nav {
display: flex;
flex-wrap: wrap;
padding-left: 0;
margin-bottom: 0;
list-style: none;
}
.nav-link {
display: block;
padding: 0.5rem 1rem;
color: #0d6efd;
text-decoration: none;
transition: color 0.15s ease-in-out, background-color 0.15s ease-in-out, border-color 0.15s ease-in-out;
}
@media (prefers-reduced-motion: reduce) {
.nav-link {
transition: none;
}
}
.nav-link:hover,
.nav-link:focus {
color: #0a58ca;
}
.nav-link.disabled {
color: #6c757d;
pointer-events: none;
cursor: default;
}
.nav-tabs {
border-bottom: 1px solid #dee2e6;
}
.nav-tabs .nav-link {
margin-bottom: -1px;
background: none;
border: 1px solid transparent;
border-top-left-radius: 0.25rem;
border-top-right-radius: 0.25rem;
}
.nav-tabs .nav-link:hover,
.nav-tabs .nav-link:focus {
border-color: #e9ecef #e9ecef #dee2e6;
isolation: isolate;
}
.nav-tabs .nav-link.disabled {
color: #6c757d;
background-color: transparent;
border-color: transparent;
}
.nav-tabs .nav-link.active,
.nav-tabs .nav-item.show .nav-link {
color: #495057;
background-color: #fff;
border-color: #dee2e6 #dee2e6 #fff;
}
.nav-tabs .dropdown-menu {
margin-top: -1px;
border-top-left-radius: 0;
border-top-right-radius: 0;
}
.nav-pills .nav-link {
background: none;
border: 0;
border-radius: 0.25rem;
}
.nav-pills .nav-link.active,
.nav-pills .show>.nav-link {
color: #fff;
background-color: #0d6efd;
}
.nav-fill>.nav-link,
.nav-fill .nav-item {
flex: 1 1 auto;
text-align: center;
}
.nav-justified>.nav-link,
.nav-justified .nav-item {
flex-basis: 0;
flex-grow: 1;
text-align: center;
}
.nav-fill .nav-item .nav-link,
.nav-justified .nav-item .nav-link {
width: 100%;
}
.tab-content>.tab-pane {
display: none;
}
.tab-content>.active {
display: block;
}
.navbar {
position: relative;
display: flex;
flex-wrap: wrap;
align-items: center;
justify-content: space-between;
padding-top: 0.5rem;
padding-bottom: 0.5rem;
}
.navbar>.container,
.navbar>.container-fluid,
.navbar>.container-sm,
.navbar>.container-md,
.navbar>.container-lg,
.navbar>.container-xl,
.navbar>.container-xxl {
display: flex;
flex-wrap: inherit;
align-items: center;
justify-content: space-between;
}
.navbar-brand {
padding-top: 0.3125rem;
padding-bottom: 0.3125rem;
margin-right: 1rem;
font-size: 1.25rem;
text-decoration: none;
white-space: nowrap;
}
.navbar-nav {
display: flex;
flex-direction: column;
padding-left: 0;
margin-bottom: 0;
list-style: none;
}
.navbar-nav .nav-link {
padding-right: 0;
padding-left: 0;
}
.navbar-nav .dropdown-menu {
position: static;
}
.navbar-text {
padding-top: 0.5rem;
padding-bottom: 0.5rem;
}
.navbar-collapse {
flex-basis: 100%;
flex-grow: 1;
align-items: center;
}
.navbar-toggler {
padding: 0.25rem 0.75rem;
font-size: 1.25rem;
line-height: 1;
background-color: transparent;
border: 1px solid transparent;
border-radius: 0.25rem;
transition: box-shadow 0.15s ease-in-out;
}
@media (prefers-reduced-motion: reduce) {
.navbar-toggler {
transition: none;
}
}
.navbar-toggler:hover {
text-decoration: none;
}
.navbar-toggler:focus {
text-decoration: none;
outline: 0;
box-shadow: 0 0 0 0.25rem;
}
.navbar-toggler-icon {
display: inline-block;
width: 1.5em;
height: 1.5em;
vertical-align: middle;
background-repeat: no-repeat;
background-position: center;
background-size: 100%;
}
.navbar-nav-scroll {
max-height: var(--bs-scroll-height, 75vh);
overflow-y: auto;
}
@media (min-width: 576px) {
.navbar-expand-sm {
flex-wrap: nowrap;
justify-content: flex-start;
}
.navbar-expand-sm .navbar-nav {
flex-direction: row;
}
.navbar-expand-sm .navbar-nav .dropdown-menu {
position: absolute;
}
.navbar-expand-sm .navbar-nav .nav-link {
padding-right: 0.5rem;
padding-left: 0.5rem;
}
.navbar-expand-sm .navbar-nav-scroll {
overflow: visible;
}
.navbar-expand-sm .navbar-collapse {
display: flex !important;
flex-basis: auto;
}
.navbar-expand-sm .navbar-toggler {
display: none;
}
.navbar-expand-sm .offcanvas-header {
display: none;
}
.navbar-expand-sm .offcanvas {
position: inherit;
bottom: 0;
z-index: 1000;
flex-grow: 1;
visibility: visible !important;
background-color: transparent;
border-right: 0;
border-left: 0;
transition: none;
transform: none;
}
.navbar-expand-sm .offcanvas-top,
.navbar-expand-sm .offcanvas-bottom {
height: auto;
border-top: 0;
border-bottom: 0;
}
.navbar-expand-sm .offcanvas-body {
display: flex;
flex-grow: 0;
padding: 0;
overflow-y: visible;
}
}
@media (min-width: 768px) {
.navbar-expand-md {
flex-wrap: nowrap;
justify-content: flex-start;
}
.navbar-expand-md .navbar-nav {
flex-direction: row;
}
.navbar-expand-md .navbar-nav .dropdown-menu {
position: absolute;
}
.navbar-expand-md .navbar-nav .nav-link {
padding-right: 0.5rem;
padding-left: 0.5rem;
}
.navbar-expand-md .navbar-nav-scroll {
overflow: visible;
}
.navbar-expand-md .navbar-collapse {
display: flex !important;
flex-basis: auto;
}
.navbar-expand-md .navbar-toggler {
display: none;
}
.navbar-expand-md .offcanvas-header {
display: none;
}
.navbar-expand-md .offcanvas {
position: inherit;
bottom: 0;
z-index: 1000;
flex-grow: 1;
visibility: visible !important;
background-color: transparent;
border-right: 0;
border-left: 0;
transition: none;
transform: none;
}
.navbar-expand-md .offcanvas-top,
.navbar-expand-md .offcanvas-bottom {
height: auto;
border-top: 0;
border-bottom: 0;
}
.navbar-expand-md .offcanvas-body {
display: flex;
flex-grow: 0;
padding: 0;
overflow-y: visible;
}
}
@media (min-width: 992px) {
.navbar-expand-lg {
flex-wrap: nowrap;
justify-content: flex-start;
}
.navbar-expand-lg .navbar-nav {
flex-direction: row;
}
.navbar-expand-lg .navbar-nav .dropdown-menu {
position: absolute;
}
.navbar-expand-lg .navbar-nav .nav-link {
padding-right: 0.5rem;
padding-left: 0.5rem;
}
.navbar-expand-lg .navbar-nav-scroll {
overflow: visible;
}
.navbar-expand-lg .navbar-collapse {
display: flex !important;
flex-basis: auto;
}
.navbar-expand-lg .navbar-toggler {
display: none;
}
.navbar-expand-lg .offcanvas-header {
display: none;
}
.navbar-expand-lg .offcanvas {
position: inherit;
bottom: 0;
z-index: 1000;
flex-grow: 1;
visibility: visible !important;
background-color: transparent;
border-right: 0;
border-left: 0;
transition: none;
transform: none;
}
.navbar-expand-lg .offcanvas-top,
.navbar-expand-lg .offcanvas-bottom {
height: auto;
border-top: 0;
border-bottom: 0;
}
.navbar-expand-lg .offcanvas-body {
display: flex;
flex-grow: 0;
padding: 0;
overflow-y: visible;
}
}
@media (min-width: 1200px) {
.navbar-expand-xl {
flex-wrap: nowrap;
justify-content: flex-start;
}
.navbar-expand-xl .navbar-nav {
flex-direction: row;
}
.navbar-expand-xl .navbar-nav .dropdown-menu {
position: absolute;
}
.navbar-expand-xl .navbar-nav .nav-link {
padding-right: 0.5rem;
padding-left: 0.5rem;
}
.navbar-expand-xl .navbar-nav-scroll {
overflow: visible;
}
.navbar-expand-xl .navbar-collapse {
display: flex !important;
flex-basis: auto;
}
.navbar-expand-xl .navbar-toggler {
display: none;
}
.navbar-expand-xl .offcanvas-header {
display: none;
}
.navbar-expand-xl .offcanvas {
position: inherit;
bottom: 0;
z-index: 1000;
flex-grow: 1;
visibility: visible !important;
background-color: transparent;
border-right: 0;
border-left: 0;
transition: none;
transform: none;
}
.navbar-expand-xl .offcanvas-top,
.navbar-expand-xl .offcanvas-bottom {
height: auto;
border-top: 0;
border-bottom: 0;
}
.navbar-expand-xl .offcanvas-body {
display: flex;
flex-grow: 0;
padding: 0;
overflow-y: visible;
}
}
@media (min-width: 1400px) {
.navbar-expand-xxl {
flex-wrap: nowrap;
justify-content: flex-start;
}
.navbar-expand-xxl .navbar-nav {
flex-direction: row;
}
.navbar-expand-xxl .navbar-nav .dropdown-menu {
position: absolute;
}
.navbar-expand-xxl .navbar-nav .nav-link {
padding-right: 0.5rem;
padding-left: 0.5rem;
}
.navbar-expand-xxl .navbar-nav-scroll {
overflow: visible;
}
.navbar-expand-xxl .navbar-collapse {
display: flex !important;
flex-basis: auto;
}
.navbar-expand-xxl .navbar-toggler {
display: none;
}
.navbar-expand-xxl .offcanvas-header {
display: none;
}
.navbar-expand-xxl .offcanvas {
position: inherit;
bottom: 0;
z-index: 1000;
flex-grow: 1;
visibility: visible !important;
background-color: transparent;
border-right: 0;
border-left: 0;
transition: none;
transform: none;
}
.navbar-expand-xxl .offcanvas-top,
.navbar-expand-xxl .offcanvas-bottom {
height: auto;
border-top: 0;
border-bottom: 0;
}
.navbar-expand-xxl .offcanvas-body {
display: flex;
flex-grow: 0;
padding: 0;
overflow-y: visible;
}
}
.navbar-expand {
flex-wrap: nowrap;
justify-content: flex-start;
}
.navbar-expand .navbar-nav {
flex-direction: row;
}
.navbar-expand .navbar-nav .dropdown-menu {
position: absolute;
}
.navbar-expand .navbar-nav .nav-link {
padding-right: 0.5rem;
padding-left: 0.5rem;
}
.navbar-expand .navbar-nav-scroll {
overflow: visible;
}
.navbar-expand .navbar-collapse {
display: flex !important;
flex-basis: auto;
}
.navbar-expand .navbar-toggler {
display: none;
}
.navbar-expand .offcanvas-header {
display: none;
}
.navbar-expand .offcanvas {
position: inherit;
bottom: 0;
z-index: 1000;
flex-grow: 1;
visibility: visible !important;
background-color: transparent;
border-right: 0;
border-left: 0;
transition: none;
transform: none;
}
.navbar-expand .offcanvas-top,
.navbar-expand .offcanvas-bottom {
height: auto;
border-top: 0;
border-bottom: 0;
}
.navbar-expand .offcanvas-body {
display: flex;
flex-grow: 0;
padding: 0;
overflow-y: visible;
}
.navbar-light .navbar-brand {
color: rgba(0, 0, 0, 0.9);
}
.navbar-light .navbar-brand:hover,
.navbar-light .navbar-brand:focus {
color: rgba(0, 0, 0, 0.9);
}
.navbar-light .navbar-nav .nav-link {
color: rgba(0, 0, 0, 0.55);
}
.navbar-light .navbar-nav .nav-link:hover,
.navbar-light .navbar-nav .nav-link:focus {
color: rgba(0, 0, 0, 0.7);
}
.navbar-light .navbar-nav .nav-link.disabled {
color: rgba(0, 0, 0, 0.3);
}
.navbar-light .navbar-nav .show>.nav-link,
.navbar-light .navbar-nav .nav-link.active {
color: rgba(0, 0, 0, 0.9);
}
.navbar-light .navbar-toggler {
color: rgba(0, 0, 0, 0.55);
border-color: rgba(0, 0, 0, 0.1);
}
.navbar-light .navbar-toggler-icon {
background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 30 30'%3e%3cpath stroke='rgba%280, 0, 0, 0.55%29' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e");
}
.navbar-light .navbar-text {
color: rgba(0, 0, 0, 0.55);
}
.navbar-light .navbar-text a,
.navbar-light .navbar-text a:hover,
.navbar-light .navbar-text a:focus {
color: rgba(0, 0, 0, 0.9);
}
.navbar-dark .navbar-brand {
color: #fff;
}
.navbar-dark .navbar-brand:hover,
.navbar-dark .navbar-brand:focus {
color: #fff;
}
.navbar-dark .navbar-nav .nav-link {
color: rgba(255, 255, 255, 0.55);
}
.navbar-dark .navbar-nav .nav-link:hover,
.navbar-dark .navbar-nav .nav-link:focus {
color: rgba(255, 255, 255, 0.75);
}
.navbar-dark .navbar-nav .nav-link.disabled {
color: rgba(255, 255, 255, 0.25);
}
.navbar-dark .navbar-nav .show>.nav-link,
.navbar-dark .navbar-nav .nav-link.active {
color: #fff;
}
.navbar-dark .navbar-toggler {
color: rgba(255, 255, 255, 0.55);
border-color: rgba(255, 255, 255, 0.1);
}
.navbar-dark .navbar-toggler-icon {
background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 30 30'%3e%3cpath stroke='rgba%28255, 255, 255, 0.55%29' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e");
}
.navbar-dark .navbar-text {
color: rgba(255, 255, 255, 0.55);
}
.navbar-dark .navbar-text a,
.navbar-dark .navbar-text a:hover,
.navbar-dark .navbar-text a:focus {
color: #fff;
}
.card {
position: relative;
display: flex;
flex-direction: column;
min-width: 0;
word-wrap: break-word;
background-color: #fff;
background-clip: border-box;
border: 1px solid rgba(0, 0, 0, 0.125);
border-radius: 0.25rem;
}
.card>hr {
margin-right: 0;
margin-left: 0;
}
.card>.list-group {
border-top: inherit;
border-bottom: inherit;
}
.card>.list-group:first-child {
border-top-width: 0;
border-top-left-radius: calc(0.25rem - 1px);
border-top-right-radius: calc(0.25rem - 1px);
}
.card>.list-group:last-child {
border-bottom-width: 0;
border-bottom-right-radius: calc(0.25rem - 1px);
border-bottom-left-radius: calc(0.25rem - 1px);
}
.card>.card-header+.list-group,
.card>.list-group+.card-footer {
border-top: 0;
}
.card-body {
flex: 1 1 auto;
padding: 1rem 1rem;
}
.card-title {
margin-bottom: 0.5rem;
}
.card-subtitle {
margin-top: -0.25rem;
margin-bottom: 0;
}
.card-text:last-child {
margin-bottom: 0;
}
.card-link+.card-link {
margin-left: 1rem;
}
.card-header {
padding: 0.5rem 1rem;
margin-bottom: 0;
background-color: rgba(0, 0, 0, 0.03);
border-bottom: 1px solid rgba(0, 0, 0, 0.125);
}
.card-header:first-child {
border-radius: calc(0.25rem - 1px) calc(0.25rem - 1px) 0 0;
}
.card-footer {
padding: 0.5rem 1rem;
background-color: rgba(0, 0, 0, 0.03);
border-top: 1px solid rgba(0, 0, 0, 0.125);
}
.card-footer:last-child {
border-radius: 0 0 calc(0.25rem - 1px) calc(0.25rem - 1px);
}
.card-header-tabs {
margin-right: -0.5rem;
margin-bottom: -0.5rem;
margin-left: -0.5rem;
border-bottom: 0;
}
.card-header-pills {
margin-right: -0.5rem;
margin-left: -0.5rem;
}
.card-img-overlay {
position: absolute;
top: 0;
right: 0;
bottom: 0;
left: 0;
padding: 1rem;
border-radius: calc(0.25rem - 1px);
}
.card-img,
.card-img-top,
.card-img-bottom {
width: 100%;
}
.card-img,
.card-img-top {
border-top-left-radius: calc(0.25rem - 1px);
border-top-right-radius: calc(0.25rem - 1px);
}
.card-img,
.card-img-bottom {
border-bottom-right-radius: calc(0.25rem - 1px);
border-bottom-left-radius: calc(0.25rem - 1px);
}
.card-group>.card {
margin-bottom: 0.75rem;
}
@media (min-width: 576px) {
.card-group {
display: flex;
flex-flow: row wrap;
}
.card-group>.card {
flex: 1 0 0%;
margin-bottom: 0;
}
.card-group>.card+.card {
margin-left: 0;
border-left: 0;
}
.card-group>.card:not(:last-child) {
border-top-right-radius: 0;
border-bottom-right-radius: 0;
}
.card-group>.card:not(:last-child) .card-img-top,
.card-group>.card:not(:last-child) .card-header {
border-top-right-radius: 0;
}
.card-group>.card:not(:last-child) .card-img-bottom,
.card-group>.card:not(:last-child) .card-footer {
border-bottom-right-radius: 0;
}
.card-group>.card:not(:first-child) {
border-top-left-radius: 0;
border-bottom-left-radius: 0;
}
.card-group>.card:not(:first-child) .card-img-top,
.card-group>.card:not(:first-child) .card-header {
border-top-left-radius: 0;
}
.card-group>.card:not(:first-child) .card-img-bottom,
.card-group>.card:not(:first-child) .card-footer {
border-bottom-left-radius: 0;
}
}
.accordion-button {
position: relative;
display: flex;
align-items: center;
width: 100%;
padding: 1rem 1.25rem;
font-size: 1rem;
color: #212529;
text-align: left;
background-color: #fff;
border: 0;
border-radius: 0;
overflow-anchor: none;
transition: color 0.15s ease-in-out, background-color 0.15s ease-in-out, border-color 0.15s ease-in-out, box-shadow 0.15s ease-in-out, border-radius 0.15s ease;
}
@media (prefers-reduced-motion: reduce) {
.accordion-button {
transition: none;
}
}
.accordion-button:not(.collapsed) {
color: #0c63e4;
background-color: #e7f1ff;
box-shadow: inset 0 -1px 0 rgba(0, 0, 0, 0.125);
}
.accordion-button:not(.collapsed)::after {
background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%230c63e4'%3e%3cpath fill-rule='evenodd' d='M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e");
transform: rotate(-180deg);
}
.accordion-button::after {
flex-shrink: 0;
width: 1.25rem;
height: 1.25rem;
margin-left: auto;
content: "";
background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23212529'%3e%3cpath fill-rule='evenodd' d='M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e");
background-repeat: no-repeat;
background-size: 1.25rem;
transition: transform 0.2s ease-in-out;
}
@media (prefers-reduced-motion: reduce) {
.accordion-button::after {
transition: none;
}
}
.accordion-button:hover {
z-index: 2;
}
.accordion-button:focus {
z-index: 3;
border-color: #86b7fe;
outline: 0;
box-shadow: 0 0 0 0.25rem rgba(13, 110, 253, 0.25);
}
.accordion-header {
margin-bottom: 0;
}
.accordion-item {
background-color: #fff;
border: 1px solid rgba(0, 0, 0, 0.125);
}
.accordion-item:first-of-type {
border-top-left-radius: 0.25rem;
border-top-right-radius: 0.25rem;
}
.accordion-item:first-of-type .accordion-button {
border-top-left-radius: calc(0.25rem - 1px);
border-top-right-radius: calc(0.25rem - 1px);
}
.accordion-item:not(:first-of-type) {
border-top: 0;
}
.accordion-item:last-of-type {
border-bottom-right-radius: 0.25rem;
border-bottom-left-radius: 0.25rem;
}
.accordion-item:last-of-type .accordion-button.collapsed {
border-bottom-right-radius: calc(0.25rem - 1px);
border-bottom-left-radius: calc(0.25rem - 1px);
}
.accordion-item:last-of-type .accordion-collapse {
border-bottom-right-radius: 0.25rem;
border-bottom-left-radius: 0.25rem;
}
.accordion-body {
padding: 1rem 1.25rem;
}
.accordion-flush .accordion-collapse {
border-width: 0;
}
.accordion-flush .accordion-item {
border-right: 0;
border-left: 0;
border-radius: 0;
}
.accordion-flush .accordion-item:first-child {
border-top: 0;
}
.accordion-flush .accordion-item:last-child {
border-bottom: 0;
}
.accordion-flush .accordion-item .accordion-button {
border-radius: 0;
}
.breadcrumb {
display: flex;
flex-wrap: wrap;
padding: 0 0;
margin-bottom: 1rem;
list-style: none;
}
.breadcrumb-item+.breadcrumb-item {
padding-left: 0.5rem;
}
.breadcrumb-item+.breadcrumb-item::before {
float: left;
padding-right: 0.5rem;
color: #6c757d;
content: var(--bs-breadcrumb-divider, "/")
/* rtl: var(--bs-breadcrumb-divider, "/") */
;
}
.breadcrumb-item.active {
color: #6c757d;
}
.pagination {
display: flex;
padding-left: 0;
list-style: none;
}
.page-link {
position: relative;
display: block;
color: #0d6efd;
text-decoration: none;
background-color: #fff;
border: 1px solid #dee2e6;
transition: color 0.15s ease-in-out, background-color 0.15s ease-in-out, border-color 0.15s ease-in-out, box-shadow 0.15s ease-in-out;
}
@media (prefers-reduced-motion: reduce) {
.page-link {
transition: none;
}
}
.page-link:hover {
z-index: 2;
color: #0a58ca;
background-color: #e9ecef;
border-color: #dee2e6;
}
.page-link:focus {
z-index: 3;
color: #0a58ca;
background-color: #e9ecef;
outline: 0;
box-shadow: 0 0 0 0.25rem rgba(13, 110, 253, 0.25);
}
.page-item:not(:first-child) .page-link {
margin-left: -1px;
}
.page-item.active .page-link {
z-index: 3;
color: #fff;
background-color: #0d6efd;
border-color: #0d6efd;
}
.page-item.disabled .page-link {
color: #6c757d;
pointer-events: none;
background-color: #fff;
border-color: #dee2e6;
}
.page-link {
padding: 0.375rem 0.75rem;
}
.page-item:first-child .page-link {
border-top-left-radius: 0.25rem;
border-bottom-left-radius: 0.25rem;
}
.page-item:last-child .page-link {
border-top-right-radius: 0.25rem;
border-bottom-right-radius: 0.25rem;
}
.pagination-lg .page-link {
padding: 0.75rem 1.5rem;
font-size: 1.25rem;
}
.pagination-lg .page-item:first-child .page-link {
border-top-left-radius: 0.3rem;
border-bottom-left-radius: 0.3rem;
}
.pagination-lg .page-item:last-child .page-link {
border-top-right-radius: 0.3rem;
border-bottom-right-radius: 0.3rem;
}
.pagination-sm .page-link {
padding: 0.25rem 0.5rem;
font-size: 0.875rem;
}
.pagination-sm .page-item:first-child .page-link {
border-top-left-radius: 0.2rem;
border-bottom-left-radius: 0.2rem;
}
.pagination-sm .page-item:last-child .page-link {
border-top-right-radius: 0.2rem;
border-bottom-right-radius: 0.2rem;
}
.badge {
display: inline-block;
padding: 0.35em 0.65em;
font-size: 0.75em;
font-weight: 700;
line-height: 1;
color: #fff;
text-align: center;
white-space: nowrap;
vertical-align: baseline;
border-radius: 0.25rem;
}
.badge:empty {
display: none;
}
.btn .badge {
position: relative;
top: -1px;
}
.alert {
position: relative;
padding: 1rem 1rem;
margin-bottom: 1rem;
border: 1px solid transparent;
border-radius: 0.25rem;
}
.alert-heading {
color: inherit;
}
.alert-link {
font-weight: 700;
}
.alert-dismissible {
padding-right: 3rem;
}
.alert-dismissible .btn-close {
position: absolute;
top: 0;
right: 0;
z-index: 2;
padding: 1.25rem 1rem;
}
.alert-primary {
color: #084298;
background-color: #cfe2ff;
border-color: #b6d4fe;
}
.alert-primary .alert-link {
color: #06357a;
}
.alert-secondary {
color: #41464b;
background-color: #e2e3e5;
border-color: #d3d6d8;
}
.alert-secondary .alert-link {
color: #34383c;
}
.alert-success {
color: #0f5132;
background-color: #d1e7dd;
border-color: #badbcc;
}
.alert-success .alert-link {
color: #0c4128;
}
.alert-info {
color: #055160;
background-color: #cff4fc;
border-color: #b6effb;
}
.alert-info .alert-link {
color: #04414d;
}
.alert-warning {
color: #664d03;
background-color: #fff3cd;
border-color: #ffecb5;
}
.alert-warning .alert-link {
color: #523e02;
}
.alert-danger {
color: #842029;
background-color: #f8d7da;
border-color: #f5c2c7;
}
.alert-danger .alert-link {
color: #6a1a21;
}
.alert-light {
color: #636464;
background-color: #fefefe;
border-color: #fdfdfe;
}
.alert-light .alert-link {
color: #4f5050;
}
.alert-dark {
color: #141619;
background-color: #d3d3d4;
border-color: #bcbebf;
}
.alert-dark .alert-link {
color: #101214;
}
@-webkit-keyframes progress-bar-stripes {
0% {
background-position-x: 1rem;
}
}
@keyframes progress-bar-stripes {
0% {
background-position-x: 1rem;
}
}
.progress {
display: flex;
height: 1rem;
overflow: hidden;
font-size: 0.75rem;
background-color: #e9ecef;
border-radius: 0.25rem;
}
.progress-bar {
display: flex;
flex-direction: column;
justify-content: center;
overflow: hidden;
color: #fff;
text-align: center;
white-space: nowrap;
background-color: #0d6efd;
transition: width 0.6s ease;
}
@media (prefers-reduced-motion: reduce) {
.progress-bar {
transition: none;
}
}
.progress-bar-striped {
background-image: linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent);
background-size: 1rem 1rem;
}
.progress-bar-animated {
-webkit-animation: 1s linear infinite progress-bar-stripes;
animation: 1s linear infinite progress-bar-stripes;
}
@media (prefers-reduced-motion: reduce) {
.progress-bar-animated {
-webkit-animation: none;
animation: none;
}
}
.list-group {
display: flex;
flex-direction: column;
padding-left: 0;
margin-bottom: 0;
border-radius: 0.25rem;
}
.list-group-numbered {
list-style-type: none;
counter-reset: section;
}
.list-group-numbered>li::before {
content: counters(section, ".") ". ";
counter-increment: section;
}
.list-group-item-action {
width: 100%;
color: #495057;
text-align: inherit;
}
.list-group-item-action:hover,
.list-group-item-action:focus {
z-index: 1;
color: #495057;
text-decoration: none;
background-color: #f8f9fa;
}
.list-group-item-action:active {
color: #212529;
background-color: #e9ecef;
}
.list-group-item {
position: relative;
display: block;
padding: 0.5rem 1rem;
color: #212529;
text-decoration: none;
background-color: #fff;
border: 1px solid rgba(0, 0, 0, 0.125);
}
.list-group-item:first-child {
border-top-left-radius: inherit;
border-top-right-radius: inherit;
}
.list-group-item:last-child {
border-bottom-right-radius: inherit;
border-bottom-left-radius: inherit;
}
.list-group-item.disabled,
.list-group-item:disabled {
color: #6c757d;
pointer-events: none;
background-color: #fff;
}
.list-group-item.active {
z-index: 2;
color: #fff;
background-color: #0d6efd;
border-color: #0d6efd;
}
.list-group-item+.list-group-item {
border-top-width: 0;
}
.list-group-item+.list-group-item.active {
margin-top: -1px;
border-top-width: 1px;
}
.list-group-horizontal {
flex-direction: row;
}
.list-group-horizontal>.list-group-item:first-child {
border-bottom-left-radius: 0.25rem;
border-top-right-radius: 0;
}
.list-group-horizontal>.list-group-item:last-child {
border-top-right-radius: 0.25rem;
border-bottom-left-radius: 0;
}
.list-group-horizontal>.list-group-item.active {
margin-top: 0;
}
.list-group-horizontal>.list-group-item+.list-group-item {
border-top-width: 1px;
border-left-width: 0;
}
.list-group-horizontal>.list-group-item+.list-group-item.active {
margin-left: -1px;
border-left-width: 1px;
}
@media (min-width: 576px) {
.list-group-horizontal-sm {
flex-direction: row;
}
.list-group-horizontal-sm>.list-group-item:first-child {
border-bottom-left-radius: 0.25rem;
border-top-right-radius: 0;
}
.list-group-horizontal-sm>.list-group-item:last-child {
border-top-right-radius: 0.25rem;
border-bottom-left-radius: 0;
}
.list-group-horizontal-sm>.list-group-item.active {
margin-top: 0;
}
.list-group-horizontal-sm>.list-group-item+.list-group-item {
border-top-width: 1px;
border-left-width: 0;
}
.list-group-horizontal-sm>.list-group-item+.list-group-item.active {
margin-left: -1px;
border-left-width: 1px;
}
}
@media (min-width: 768px) {
.list-group-horizontal-md {
flex-direction: row;
}
.list-group-horizontal-md>.list-group-item:first-child {
border-bottom-left-radius: 0.25rem;
border-top-right-radius: 0;
}
.list-group-horizontal-md>.list-group-item:last-child {
border-top-right-radius: 0.25rem;
border-bottom-left-radius: 0;
}
.list-group-horizontal-md>.list-group-item.active {
margin-top: 0;
}
.list-group-horizontal-md>.list-group-item+.list-group-item {
border-top-width: 1px;
border-left-width: 0;
}
.list-group-horizontal-md>.list-group-item+.list-group-item.active {
margin-left: -1px;
border-left-width: 1px;
}
}
@media (min-width: 992px) {
.list-group-horizontal-lg {
flex-direction: row;
}
.list-group-horizontal-lg>.list-group-item:first-child {
border-bottom-left-radius: 0.25rem;
border-top-right-radius: 0;
}
.list-group-horizontal-lg>.list-group-item:last-child {
border-top-right-radius: 0.25rem;
border-bottom-left-radius: 0;
}
.list-group-horizontal-lg>.list-group-item.active {
margin-top: 0;
}
.list-group-horizontal-lg>.list-group-item+.list-group-item {
border-top-width: 1px;
border-left-width: 0;
}
.list-group-horizontal-lg>.list-group-item+.list-group-item.active {
margin-left: -1px;
border-left-width: 1px;
}
}
@media (min-width: 1200px) {
.list-group-horizontal-xl {
flex-direction: row;
}
.list-group-horizontal-xl>.list-group-item:first-child {
border-bottom-left-radius: 0.25rem;
border-top-right-radius: 0;
}
.list-group-horizontal-xl>.list-group-item:last-child {
border-top-right-radius: 0.25rem;
border-bottom-left-radius: 0;
}
.list-group-horizontal-xl>.list-group-item.active {
margin-top: 0;
}
.list-group-horizontal-xl>.list-group-item+.list-group-item {
border-top-width: 1px;
border-left-width: 0;
}
.list-group-horizontal-xl>.list-group-item+.list-group-item.active {
margin-left: -1px;
border-left-width: 1px;
}
}
@media (min-width: 1400px) {
.list-group-horizontal-xxl {
flex-direction: row;
}
.list-group-horizontal-xxl>.list-group-item:first-child {
border-bottom-left-radius: 0.25rem;
border-top-right-radius: 0;
}
.list-group-horizontal-xxl>.list-group-item:last-child {
border-top-right-radius: 0.25rem;
border-bottom-left-radius: 0;
}
.list-group-horizontal-xxl>.list-group-item.active {
margin-top: 0;
}
.list-group-horizontal-xxl>.list-group-item+.list-group-item {
border-top-width: 1px;
border-left-width: 0;
}
.list-group-horizontal-xxl>.list-group-item+.list-group-item.active {
margin-left: -1px;
border-left-width: 1px;
}
}
.list-group-flush {
border-radius: 0;
}
.list-group-flush>.list-group-item {
border-width: 0 0 1px;
}
.list-group-flush>.list-group-item:last-child {
border-bottom-width: 0;
}
.list-group-item-primary {
color: #084298;
background-color: #cfe2ff;
}
.list-group-item-primary.list-group-item-action:hover,
.list-group-item-primary.list-group-item-action:focus {
color: #084298;
background-color: #bacbe6;
}
.list-group-item-primary.list-group-item-action.active {
color: #fff;
background-color: #084298;
border-color: #084298;
}
.list-group-item-secondary {
color: #41464b;
background-color: #e2e3e5;
}
.list-group-item-secondary.list-group-item-action:hover,
.list-group-item-secondary.list-group-item-action:focus {
color: #41464b;
background-color: #cbccce;
}
.list-group-item-secondary.list-group-item-action.active {
color: #fff;
background-color: #41464b;
border-color: #41464b;
}
.list-group-item-success {
color: #0f5132;
background-color: #d1e7dd;
}
.list-group-item-success.list-group-item-action:hover,
.list-group-item-success.list-group-item-action:focus {
color: #0f5132;
background-color: #bcd0c7;
}
.list-group-item-success.list-group-item-action.active {
color: #fff;
background-color: #0f5132;
border-color: #0f5132;
}
.list-group-item-info {
color: #055160;
background-color: #cff4fc;
}
.list-group-item-info.list-group-item-action:hover,
.list-group-item-info.list-group-item-action:focus {
color: #055160;
background-color: #badce3;
}
.list-group-item-info.list-group-item-action.active {
color: #fff;
background-color: #055160;
border-color: #055160;
}
.list-group-item-warning {
color: #664d03;
background-color: #fff3cd;
}
.list-group-item-warning.list-group-item-action:hover,
.list-group-item-warning.list-group-item-action:focus {
color: #664d03;
background-color: #e6dbb9;
}
.list-group-item-warning.list-group-item-action.active {
color: #fff;
background-color: #664d03;
border-color: #664d03;
}
.list-group-item-danger {
color: #842029;
background-color: #f8d7da;
}
.list-group-item-danger.list-group-item-action:hover,
.list-group-item-danger.list-group-item-action:focus {
color: #842029;
background-color: #dfc2c4;
}
.list-group-item-danger.list-group-item-action.active {
color: #fff;
background-color: #842029;
border-color: #842029;
}
.list-group-item-light {
color: #636464;
background-color: #fefefe;
}
.list-group-item-light.list-group-item-action:hover,
.list-group-item-light.list-group-item-action:focus {
color: #636464;
background-color: #e5e5e5;
}
.list-group-item-light.list-group-item-action.active {
color: #fff;
background-color: #636464;
border-color: #636464;
}
.list-group-item-dark {
color: #141619;
background-color: #d3d3d4;
}
.list-group-item-dark.list-group-item-action:hover,
.list-group-item-dark.list-group-item-action:focus {
color: #141619;
background-color: #bebebf;
}
.list-group-item-dark.list-group-item-action.active {
color: #fff;
background-color: #141619;
border-color: #141619;
}
.btn-close {
box-sizing: content-box;
width: 1em;
height: 1em;
padding: 0.25em 0.25em;
color: #000;
background: transparent url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23000'%3e%3cpath d='M.293.293a1 1 0 011.414 0L8 6.586 14.293.293a1 1 0 111.414 1.414L9.414 8l6.293 6.293a1 1 0 01-1.414 1.414L8 9.414l-6.293 6.293a1 1 0 01-1.414-1.414L6.586 8 .293 1.707a1 1 0 010-1.414z'/%3e%3c/svg%3e") center/1em auto no-repeat;
border: 0;
border-radius: 0.25rem;
opacity: 0.5;
}
.btn-close:hover {
color: #000;
text-decoration: none;
opacity: 0.75;
}
.btn-close:focus {
outline: 0;
box-shadow: 0 0 0 0.25rem rgba(13, 110, 253, 0.25);
opacity: 1;
}
.btn-close:disabled,
.btn-close.disabled {
pointer-events: none;
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
opacity: 0.25;
}
.btn-close-white {
filter: invert(1) grayscale(100%) brightness(200%);
}
.toast {
width: 350px;
max-width: 100%;
font-size: 0.875rem;
pointer-events: auto;
background-color: rgba(255, 255, 255, 0.85);
background-clip: padding-box;
border: 1px solid rgba(0, 0, 0, 0.1);
box-shadow: 0 0.5rem 1rem rgba(0, 0, 0, 0.15);
border-radius: 0.25rem;
}
.toast.showing {
opacity: 0;
}
.toast:not(.show) {
display: none;
}
.toast-container {
width: -webkit-max-content;
width: -moz-max-content;
width: max-content;
max-width: 100%;
pointer-events: none;
}
.toast-container> :not(:last-child) {
margin-bottom: 0.75rem;
}
.toast-header {
display: flex;
align-items: center;
padding: 0.5rem 0.75rem;
color: #6c757d;
background-color: rgba(255, 255, 255, 0.85);
background-clip: padding-box;
border-bottom: 1px solid rgba(0, 0, 0, 0.05);
border-top-left-radius: calc(0.25rem - 1px);
border-top-right-radius: calc(0.25rem - 1px);
}
.toast-header .btn-close {
margin-right: -0.375rem;
margin-left: 0.75rem;
}
.toast-body {
padding: 0.75rem;
word-wrap: break-word;
}
.modal {
position: fixed;
top: 0;
left: 0;
z-index: 1055;
display: none;
width: 100%;
height: 100%;
overflow-x: hidden;
overflow-y: auto;
outline: 0;
}
.modal-dialog {
position: relative;
width: auto;
margin: 0.5rem;
pointer-events: none;
}
.modal.fade .modal-dialog {
transition: transform 0.3s ease-out;
transform: translate(0, -50px);
}
@media (prefers-reduced-motion: reduce) {
.modal.fade .modal-dialog {
transition: none;
}
}
.modal.show .modal-dialog {
transform: none;
}
.modal.modal-static .modal-dialog {
transform: scale(1.02);
}
.modal-dialog-scrollable {
height: calc(100% - 1rem);
}
.modal-dialog-scrollable .modal-content {
max-height: 100%;
overflow: hidden;
}
.modal-dialog-scrollable .modal-body {
overflow-y: auto;
}
.modal-dialog-centered {
display: flex;
align-items: center;
min-height: calc(100% - 1rem);
}
.modal-content {
position: relative;
display: flex;
flex-direction: column;
width: 100%;
pointer-events: auto;
background-color: #fff;
background-clip: padding-box;
border: 1px solid rgba(0, 0, 0, 0.2);
border-radius: 0.3rem;
outline: 0;
}
.modal-backdrop {
position: fixed;
top: 0;
left: 0;
z-index: 1050;
width: 100vw;
height: 100vh;
background-color: #000;
}
.modal-backdrop.fade {
opacity: 0;
}
.modal-backdrop.show {
opacity: 0.5;
}
.modal-header {
display: flex;
flex-shrink: 0;
align-items: center;
justify-content: space-between;
padding: 1rem 1rem;
border-bottom: 1px solid #dee2e6;
border-top-left-radius: calc(0.3rem - 1px);
border-top-right-radius: calc(0.3rem - 1px);
}
.modal-header .btn-close {
padding: 0.5rem 0.5rem;
margin: -0.5rem -0.5rem -0.5rem auto;
}
.modal-title {
margin-bottom: 0;
line-height: 1.5;
}
.modal-body {
position: relative;
flex: 1 1 auto;
padding: 1rem;
}
.modal-footer {
display: flex;
flex-wrap: wrap;
flex-shrink: 0;
align-items: center;
justify-content: flex-end;
padding: 0.75rem;
border-top: 1px solid #dee2e6;
border-bottom-right-radius: calc(0.3rem - 1px);
border-bottom-left-radius: calc(0.3rem - 1px);
}
.modal-footer>* {
margin: 0.25rem;
}
@media (min-width: 576px) {
.modal-dialog {
max-width: 500px;
margin: 1.75rem auto;
}
.modal-dialog-scrollable {
height: calc(100% - 3.5rem);
}
.modal-dialog-centered {
min-height: calc(100% - 3.5rem);
}
.modal-sm {
max-width: 300px;
}
}
@media (min-width: 992px) {
.modal-lg,
.modal-xl {
max-width: 800px;
}
}
@media (min-width: 1200px) {
.modal-xl {
max-width: 1140px;
}
}
.modal-fullscreen {
width: 100vw;
max-width: none;
height: 100%;
margin: 0;
}
.modal-fullscreen .modal-content {
height: 100%;
border: 0;
border-radius: 0;
}
.modal-fullscreen .modal-header {
border-radius: 0;
}
.modal-fullscreen .modal-body {
overflow-y: auto;
}
.modal-fullscreen .modal-footer {
border-radius: 0;
}
@media (max-width: 575.98px) {
.modal-fullscreen-sm-down {
width: 100vw;
max-width: none;
height: 100%;
margin: 0;
}
.modal-fullscreen-sm-down .modal-content {
height: 100%;
border: 0;
border-radius: 0;
}
.modal-fullscreen-sm-down .modal-header {
border-radius: 0;
}
.modal-fullscreen-sm-down .modal-body {
overflow-y: auto;
}
.modal-fullscreen-sm-down .modal-footer {
border-radius: 0;
}
}
@media (max-width: 767.98px) {
.modal-fullscreen-md-down {
width: 100vw;
max-width: none;
height: 100%;
margin: 0;
}
.modal-fullscreen-md-down .modal-content {
height: 100%;
border: 0;
border-radius: 0;
}
.modal-fullscreen-md-down .modal-header {
border-radius: 0;
}
.modal-fullscreen-md-down .modal-body {
overflow-y: auto;
}
.modal-fullscreen-md-down .modal-footer {
border-radius: 0;
}
}
@media (max-width: 991.98px) {
.modal-fullscreen-lg-down {
width: 100vw;
max-width: none;
height: 100%;
margin: 0;
}
.modal-fullscreen-lg-down .modal-content {
height: 100%;
border: 0;
border-radius: 0;
}
.modal-fullscreen-lg-down .modal-header {
border-radius: 0;
}
.modal-fullscreen-lg-down .modal-body {
overflow-y: auto;
}
.modal-fullscreen-lg-down .modal-footer {
border-radius: 0;
}
}
@media (max-width: 1199.98px) {
.modal-fullscreen-xl-down {
width: 100vw;
max-width: none;
height: 100%;
margin: 0;
}
.modal-fullscreen-xl-down .modal-content {
height: 100%;
border: 0;
border-radius: 0;
}
.modal-fullscreen-xl-down .modal-header {
border-radius: 0;
}
.modal-fullscreen-xl-down .modal-body {
overflow-y: auto;
}
.modal-fullscreen-xl-down .modal-footer {
border-radius: 0;
}
}
@media (max-width: 1399.98px) {
.modal-fullscreen-xxl-down {
width: 100vw;
max-width: none;
height: 100%;
margin: 0;
}
.modal-fullscreen-xxl-down .modal-content {
height: 100%;
border: 0;
border-radius: 0;
}
.modal-fullscreen-xxl-down .modal-header {
border-radius: 0;
}
.modal-fullscreen-xxl-down .modal-body {
overflow-y: auto;
}
.modal-fullscreen-xxl-down .modal-footer {
border-radius: 0;
}
}
.tooltip {
position: absolute;
z-index: 1080;
display: block;
margin: 0;
font-family: var(--bs-font-sans-serif);
font-style: normal;
font-weight: 400;
line-height: 1.5;
text-align: left;
text-align: start;
text-decoration: none;
text-shadow: none;
text-transform: none;
letter-spacing: normal;
word-break: normal;
word-spacing: normal;
white-space: normal;
line-break: auto;
font-size: 0.875rem;
word-wrap: break-word;
opacity: 0;
}
.tooltip.show {
opacity: 0.9;
}
.tooltip .tooltip-arrow {
position: absolute;
display: block;
width: 0.8rem;
height: 0.4rem;
}
.tooltip .tooltip-arrow::before {
position: absolute;
content: "";
border-color: transparent;
border-style: solid;
}
.bs-tooltip-top,
.bs-tooltip-auto[data-popper-placement^=top] {
padding: 0.4rem 0;
}
.bs-tooltip-top .tooltip-arrow,
.bs-tooltip-auto[data-popper-placement^=top] .tooltip-arrow {
bottom: 0;
}
.bs-tooltip-top .tooltip-arrow::before,
.bs-tooltip-auto[data-popper-placement^=top] .tooltip-arrow::before {
top: -1px;
border-width: 0.4rem 0.4rem 0;
border-top-color: #000;
}
.bs-tooltip-end,
.bs-tooltip-auto[data-popper-placement^=right] {
padding: 0 0.4rem;
}
.bs-tooltip-end .tooltip-arrow,
.bs-tooltip-auto[data-popper-placement^=right] .tooltip-arrow {
left: 0;
width: 0.4rem;
height: 0.8rem;
}
.bs-tooltip-end .tooltip-arrow::before,
.bs-tooltip-auto[data-popper-placement^=right] .tooltip-arrow::before {
right: -1px;
border-width: 0.4rem 0.4rem 0.4rem 0;
border-right-color: #000;
}
.bs-tooltip-bottom,
.bs-tooltip-auto[data-popper-placement^=bottom] {
padding: 0.4rem 0;
}
.bs-tooltip-bottom .tooltip-arrow,
.bs-tooltip-auto[data-popper-placement^=bottom] .tooltip-arrow {
top: 0;
}
.bs-tooltip-bottom .tooltip-arrow::before,
.bs-tooltip-auto[data-popper-placement^=bottom] .tooltip-arrow::before {
bottom: -1px;
border-width: 0 0.4rem 0.4rem;
border-bottom-color: #000;
}
.bs-tooltip-start,
.bs-tooltip-auto[data-popper-placement^=left] {
padding: 0 0.4rem;
}
.bs-tooltip-start .tooltip-arrow,
.bs-tooltip-auto[data-popper-placement^=left] .tooltip-arrow {
right: 0;
width: 0.4rem;
height: 0.8rem;
}
.bs-tooltip-start .tooltip-arrow::before,
.bs-tooltip-auto[data-popper-placement^=left] .tooltip-arrow::before {
left: -1px;
border-width: 0.4rem 0 0.4rem 0.4rem;
border-left-color: #000;
}
.tooltip-inner {
max-width: 200px;
padding: 0.25rem 0.5rem;
color: #fff;
text-align: center;
background-color: #000;
border-radius: 0.25rem;
}
.popover {
position: absolute;
top: 0;
left: 0
/* rtl:ignore */
;
z-index: 1070;
display: block;
max-width: 276px;
font-family: var(--bs-font-sans-serif);
font-style: normal;
font-weight: 400;
line-height: 1.5;
text-align: left;
text-align: start;
text-decoration: none;
text-shadow: none;
text-transform: none;
letter-spacing: normal;
word-break: normal;
word-spacing: normal;
white-space: normal;
line-break: auto;
font-size: 0.875rem;
word-wrap: break-word;
background-color: #fff;
background-clip: padding-box;
border: 1px solid rgba(0, 0, 0, 0.2);
border-radius: 0.3rem;
}
.popover .popover-arrow {
position: absolute;
display: block;
width: 1rem;
height: 0.5rem;
}
.popover .popover-arrow::before,
.popover .popover-arrow::after {
position: absolute;
display: block;
content: "";
border-color: transparent;
border-style: solid;
}
.bs-popover-top>.popover-arrow,
.bs-popover-auto[data-popper-placement^=top]>.popover-arrow {
bottom: calc(-0.5rem - 1px);
}
.bs-popover-top>.popover-arrow::before,
.bs-popover-auto[data-popper-placement^=top]>.popover-arrow::before {
bottom: 0;
border-width: 0.5rem 0.5rem 0;
border-top-color: rgba(0, 0, 0, 0.25);
}
.bs-popover-top>.popover-arrow::after,
.bs-popover-auto[data-popper-placement^=top]>.popover-arrow::after {
bottom: 1px;
border-width: 0.5rem 0.5rem 0;
border-top-color: #fff;
}
.flex-xxl-fill {
flex: 1 1 auto !important;
}
.flex-xxl-row {
flex-direction: row !important;
}
.flex-xxl-column {
flex-direction: column !important;
}
.flex-xxl-row-reverse {
flex-direction: row-reverse !important;
}
.flex-xxl-column-reverse {
flex-direction: column-reverse !important;
}
.flex-xxl-grow-0 {
flex-grow: 0 !important;
}
.flex-xxl-grow-1 {
flex-grow: 1 !important;
}
.flex-xxl-shrink-0 {
flex-shrink: 0 !important;
}
.flex-xxl-shrink-1 {
flex-shrink: 1 !important;
}
.flex-xxl-wrap {
flex-wrap: wrap !important;
}
.flex-xxl-nowrap {
flex-wrap: nowrap !important;
}
.flex-xxl-wrap-reverse {
flex-wrap: wrap-reverse !important;
}
.gap-xxl-0 {
gap: 0 !important;
}
.gap-xxl-1 {
gap: 0.25rem !important;
}
.gap-xxl-2 {
gap: 0.5rem !important;
}
.gap-xxl-3 {
gap: 1rem !important;
}
.gap-xxl-4 {
gap: 1.5rem !important;
}
.gap-xxl-5 {
gap: 3rem !important;
}
.justify-content-xxl-start {
justify-content: flex-start !important;
}
.justify-content-xxl-end {
justify-content: flex-end !important;
}
.justify-content-xxl-center {
justify-content: center !important;
}
.justify-content-xxl-between {
justify-content: space-between !important;
}
.justify-content-xxl-around {
justify-content: space-around !important;
}
.justify-content-xxl-evenly {
justify-content: space-evenly !important;
}
.align-items-xxl-start {
align-items: flex-start !important;
}
.align-items-xxl-end {
align-items: flex-end !important;
}
.align-items-xxl-center {
align-items: center !important;
}
.align-items-xxl-baseline {
align-items: baseline !important;
}
.align-items-xxl-stretch {
align-items: stretch !important;
}
.align-content-xxl-start {
align-content: flex-start !important;
}
.align-content-xxl-end {
align-content: flex-end !important;
}
.align-content-xxl-center {
align-content: center !important;
}
.align-content-xxl-between {
align-content: space-between !important;
}
.align-content-xxl-around {
align-content: space-around !important;
}
.align-content-xxl-stretch {
align-content: stretch !important;
}
.align-self-xxl-auto {
align-self: auto !important;
}
.align-self-xxl-start {
align-self: flex-start !important;
}
.align-self-xxl-end {
align-self: flex-end !important;
}
.align-self-xxl-center {
align-self: center !important;
}
.align-self-xxl-baseline {
align-self: baseline !important;
}
.align-self-xxl-stretch {
align-self: stretch !important;
}
.order-xxl-first {
order: -1 !important;
}
.order-xxl-0 {
order: 0 !important;
}
.order-xxl-1 {
order: 1 !important;
}
.order-xxl-2 {
order: 2 !important;
}
.order-xxl-3 {
order: 3 !important;
}
.order-xxl-4 {
order: 4 !important;
}
.order-xxl-5 {
order: 5 !important;
}
.order-xxl-last {
order: 6 !important;
}
.m-xxl-0 {
margin: 0 !important;
}
.m-xxl-1 {
margin: 0.25rem !important;
}
.m-xxl-2 {
margin: 0.5rem !important;
}
.m-xxl-3 {
margin: 1rem !important;
}
.m-xxl-4 {
margin: 1.5rem !important;
}
.m-xxl-5 {
margin: 3rem !important;
}
.m-xxl-auto {
margin: auto !important;
}
.mx-xxl-0 {
margin-right: 0 !important;
margin-left: 0 !important;
}
.mx-xxl-1 {
margin-right: 0.25rem !important;
margin-left: 0.25rem !important;
}
.mx-xxl-2 {
margin-right: 0.5rem !important;
margin-left: 0.5rem !important;
}
.mx-xxl-3 {
margin-right: 1rem !important;
margin-left: 1rem !important;
}
.mx-xxl-4 {
margin-right: 1.5rem !important;
margin-left: 1.5rem !important;
}
.mx-xxl-5 {
margin-right: 3rem !important;
margin-left: 3rem !important;
}
.mx-xxl-auto {
margin-right: auto !important;
margin-left: auto !important;
}
.my-xxl-0 {
margin-top: 0 !important;
margin-bottom: 0 !important;
}
.my-xxl-1 {
margin-top: 0.25rem !important;
margin-bottom: 0.25rem !important;
}
.my-xxl-2 {
margin-top: 0.5rem !important;
margin-bottom: 0.5rem !important;
}
.my-xxl-3 {
margin-top: 1rem !important;
margin-bottom: 1rem !important;
}
.my-xxl-4 {
margin-top: 1.5rem !important;
margin-bottom: 1.5rem !important;
}
.my-xxl-5 {
margin-top: 3rem !important;
margin-bottom: 3rem !important;
}
.my-xxl-auto {
margin-top: auto !important;
margin-bottom: auto !important;
}
.mt-xxl-0 {
margin-top: 0 !important;
}
.mt-xxl-1 {
margin-top: 0.25rem !important;
}
.mt-xxl-2 {
margin-top: 0.5rem !important;
}
.mt-xxl-3 {
margin-top: 1rem !important;
}
.mt-xxl-4 {
margin-top: 1.5rem !important;
}
.mt-xxl-5 {
margin-top: 3rem !important;
}
.mt-xxl-auto {
margin-top: auto !important;
}
.me-xxl-0 {
margin-right: 0 !important;
}
.me-xxl-1 {
margin-right: 0.25rem !important;
}
.me-xxl-2 {
margin-right: 0.5rem !important;
}
.me-xxl-3 {
margin-right: 1rem !important;
}
.me-xxl-4 {
margin-right: 1.5rem !important;
}
.me-xxl-5 {
margin-right: 3rem !important;
}
.me-xxl-auto {
margin-right: auto !important;
}
.mb-xxl-0 {
margin-bottom: 0 !important;
}
.mb-xxl-1 {
margin-bottom: 0.25rem !important;
}
.mb-xxl-2 {
margin-bottom: 0.5rem !important;
}
.mb-xxl-3 {
margin-bottom: 1rem !important;
}
.mb-xxl-4 {
margin-bottom: 1.5rem !important;
}
.mb-xxl-5 {
margin-bottom: 3rem !important;
}
.mb-xxl-auto {
margin-bottom: auto !important;
}
.ms-xxl-0 {
margin-left: 0 !important;
}
.ms-xxl-1 {
margin-left: 0.25rem !important;
}
.ms-xxl-2 {
margin-left: 0.5rem !important;
}
.ms-xxl-3 {
margin-left: 1rem !important;
}
.ms-xxl-4 {
margin-left: 1.5rem !important;
}
.ms-xxl-5 {
margin-left: 3rem !important;
}
.ms-xxl-auto {
margin-left: auto !important;
}
.p-xxl-0 {
padding: 0 !important;
}
.p-xxl-1 {
padding: 0.25rem !important;
}
.p-xxl-2 {
padding: 0.5rem !important;
}
.p-xxl-3 {
padding: 1rem !important;
}
.p-xxl-4 {
padding: 1.5rem !important;
}
.p-xxl-5 {
padding: 3rem !important;
}
.px-xxl-0 {
padding-right: 0 !important;
padding-left: 0 !important;
}
.px-xxl-1 {
padding-right: 0.25rem !important;
padding-left: 0.25rem !important;
}
.px-xxl-2 {
padding-right: 0.5rem !important;
padding-left: 0.5rem !important;
}
.px-xxl-3 {
padding-right: 1rem !important;
padding-left: 1rem !important;
}
.px-xxl-4 {
padding-right: 1.5rem !important;
padding-left: 1.5rem !important;
}
.px-xxl-5 {
padding-right: 3rem !important;
padding-left: 3rem !important;
}
.py-xxl-0 {
padding-top: 0 !important;
padding-bottom: 0 !important;
}
.py-xxl-1 {
padding-top: 0.25rem !important;
padding-bottom: 0.25rem !important;
}
.py-xxl-2 {
padding-top: 0.5rem !important;
padding-bottom: 0.5rem !important;
}
.py-xxl-3 {
padding-top: 1rem !important;
padding-bottom: 1rem !important;
}
.py-xxl-4 {
padding-top: 1.5rem !important;
padding-bottom: 1.5rem !important;
}
.py-xxl-5 {
padding-top: 3rem !important;
padding-bottom: 3rem !important;
}
.pt-xxl-0 {
padding-top: 0 !important;
}
.pt-xxl-1 {
padding-top: 0.25rem !important;
}
.pt-xxl-2 {
padding-top: 0.5rem !important;
}
.pt-xxl-3 {
padding-top: 1rem !important;
}
.pt-xxl-4 {
padding-top: 1.5rem !important;
}
.pt-xxl-5 {
padding-top: 3rem !important;
}
.pe-xxl-0 {
padding-right: 0 !important;
}
.pe-xxl-1 {
padding-right: 0.25rem !important;
}
.pe-xxl-2 {
padding-right: 0.5rem !important;
}
.pe-xxl-3 {
padding-right: 1rem !important;
}
.pe-xxl-4 {
padding-right: 1.5rem !important;
}
.pe-xxl-5 {
padding-right: 3rem !important;
}
.pb-xxl-0 {
padding-bottom: 0 !important;
}
.pb-xxl-1 {
padding-bottom: 0.25rem !important;
}
.pb-xxl-2 {
padding-bottom: 0.5rem !important;
}
.pb-xxl-3 {
padding-bottom: 1rem !important;
}
.pb-xxl-4 {
padding-bottom: 1.5rem !important;
}
.pb-xxl-5 {
padding-bottom: 3rem !important;
}
.ps-xxl-0 {
padding-left: 0 !important;
}
.ps-xxl-1 {
padding-left: 0.25rem !important;
}
.ps-xxl-2 {
padding-left: 0.5rem !important;
}
.ps-xxl-3 {
padding-left: 1rem !important;
}
.ps-xxl-4 {
padding-left: 1.5rem !important;
}
.ps-xxl-5 {
padding-left: 3rem !important;
}
.text-xxl-start {
text-align: left !important;
}
.text-xxl-end {
text-align: right !important;
}
.text-xxl-center {
text-align: center !important;
}
}
@media (min-width: 1200px) {
.fs-1 {
font-size: 2.5rem !important;
}
.fs-2 {
font-size: 2rem !important;
}
.fs-3 {
font-size: 1.75rem !important;
}
.fs-4 {
font-size: 1.5rem !important;
}
}
@media print {
.d-print-inline {
display: inline !important;
}
.d-print-inline-block {
display: inline-block !important;
}
.d-print-block {
display: block !important;
}
.d-print-grid {
display: grid !important;
}
.d-print-table {
display: table !important;
}
.d-print-table-row {
display: table-row !important;
}
.d-print-table-cell {
display: table-cell !important;
}
.d-print-flex {
display: flex !important;
}
.d-print-inline-flex {
display: inline-flex !important;
}
.d-print-none {
display: none !important;
}
}
.feature {
display: inline-flex;
align-items: center;
justify-content: center;
height: 4rem;
width: 4rem;
font-size: 2rem;
}
| transformers.js/examples/demo-site/src/theme.css/0 | {
"file_path": "transformers.js/examples/demo-site/src/theme.css",
"repo_id": "transformers.js",
"token_count": 109692
} |
/* Styles go here */
* {
padding: 0;
margin: 0;
box-sizing: border-box;
font-family: 'Roboto', sans-serif;
}
h1 {
font-size: 54px;
text-align: center;
font-weight: 500;
}
h2 {
font-size: 24px;
text-align: center;
font-weight: 400;
margin-bottom: 16px;
}
.container {
width: 400px;
}
html,
body {
height: 100%;
}
body {
display: flex;
justify-content: center;
align-items: center;
}
#text {
width: 100%;
padding: 8px;
font-size: 20px;
margin-bottom: 8px;
}
#output {
font-size: 20px;
font-family: 'Roboto Mono', monospace;
height: 100px;
} | transformers.js/examples/electron/src/index.css/0 | {
"file_path": "transformers.js/examples/electron/src/index.css",
"repo_id": "transformers.js",
"token_count": 300
} |
import path from 'path';
import { fileURLToPath } from 'url';
import HtmlWebpackPlugin from 'html-webpack-plugin';
import CopyPlugin from 'copy-webpack-plugin';
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const config = {
mode: 'development',
devtool: 'inline-source-map',
entry: {
background: './src/background.js',
popup: './src/popup.js',
content: './src/content.js',
},
output: {
path: path.resolve(__dirname, 'build'),
filename: '[name].js',
},
plugins: [
new HtmlWebpackPlugin({
template: './src/popup.html',
filename: 'popup.html',
}),
new CopyPlugin({
patterns: [
{
from: "public",
to: "." // Copies to build folder
},
{
from: "src/popup.css",
to: "popup.css"
}
],
})
],
};
export default config;
| transformers.js/examples/extension/webpack.config.js/0 | {
"file_path": "transformers.js/examples/extension/webpack.config.js",
"repo_id": "transformers.js",
"token_count": 536
} |
/** @type {import('next').NextConfig} */
const nextConfig = {
// (Optional) Export as a static site
// See https://nextjs.org/docs/pages/building-your-application/deploying/static-exports#configuration
output: 'export', // Feel free to modify/remove this option
// Override the default webpack configuration
webpack: (config) => {
// Ignore node-specific modules when bundling for the browser
// See https://webpack.js.org/configuration/resolve/#resolvealias
config.resolve.alias = {
...config.resolve.alias,
"sharp$": false,
"onnxruntime-node$": false,
}
return config;
},
};
module.exports = nextConfig;
| transformers.js/examples/next-client/next.config.js/0 | {
"file_path": "transformers.js/examples/next-client/next.config.js",
"repo_id": "transformers.js",
"token_count": 270
} |
const http = require('http');
const querystring = require('querystring');
const url = require('url');
class MyClassificationPipeline {
static task = 'text-classification';
static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';
static instance = null;
static async getInstance(progress_callback = null) {
if (this.instance === null) {
// Dynamically import the Transformers.js library
let { pipeline, env } = await import('@xenova/transformers');
// NOTE: Uncomment this to change the cache directory
// env.cacheDir = './.cache';
this.instance = pipeline(this.task, this.model, { progress_callback });
}
return this.instance;
}
}
// Comment out this line if you don't want to start loading the model as soon as the server starts.
// If commented out, the model will be loaded when the first request is received (i.e., lazily).
MyClassificationPipeline.getInstance();
// Define the HTTP server
const server = http.createServer();
const hostname = '127.0.0.1';
const port = 3000;
// Listen for requests made to the server
server.on('request', async (req, res) => {
// Parse the request URL
const parsedUrl = url.parse(req.url);
// Extract the query parameters
const { text } = querystring.parse(parsedUrl.query);
// Set the response headers
res.setHeader('Content-Type', 'application/json');
let response;
if (parsedUrl.pathname === '/classify' && text) {
const classifier = await MyClassificationPipeline.getInstance();
response = await classifier(text);
res.statusCode = 200;
} else {
response = { 'error': 'Bad request' }
res.statusCode = 400;
}
// Send the JSON response
res.end(JSON.stringify(response));
});
server.listen(port, hostname, () => {
console.log(`Server running at http://${hostname}:${port}/`);
});
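// Example request (values are illustrative; assumes the server above is running locally):
//   curl 'http://127.0.0.1:3000/classify?text=I%20love%20Transformers.js'
// Expected response shape: [{"label":"POSITIVE","score":0.99...}]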
| transformers.js/examples/node/commonjs/app.js/0 | {
"file_path": "transformers.js/examples/node/commonjs/app.js",
"repo_id": "transformers.js",
"token_count": 583
} |
import Scatterplot from 'deepscatter';
import { getCachedJSON } from './utils';
// Start loading metadata and positions asynchronously as soon as possible.
let metadata = {};
getCachedJSON('https://huggingface.co/datasets/Xenova/MusicBenchEmbedded/resolve/main/metadata.json')
.then((data) => {
metadata = data;
})
.catch((e) => console.error(e));
let positions = {};
getCachedJSON('https://huggingface.co/datasets/Xenova/MusicBenchEmbedded/resolve/main/positions.json')
.then((data) => {
positions = data;
})
.catch((e) => console.error(e));
const scatterplot = new Scatterplot('#deepscatter');
window.scatterplot = scatterplot; // For debugging
// Initial call
scatterplot.plotAPI({
source_url: 'https://huggingface.co/datasets/Xenova/MusicBenchEmbedded/resolve/main/atlas',
max_points: 52768, // hard cap on the total number of points rendered.
alpha: 35, // Target saturation for the full page.
zoom_balance: 0.5, // Rate at which points increase size. https://observablehq.com/@bmschmidt/zoom-strategies-for-huge-scatterplots-with-three-js
point_size: 3, // Default point size before application of size scaling
background_color: 'transparent',
encoding: {
// TODO Add colours
x: {
field: 'x',
transform: 'literal',
},
y: {
field: 'y',
transform: 'literal',
},
jitter_radius: {
method: 'uniform',
constant: 5,
},
},
tooltip_opacity: 1,
duration: 4000, // For slow initial transition
}).then(_ => scatterplot.plotAPI({
encoding: {
jitter_radius: {
method: 'uniform',
constant: 0,
},
},
}));
// Custom hover function
scatterplot.tooltip_html = (datum) => {
const item = metadata[datum.ix];
if (!item) return 'Loading...';
const location = item.location;
setTimeout(() => {
// Slight hack to append the audio element after the text
const tooltip = document.querySelector('.tooltip');
if (tooltip) {
tooltip.innerHTML = `
${item.main_caption}
<br>
<audio id="tooltip-audio" controls src="https://huggingface.co/datasets/Xenova/MusicBenchEmbedded/resolve/main/audio/${location}"></audio>
`;
}
}, 0);
return item.main_caption;
};
// Make references to DOM elements
const OVERLAY = document.getElementById('overlay');
const INPUT_ELEMENT = document.getElementById('query');
const SEARCH_BUTTON = document.getElementById('search');
// Set up worker
const worker = new Worker(new URL('./worker.js', import.meta.url), {
type: 'module'
});
worker.addEventListener('message', (e) => {
switch (e.data.status) {
case 'initiate':
OVERLAY.innerText = 'Loading model and embeddings database...';
OVERLAY.style.display = 'flex';
OVERLAY.style.pointerEvents = 'all';
break;
case 'ready':
OVERLAY.style.display = 'none';
OVERLAY.style.pointerEvents = 'none';
break;
case 'complete':
// Output is an array of [score, index] pairs. Get top item
const index = e.data.output[0][1];
const position = positions[index];
if (position) { // Just in case the position hasn't loaded yet (this should never happen)
// Zoom to result
scatterplot.plotAPI({
zoom: {
bbox: {
x: [position[0] - 0.5, position[0] + 0.5],
y: [position[1] - 0.5, position[1] + 0.5]
}
},
duration: 2000,
})
}
break;
}
});
const search = () => {
worker.postMessage({
status: 'search',
query: INPUT_ELEMENT.value,
});
};
// Set up event listeners
INPUT_ELEMENT.addEventListener('keypress', (event) => {
    // Trigger a search when the user presses Enter
    if (event.key === 'Enter') search();
});
SEARCH_BUTTON.addEventListener('click', search);
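// For reference: ./worker.js (not shown here) is expected to post messages shaped like
// { status: 'initiate' | 'ready' | 'complete', output?: [score, index][] },
// matching the message handler above.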
| transformers.js/examples/semantic-audio-search/index.js/0 | {
"file_path": "transformers.js/examples/semantic-audio-search/index.js",
"repo_id": "transformers.js",
"token_count": 1844
} |
'use client'
import Image from 'next/image'
import { blurHashToDataURL } from '../utils.js'
export function ImageGrid({ images, setCurrentImage }) {
return (
<div className="columns-2 gap-4 sm:columns-3 xl:columns-4 2xl:columns-5">
      {images && images.map(({ id, url, ar, blur }) => (
        // Original photo: https://unsplash.com/photos/<id>
        <div
          key={id}
className='after:content group cursor-pointer relative mb-4 block w-full after:pointer-events-none after:absolute after:inset-0 after:rounded-lg after:shadow-highlight'
onClick={() => {
setCurrentImage({ id, url, ar, blur });
}}
>
<Image
alt=''
className="transform rounded-lg brightness-90 transition will-change-auto group-hover:brightness-110"
style={{ transform: 'translate3d(0, 0, 0)' }}
placeholder="blur"
blurDataURL={blurHashToDataURL(blur)}
src={`https://images.unsplash.com/${url}?auto=format&fit=crop&w=480&q=80`}
width={480}
height={480 / ar}
unoptimized={true}
/>
</div>
))}
</div>)
} | transformers.js/examples/semantic-image-search-client/src/app/components/ImageGrid.jsx/0 | {
"file_path": "transformers.js/examples/semantic-image-search-client/src/app/components/ImageGrid.jsx",
"repo_id": "transformers.js",
"token_count": 791
} |
/** @type {import('next').NextConfig} */
const nextConfig = {
// (Optional) Export as a standalone site
// See https://nextjs.org/docs/pages/api-reference/next-config-js/output#automatically-copying-traced-files
output: 'standalone', // Feel free to modify/remove this option
// Indicate that these packages should not be bundled by webpack
experimental: {
serverComponentsExternalPackages: ['sharp', 'onnxruntime-node'],
},
// Define which domains we are allowed to load images from
images: {
domains: ['images.unsplash.com'],
},
};
module.exports = nextConfig;
| transformers.js/examples/semantic-image-search/next.config.js/0 | {
"file_path": "transformers.js/examples/semantic-image-search/next.config.js",
"repo_id": "transformers.js",
"token_count": 207
} |
import { decode } from "blurhash"
const SIZE = 32;
export function blurHashToDataURL(hash) {
if (!hash) return undefined
const pixels = decode(hash, SIZE, SIZE)
const canvas = document.createElement("canvas");
canvas.width = SIZE;
canvas.height = SIZE;
const ctx = canvas.getContext("2d");
const imageData = ctx.createImageData(SIZE, SIZE);
imageData.data.set(pixels);
ctx.putImageData(imageData, 0, 0);
return canvas.toDataURL();
}
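// Example usage (hash value is illustrative):
//   const placeholder = blurHashToDataURL('LEHV6nWB2yk8pyo0adR*.7kCMdnj');
//   // => 'data:image/png;base64,...', suitable as a low-resolution <img> placeholder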
function downloadData(url, filename) {
// Create an anchor element with the data URL as the href attribute
const downloadLink = document.createElement('a');
downloadLink.href = url;
// Set the download attribute to specify the desired filename for the downloaded image
downloadLink.download = filename;
// Trigger the download
downloadLink.click();
// Clean up: remove the anchor element from the DOM
downloadLink.remove();
}
export function downloadImage(url, filename) {
fetch(url, {
headers: new Headers({
Origin: location.origin,
}),
mode: 'cors',
})
.then((response) => response.blob())
.then((blob) => {
let blobUrl = window.URL.createObjectURL(blob)
downloadData(blobUrl, filename)
})
.catch((e) => console.error(e))
}
| transformers.js/examples/semantic-image-search/src/app/utils.js/0 | {
"file_path": "transformers.js/examples/semantic-image-search/src/app/utils.js",
"repo_id": "transformers.js",
"token_count": 500
} |
* {
box-sizing: border-box;
padding: 0;
margin: 0;
font-family: sans-serif;
}
html,
body {
height: 100%;
}
body {
padding: 16px 32px;
}
body,
#container {
display: flex;
flex-direction: column;
justify-content: center;
align-items: center;
}
#controls {
display: flex;
padding: 1rem;
gap: 1rem;
}
#controls>div {
text-align: center;
}
h1,
h4 {
text-align: center;
}
h4 {
margin-top: 0.5rem;
}
#container {
position: relative;
width: 720px;
height: 405px;
max-width: 100%;
max-height: 100%;
border: 2px dashed #D1D5DB;
border-radius: 0.75rem;
overflow: hidden;
margin-top: 1rem;
background-size: 100% 100%;
background-position: center;
background-repeat: no-repeat;
}
#overlay,
canvas {
position: absolute;
width: 100%;
height: 100%;
}
#status {
min-height: 16px;
margin: 8px 0;
}
.bounding-box {
position: absolute;
box-sizing: border-box;
border: solid 2px;
}
.bounding-box-label {
color: white;
position: absolute;
font-size: 12px;
margin: -16px 0 0 -2px;
padding: 1px;
}
#video, #canvas {
display: none;
}
| transformers.js/examples/webgpu-video-background-removal/style.css/0 | {
"file_path": "transformers.js/examples/webgpu-video-background-removal/style.css",
"repo_id": "transformers.js",
"token_count": 462
} |
@scope (.markdown) {
/* Code blocks */
pre {
margin: 0.5rem 0;
white-space: break-spaces;
}
code {
padding: 0.2em 0.4em;
border-radius: 4px;
font-family: Consolas, Monaco, 'Andale Mono', 'Ubuntu Mono', monospace;
font-size: 0.9em;
}
pre,
code {
background-color: #f2f2f2;
}
@media (prefers-color-scheme: dark) {
pre,
code {
background-color: #333;
}
}
pre:has(code) {
padding: 1rem 0.5rem;
}
pre>code {
padding: 0;
}
/* Headings */
h1,
h2,
h3,
h4,
h5,
h6 {
font-weight: 600;
line-height: 1.2;
}
h1 {
font-size: 2em;
margin: 1rem 0;
}
h2 {
font-size: 1.5em;
margin: 0.83rem 0;
}
h3 {
font-size: 1.25em;
margin: 0.67rem 0;
}
h4 {
font-size: 1em;
margin: 0.5rem 0;
}
h5 {
font-size: 0.875em;
margin: 0.33rem 0;
}
h6 {
font-size: 0.75em;
margin: 0.25rem 0;
}
h1:first-child,
h2:first-child,
h3:first-child,
h4:first-child,
h5:first-child,
h6:first-child {
margin-top: 0;
}
/* Unordered List */
ul {
list-style-type: disc;
margin-left: 1.5rem;
}
/* Ordered List */
ol {
list-style-type: decimal;
margin-left: 1.5rem;
}
/* List Items */
li {
margin: 0.25rem 0;
}
p:not(:first-child) {
margin-top: 0.75rem;
}
p:not(:last-child) {
margin-bottom: 0.75rem;
}
} | transformers.js/examples/webgpu-vlm/src/components/Chat.css/0 | {
"file_path": "transformers.js/examples/webgpu-vlm/src/components/Chat.css",
"repo_id": "transformers.js",
"token_count": 947
} |
import json
import os
import shutil
from dataclasses import dataclass, field, asdict
from typing import Optional
from enum import Enum
from transformers import (
AutoConfig,
AutoTokenizer,
HfArgumentParser
)
import onnxslim
from optimum.exporters.onnx import main_export, export_models
from optimum.onnx.graph_transformations import check_and_save_model
from optimum.exporters.tasks import TasksManager
from .quantize import QuantizationArguments, quantize
NO_PER_CHANNEL_REDUCE_RANGE_MODELS = {
# Decoder-only models
'codegen',
'gpt2',
'gpt_bigcode',
'gptj',
'gpt-neo',
'gpt-neox',
'mpt',
'bloom',
'llama',
'gemma',
'opt',
'mistral',
'falcon',
'phi',
'phi3',
'qwen2',
'stablelm',
'starcoder2',
'openelm',
'mobilellm',
'olmo',
# Encoder-decoder models
'whisper',
'vision-encoder-decoder',
# Encoder-only models
'owlv2',
'wavlm',
'wav2vec2',
'unispeech',
'unispeech-sat',
}
MODELS_WITHOUT_TOKENIZERS = [
'wav2vec2',
'wav2vec2-bert',
'wavlm',
'hubert',
'unispeech',
'unispeech-sat',
]
class QuantMode(Enum):
# F32 = 'fp32'
FP16 = 'fp16'
Q8 = 'q8'
QI8 = 'int8'
QU8 = 'uint8'
Q4 = 'q4'
BNB4 = 'bnb4'
@dataclass
class ConversionArguments:
"""
Arguments used for converting HuggingFace models to onnx.
"""
model_id: str = field(
metadata={
"help": "Model identifier"
}
)
tokenizer_id: str = field(
default=None,
metadata={
"help": "Tokenizer identifier (if different to `model_id`)"
}
)
quantize: bool = field(
default=False,
metadata={
"help": "Whether to quantize the model."
}
)
output_parent_dir: str = field(
default='./models/',
metadata={
"help": "Path where the converted model will be saved to."
}
)
task: Optional[str] = field(
default='auto',
metadata={
"help": (
"The task to export the model for. If not specified, the task will be auto-inferred based on the model. Available tasks depend on the model, but are among:"
f" {str(TasksManager.get_all_tasks())}. For decoder models, use `xxx-with-past` to export the model using past key values in the decoder."
)
}
)
library_name: Optional[str] = field(
default=None,
metadata={
"help": (
"The library name to use for the export. If not specified, the library name will be auto-inferred based on the model."
)
}
)
variant: Optional[str] = field(
default='default',
metadata={
"help": "The variant of the ONNX export to use."
}
)
opset: int = field(
default=None,
metadata={
"help": (
"If specified, ONNX opset version to export the model with. Otherwise, the default opset will be used."
)
}
)
device: str = field(
default='cpu',
metadata={
"help": 'The device to use to do the export.'
}
)
skip_validation: bool = field(
default=False,
metadata={
"help": "Whether to skip validation of the converted model"
}
)
output_attentions: bool = field(
default=False,
metadata={
"help": "Whether to output attentions from the model. NOTE: This is only supported for whisper models right now."
}
)
split_modalities: bool = field(
default=False,
metadata={
"help": "Whether to split multimodal models. NOTE: This is only supported for CLIP models right now."
}
)
trust_remote_code: bool = field(
default=False,
metadata={
"help": "Allows to use custom code for the modeling hosted in the model repository. This option should only be set for repositories"
"you trust and in which you have read the code, as it will execute on your local machine arbitrary code present in the model repository."
}
)
custom_onnx_configs: str = field(
default=None,
metadata={
"help": "Experimental usage: override the default ONNX config used for the given model. This argument may be useful for advanced users "
"that desire a finer-grained control on the export."
}
)
skip_onnxslim: bool = field(
default=False,
metadata={
"help": "Whether or not to skip onnxslim."
}
)
def main():
parser = HfArgumentParser(
(ConversionArguments, QuantizationArguments)
)
conv_args, quantization_args = parser.parse_args_into_dataclasses()
model_id = conv_args.model_id
tokenizer_id = conv_args.tokenizer_id or model_id
output_model_folder = os.path.join(conv_args.output_parent_dir, model_id)
# Create output folder
os.makedirs(output_model_folder, exist_ok=True)
from_pretrained_kwargs = dict(
trust_remote_code=conv_args.trust_remote_code,
)
# Saving the model config
config = AutoConfig.from_pretrained(model_id, **from_pretrained_kwargs)
custom_kwargs = {}
if conv_args.custom_onnx_configs is not None:
if conv_args.task == 'auto':
raise Exception(
'`--task` must be set when exporting with `--custom_onnx_configs`')
custom_onnx_configs = json.loads(conv_args.custom_onnx_configs)
for key in custom_onnx_configs:
onnx_configs = TasksManager._SUPPORTED_MODEL_TYPE[custom_onnx_configs[key]]['onnx']
mapping = onnx_configs[conv_args.task]
new_kwargs = {}
if conv_args.task.startswith('text-generation'):
new_kwargs['use_past_in_inputs'] = True
custom_onnx_configs[key] = mapping.func(
config, **mapping.keywords, **new_kwargs)
custom_kwargs['custom_onnx_configs'] = custom_onnx_configs
tokenizer = None
try:
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(tokenizer_id, **from_pretrained_kwargs)
except KeyError:
pass # No Tokenizer
except Exception as e:
if config.model_type not in MODELS_WITHOUT_TOKENIZERS:
raise e
core_export_kwargs = dict(
opset=conv_args.opset,
device=conv_args.device,
trust_remote_code=conv_args.trust_remote_code,
**custom_kwargs,
)
export_kwargs = dict(
model_name_or_path=model_id,
output=output_model_folder,
task=conv_args.task,
do_validation=not conv_args.skip_validation,
_variant=conv_args.variant,
library_name=conv_args.library_name,
**core_export_kwargs,
)
# Handle special cases
if config.model_type == 'marian':
from .extra.marian import generate_tokenizer_json
tokenizer_json = generate_tokenizer_json(model_id, tokenizer)
with open(os.path.join(output_model_folder, 'tokenizer.json'), 'w', encoding='utf-8') as fp:
json.dump(tokenizer_json, fp, indent=4)
elif config.model_type == 'esm':
from .extra.esm import generate_fast_tokenizer
fast_tokenizer = generate_fast_tokenizer(tokenizer)
fast_tokenizer.save(os.path.join(
output_model_folder, 'tokenizer.json'))
elif config.model_type == 'whisper':
if conv_args.output_attentions:
from .extra.whisper import get_main_export_kwargs
export_kwargs.update(
**get_main_export_kwargs(config, "automatic-speech-recognition")
)
elif config.model_type in ('wav2vec2', 'wav2vec2-bert', 'hubert', 'unispeech', 'unispeech-sat'):
if tokenizer is not None:
from .extra.wav2vec2 import generate_tokenizer_json
tokenizer_json = generate_tokenizer_json(tokenizer)
with open(os.path.join(output_model_folder, 'tokenizer.json'), 'w', encoding='utf-8') as fp:
json.dump(tokenizer_json, fp, indent=4)
elif config.model_type == 'vits':
if tokenizer is not None:
from .extra.vits import generate_tokenizer_json
tokenizer_json = generate_tokenizer_json(tokenizer)
with open(os.path.join(output_model_folder, 'tokenizer.json'), 'w', encoding='utf-8') as fp:
json.dump(tokenizer_json, fp, indent=4)
elif config.model_type == 'speecht5':
# TODO allow user to specify vocoder path
export_kwargs["model_kwargs"] = {
"vocoder": "microsoft/speecht5_hifigan"}
if tokenizer is not None:
from .extra.speecht5 import generate_tokenizer_json
tokenizer_json = generate_tokenizer_json(tokenizer)
with open(os.path.join(output_model_folder, 'tokenizer.json'), 'w', encoding='utf-8') as fp:
json.dump(tokenizer_json, fp, indent=4)
elif config.model_type in ('owlvit', 'owlv2'):
# Override default batch size to 1, needed because non-maximum suppression is performed for exporting.
# For more information, see https://github.com/huggingface/optimum/blob/e3b7efb1257c011db907ef40ab340e795cc5684c/optimum/exporters/onnx/model_configs.py#L1028-L1032
export_kwargs['batch_size'] = 1
elif config.model_type == 'openelm':
from .extra.openelm import OpenElmOnnxConfig
config = AutoConfig.from_pretrained(
model_id, trust_remote_code=conv_args.trust_remote_code)
onnx_config = OpenElmOnnxConfig(
config=config,
task="text-generation",
use_past=True,
use_past_in_inputs=True,
)
custom_onnx_configs = {
"model": onnx_config,
}
export_kwargs['task'] = "text-generation-with-past"
export_kwargs['custom_onnx_configs'] = custom_onnx_configs
else:
pass # TODO
# Step 1. convert huggingface model to onnx
if not conv_args.split_modalities:
main_export(**export_kwargs)
else:
custom_export_kwargs = dict(
output_dir=output_model_folder,
**core_export_kwargs,
)
if config.model_type == 'clip':
# Handle special case for exporting text and vision models separately
from .extra.clip import CLIPTextModelWithProjectionOnnxConfig, CLIPVisionModelWithProjectionOnnxConfig
from transformers.models.clip import CLIPTextModelWithProjection, CLIPVisionModelWithProjection
text_model = CLIPTextModelWithProjection.from_pretrained(
model_id, **from_pretrained_kwargs)
vision_model = CLIPVisionModelWithProjection.from_pretrained(
model_id, **from_pretrained_kwargs)
export_models(
models_and_onnx_configs={
"text_model": (text_model, CLIPTextModelWithProjectionOnnxConfig(text_model.config)),
"vision_model": (vision_model, CLIPVisionModelWithProjectionOnnxConfig(vision_model.config)),
},
**custom_export_kwargs,
)
elif config.model_type == 'siglip':
# Handle special case for exporting text and vision models separately
from .extra.siglip import SiglipTextModelOnnxConfig, SiglipVisionModelOnnxConfig
from transformers.models.siglip import SiglipTextModel, SiglipVisionModel
text_model = SiglipTextModel.from_pretrained(
model_id, **from_pretrained_kwargs)
vision_model = SiglipVisionModel.from_pretrained(
model_id, **from_pretrained_kwargs)
export_models(
models_and_onnx_configs={
"text_model": (text_model, SiglipTextModelOnnxConfig(text_model.config)),
"vision_model": (vision_model, SiglipVisionModelOnnxConfig(vision_model.config)),
},
**custom_export_kwargs,
)
# TODO: Enable once https://github.com/huggingface/optimum/pull/1552 is merged
# elif config.model_type == 'clap':
# # Handle special case for exporting text and audio models separately
# from .extra.clap import ClapTextModelWithProjectionOnnxConfig, ClapAudioModelWithProjectionOnnxConfig
# from transformers.models.clap import ClapTextModelWithProjection, ClapAudioModelWithProjection
# text_model = ClapTextModelWithProjection.from_pretrained(model_id, **from_pretrained_kwargs)
# audio_model = ClapAudioModelWithProjection.from_pretrained(model_id, **from_pretrained_kwargs)
# export_models(
# models_and_onnx_configs={
# "text_model": (text_model, ClapTextModelWithProjectionOnnxConfig(text_model.config)),
# "audio_model": (audio_model, ClapAudioModelWithProjectionOnnxConfig(audio_model.config)),
# },
# **custom_export_kwargs,
# )
else:
raise Exception(
f'Unable to export {config.model_type} model with `--split_modalities`.')
os.makedirs(os.path.join(output_model_folder, 'onnx'), exist_ok=True)
if not conv_args.skip_onnxslim:
onnx_models = [os.path.join(output_model_folder, x)
for x in os.listdir(output_model_folder) if x.endswith('.onnx')]
for model in onnx_models:
try:
slimmed_model = onnxslim.slim(model)
check_and_save_model(slimmed_model, model)
except Exception as e:
print(f"Failed to slim {model}: {e}")
# Step 2. (optional, recommended) quantize the converted model for fast inference and to reduce model size.
if conv_args.quantize:
# Possibly update quantize config with model specific defaults
use_per_channel_reduce_range = config.model_type not in NO_PER_CHANNEL_REDUCE_RANGE_MODELS
if quantization_args.per_channel is None:
quantization_args.per_channel = use_per_channel_reduce_range
if quantization_args.reduce_range is None:
quantization_args.reduce_range = use_per_channel_reduce_range
quantize(
output_model_folder,
os.path.join(output_model_folder, 'onnx'),
quantization_args,
)
with open(os.path.join(output_model_folder, 'quantize_config.json'), 'w') as fp:
json.dump(asdict(quantization_args), fp, indent=4)
# Step 3. Move .onnx files to the 'onnx' subfolder
for file in os.listdir(output_model_folder):
if file.endswith(('.onnx', '.onnx_data')):
shutil.move(os.path.join(output_model_folder, file),
os.path.join(output_model_folder, 'onnx', file))
# Step 4. Update the generation config if necessary
if config.model_type == 'whisper':
from transformers import GenerationConfig
from .extra.whisper import get_alignment_heads
generation_config = GenerationConfig.from_pretrained(
model_id, **from_pretrained_kwargs)
generation_config.alignment_heads = get_alignment_heads(config)
generation_config.save_pretrained(output_model_folder)
if __name__ == '__main__':
main()
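
# Example invocation (run from the repository root; model id is illustrative):
#   python -m scripts.convert --quantize --model_id bert-base-uncased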
| transformers.js/scripts/convert.py/0 | {
"file_path": "transformers.js/scripts/convert.py",
"repo_id": "transformers.js",
"token_count": 6945
} |
/**
* @file Processors are used to prepare inputs (e.g., text, image or audio) for a model.
*
* **Example:** Using a `WhisperProcessor` to prepare an audio input for a model.
* ```javascript
* import { AutoProcessor, read_audio } from '@huggingface/transformers';
*
* const processor = await AutoProcessor.from_pretrained('openai/whisper-tiny.en');
* const audio = await read_audio('https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac', 16000);
* const { input_features } = await processor(audio);
* // Tensor {
* // data: Float32Array(240000) [0.4752984642982483, 0.5597258806228638, 0.56434166431427, ...],
* // dims: [1, 80, 3000],
* // type: 'float32',
* // size: 240000,
* // }
* ```
*
* @module processors
*/
import { PROCESSOR_NAME } from '../utils/constants.js';
import {
Callable,
} from '../utils/generic.js';
import { getModelJSON } from '../utils/hub.js';
/**
* @typedef {Object} ProcessorProperties Additional processor-specific properties.
* @typedef {import('../utils/hub.js').PretrainedOptions & ProcessorProperties} PretrainedProcessorOptions
* @typedef {import('../tokenizers.js').PreTrainedTokenizer} PreTrainedTokenizer
*/
/**
* Represents a Processor that extracts features from an input.
*/
export class Processor extends Callable {
static classes = [
'image_processor_class',
'tokenizer_class',
'feature_extractor_class',
]
static uses_processor_config = false;
/**
* Creates a new Processor with the given components
* @param {Object} config
* @param {Record<string, Object>} components
*/
constructor(config, components) {
super();
this.config = config;
this.components = components;
}
/**
* @returns {import('./image_processors_utils.js').ImageProcessor|undefined} The image processor of the processor, if it exists.
*/
get image_processor() {
return this.components.image_processor;
}
/**
* @returns {PreTrainedTokenizer|undefined} The tokenizer of the processor, if it exists.
*/
get tokenizer() {
return this.components.tokenizer;
}
/**
* @returns {import('./feature_extraction_utils.js').FeatureExtractor|undefined} The feature extractor of the processor, if it exists.
*/
get feature_extractor() {
return this.components.feature_extractor;
}
/**
* @param {Parameters<PreTrainedTokenizer['apply_chat_template']>[0]} messages
* @param {Parameters<PreTrainedTokenizer['apply_chat_template']>[1]} options
* @returns {ReturnType<PreTrainedTokenizer['apply_chat_template']>}
*/
apply_chat_template(messages, options = {}) {
if (!this.tokenizer) {
throw new Error('Unable to apply chat template without a tokenizer.');
}
return this.tokenizer.apply_chat_template(messages, {
tokenize: false, // default to false
...options,
});
}
/**
* @param {Parameters<PreTrainedTokenizer['batch_decode']>} args
* @returns {ReturnType<PreTrainedTokenizer['batch_decode']>}
*/
batch_decode(...args) {
if (!this.tokenizer) {
throw new Error('Unable to decode without a tokenizer.');
}
return this.tokenizer.batch_decode(...args);
}
/**
* @param {Parameters<PreTrainedTokenizer['decode']>} args
* @returns {ReturnType<PreTrainedTokenizer['decode']>}
*/
decode(...args) {
if (!this.tokenizer) {
throw new Error('Unable to decode without a tokenizer.');
}
return this.tokenizer.decode(...args);
}
/**
* Calls the feature_extractor function with the given input.
* @param {any} input The input to extract features from.
* @param {...any} args Additional arguments.
* @returns {Promise<any>} A Promise that resolves with the extracted features.
*/
async _call(input, ...args) {
for (const item of [this.image_processor, this.feature_extractor, this.tokenizer]) {
if (item) {
return item(input, ...args);
}
}
throw new Error('No image processor, feature extractor, or tokenizer found.');
}
/**
* Instantiate one of the processor classes of the library from a pretrained model.
*
* The processor class to instantiate is selected based on the `image_processor_type` (or `feature_extractor_type`; legacy)
* property of the config object (either passed as an argument or loaded from `pretrained_model_name_or_path` if possible)
*
* @param {string} pretrained_model_name_or_path The name or path of the pretrained model. Can be either:
* - A string, the *model id* of a pretrained processor hosted inside a model repo on huggingface.co.
* Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a
* user or organization name, like `dbmdz/bert-base-german-cased`.
* - A path to a *directory* containing processor files, e.g., `./my_model_directory/`.
* @param {PretrainedProcessorOptions} options Additional options for loading the processor.
*
* @returns {Promise<Processor>} A new instance of the Processor class.
*/
static async from_pretrained(pretrained_model_name_or_path, options) {
const [config, components] = await Promise.all([
// TODO:
this.uses_processor_config
? getModelJSON(pretrained_model_name_or_path, PROCESSOR_NAME, true, options)
: {},
Promise.all(
this.classes
.filter((cls) => cls in this)
.map(async (cls) => {
const component = await this[cls].from_pretrained(pretrained_model_name_or_path, options);
return [cls.replace(/_class$/, ''), component];
})
).then(Object.fromEntries)
]);
return new this(config, components);
}
}
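// Example usage (sketch; in practice, prefer `AutoProcessor.from_pretrained`, which resolves
// the correct Processor subclass for a given checkpoint):
//   const processor = await Processor.from_pretrained('openai/whisper-tiny.en');
//   const inputs = await processor(audio);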
| transformers.js/src/base/processing_utils.js/0 | {
"file_path": "transformers.js/src/base/processing_utils.js",
"repo_id": "transformers.js",
"token_count": 2375
} |
export * from './beit/image_processing_beit.js'
export * from './bit/image_processing_bit.js'
export * from './chinese_clip/image_processing_chinese_clip.js'
export * from './clip/image_processing_clip.js'
export * from './convnext/image_processing_convnext.js'
export * from './deit/image_processing_deit.js'
export * from './detr/image_processing_detr.js'
export * from './donut/image_processing_donut.js'
export * from './dpt/image_processing_dpt.js'
export * from './efficientnet/image_processing_efficientnet.js'
export * from './glpn/image_processing_glpn.js'
export * from './grounding_dino/image_processing_grounding_dino.js'
export * from './idefics3/image_processing_idefics3.js'
export * from './janus/image_processing_janus.js'
export * from './jina_clip/image_processing_jina_clip.js'
export * from './llava_onevision/image_processing_llava_onevision.js'
export * from './mask2former/image_processing_mask2former.js'
export * from './maskformer/image_processing_maskformer.js'
export * from './mobilenet_v1/image_processing_mobilenet_v1.js'
export * from './mobilenet_v2/image_processing_mobilenet_v2.js'
export * from './mobilenet_v3/image_processing_mobilenet_v3.js'
export * from './mobilenet_v4/image_processing_mobilenet_v4.js'
export * from './mobilevit/image_processing_mobilevit.js'
export * from './nougat/image_processing_nougat.js'
export * from './owlv2/image_processing_owlv2.js'
export * from './owlvit/image_processing_owlvit.js'
export * from './phi3_v/image_processing_phi3_v.js'
export * from './pvt/image_processing_pvt.js'
export * from './qwen2_vl/image_processing_qwen2_vl.js'
export * from './rt_detr/image_processing_rt_detr.js'
export * from './sam/image_processing_sam.js'
export * from './segformer/image_processing_segformer.js'
export * from './siglip/image_processing_siglip.js'
export * from './swin2sr/image_processing_swin2sr.js'
export * from './vit/image_processing_vit.js'
export * from './vitmatte/image_processing_vitmatte.js'
export * from './vitpose/image_processing_vitpose.js'
export * from './yolos/image_processing_yolos.js'
| transformers.js/src/models/image_processors.js/0 | {
"file_path": "transformers.js/src/models/image_processors.js",
"repo_id": "transformers.js",
"token_count": 781
} |
import {
ImageProcessor,
post_process_semantic_segmentation,
} from "../../base/image_processors_utils.js";
export class SapiensImageProcessor extends ImageProcessor {
/** @type {typeof post_process_semantic_segmentation} */
post_process_semantic_segmentation(...args) {
return post_process_semantic_segmentation(...args);
}
}
export class SapiensFeatureExtractor extends SapiensImageProcessor { }
| transformers.js/src/models/sapiens/image_processing_sapiens.js/0 | {
"file_path": "transformers.js/src/models/sapiens/image_processing_sapiens.js",
"repo_id": "transformers.js",
"token_count": 145
} |
import { GenerationConfig } from "../../generation/configuration_utils.js";
export class WhisperGenerationConfig extends GenerationConfig {
/**
* Whether to return the timestamps with the text. This enables the `WhisperTimestampsLogitsProcessor`.
* @type {boolean}
*/
return_timestamps = null;
/**
* Whether to return token-level timestamps
* with the text. This can be used with or without the `return_timestamps` option. To get word-level
* timestamps, use the tokenizer to group the tokens into words.
* @type {boolean}
*/
return_token_timestamps = null;
/**
* The number of audio frames available in this chunk. This is only used generating word-level timestamps.
* @type {number}
*/
num_frames = null;
/**
* Alignment heads to predict word-level timestamps. This is a list of [layer, head] pairs that
* select the cross-attention heads that are highly correlated to word-level timing.
* @type {[number, number][]}
*/
alignment_heads = null;
/**
* Task to use for generation, either "translate" or "transcribe".
* @type {string}
*/
task = null;
/**
* Language token to use for generation, can be either in the form of `<|en|>`, `en` or `english`.
* You can find all the possible language tokens in the `model.generation_config.lang_to_id` dictionary.
* @type {string}
*/
language = null;
/**
* The id of the `"<|notimestamps|>"` token.
* @type {number}
*/
no_timestamps_token_id = null;
/**
* Rank-1 list of token IDs created by passing text to [`~WhisperProcessor.get_prompt_ids`] that is
* provided as a prompt to each chunk. This can be used to provide or "prompt-engineer" a context for
* transcription, e.g. custom vocabularies or proper nouns to make it more likely to predict those words
* correctly. It cannot be used in conjunction with `decoder_start_token_id` as it overwrites this value.
* @type {number[]}
*/
prompt_ids = null;
/**
* Whether the model is multilingual or not.
* @type {boolean}
*/
is_multilingual = null;
/**
* (Optional) A mapping from language tokens to their corresponding IDs.
* Only required if the model is multilingual.
* @type {Record<string, number>|null}
*/
lang_to_id = null;
/**
* (Optional) A mapping from task tokens to their corresponding IDs.
* @type {Record<string, number>|null}
*/
task_to_id = null;
/**
* Used to set the maximum value of the initial timestamp. This is used to prevent the model from
* predicting timestamps that are too far in the future.
* @type {number}
*/
max_initial_timestamp_index = 1;
}
/**
* @typedef {import('../../generation/parameters.js').GenerationFunctionParameters & {generation_config: WhisperGenerationConfig} & WhisperGenerationConfig} WhisperGenerationFunctionParameters
*/
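// Example usage (sketch; model id is illustrative) via the automatic-speech-recognition
// pipeline, whose options map onto the fields above:
//   const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny');
//   const output = await transcriber(audio, { return_timestamps: true, language: 'french', task: 'transcribe' });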
| transformers.js/src/models/whisper/generation_whisper.js/0 | {
"file_path": "transformers.js/src/models/whisper/generation_whisper.js",
"repo_id": "transformers.js",
"token_count": 1043
} |
/**
* @file Helper module for mathematical processing.
*
* These functions and classes are only used internally,
* meaning an end-user shouldn't need to access anything here.
*
* @module utils/maths
*/
/**
* @typedef {Int8Array | Uint8Array | Uint8ClampedArray | Int16Array | Uint16Array | Int32Array | Uint32Array | Float32Array | Float64Array} TypedArray
* @typedef {BigInt64Array | BigUint64Array} BigTypedArray
* @typedef {TypedArray | BigTypedArray} AnyTypedArray
*/
/**
 * Resizes channel-first image data to a new height and width using bilinear interpolation.
 * @param {TypedArray} input The flattened input data, of shape [in_channels, in_height, in_width].
 * @param {[number, number, number]} shape The input shape, as [in_channels, in_height, in_width].
 * @param {[number, number]} new_shape The output shape, as [out_height, out_width].
 * @param {string} [mode='bilinear'] The interpolation mode (currently unused; bilinear is always applied).
 * @param {boolean} [align_corners=false] Whether to align corners (currently unused; see TODO below).
 */
export function interpolate_data(input, [in_channels, in_height, in_width], [out_height, out_width], mode = 'bilinear', align_corners = false) {
// TODO use mode and align_corners
// Output image dimensions
const x_scale = out_width / in_width;
const y_scale = out_height / in_height;
// Output image
// @ts-ignore
const out_img = new input.constructor(out_height * out_width * in_channels);
// Pre-calculate strides
const inStride = in_height * in_width;
const outStride = out_height * out_width;
for (let i = 0; i < out_height; ++i) {
for (let j = 0; j < out_width; ++j) {
// Calculate output offset
const outOffset = i * out_width + j;
// Calculate input pixel coordinates
const x = (j + 0.5) / x_scale - 0.5;
const y = (i + 0.5) / y_scale - 0.5;
// Calculate the four nearest input pixels
// We also check if the input pixel coordinates are within the image bounds
let x1 = Math.floor(x);
let y1 = Math.floor(y);
const x2 = Math.min(x1 + 1, in_width - 1);
const y2 = Math.min(y1 + 1, in_height - 1);
x1 = Math.max(x1, 0);
y1 = Math.max(y1, 0);
// Calculate the fractional distances between the input pixel and the four nearest pixels
const s = x - x1;
const t = y - y1;
// Perform bilinear interpolation
const w1 = (1 - s) * (1 - t);
const w2 = s * (1 - t);
const w3 = (1 - s) * t;
const w4 = s * t;
// Calculate the four nearest input pixel indices
// (offsets of rows y1 and y2 in the flattened input)
const row1 = y1 * in_width;
const row2 = y2 * in_width;
const idx1 = row1 + x1;
const idx2 = row1 + x2;
const idx3 = row2 + x1;
const idx4 = row2 + x2;
for (let k = 0; k < in_channels; ++k) {
// Calculate channel offset
const cOffset = k * inStride;
out_img[k * outStride + outOffset] =
w1 * input[cOffset + idx1] +
w2 * input[cOffset + idx2] +
w3 * input[cOffset + idx3] +
w4 * input[cOffset + idx4];
}
}
}
return out_img;
}
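// Example (illustrative): upscale a single-channel 2x2 image to 4x4:
//   const out = interpolate_data(new Float32Array([0, 1, 2, 3]), [1, 2, 2], [4, 4]);
//   // out is a Float32Array of length 1 * 4 * 4 === 16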
/**
* Helper method to permute a `AnyTypedArray` directly
* @template {AnyTypedArray} T
* @param {T} array
* @param {number[]} dims
* @param {number[]} axes
* @returns {[T, number[]]} The permuted array and the new shape.
*/
export function permute_data(array, dims, axes) {
// Calculate the new shape of the permuted array
// and the stride of the original array
const shape = new Array(axes.length);
const stride = new Array(axes.length);
for (let i = axes.length - 1, s = 1; i >= 0; --i) {
stride[i] = s;
shape[i] = dims[axes[i]];
s *= shape[i];
}
// Precompute inverse mapping of stride
const invStride = axes.map((_, i) => stride[axes.indexOf(i)]);
// Create the permuted array with the new shape
// @ts-ignore
const permutedData = new array.constructor(array.length);
// Permute the original array to the new array
for (let i = 0; i < array.length; ++i) {
let newIndex = 0;
for (let j = dims.length - 1, k = i; j >= 0; --j) {
newIndex += (k % dims[j]) * invStride[j];
k = Math.floor(k / dims[j]);
}
permutedData[newIndex] = array[i];
}
return [permutedData, shape];
}
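// Example (illustrative): transposing a 2x3 matrix with axes [1, 0]:
//   permute_data(new Float32Array([1, 2, 3, 4, 5, 6]), [2, 3], [1, 0])
//   // => [Float32Array [1, 4, 2, 5, 3, 6], [3, 2]]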
/**
* Compute the softmax of an array of numbers.
* @template {TypedArray|number[]} T
* @param {T} arr The array of numbers to compute the softmax of.
* @returns {T} The softmax array.
*/
export function softmax(arr) {
// Compute the maximum value in the array
const maxVal = max(arr)[0];
// Compute the exponentials of the array values
const exps = arr.map(x => Math.exp(x - maxVal));
// Compute the sum of the exponentials
// @ts-ignore
const sumExps = exps.reduce((acc, val) => acc + val, 0);
// Compute the softmax values
const softmaxArr = exps.map(x => x / sumExps);
return /** @type {T} */(softmaxArr);
}
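// Example (illustrative): softmax([1, 2, 3]) ≈ [0.0900, 0.2447, 0.6652] (entries sum to 1)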
/**
* Calculates the logarithm of the softmax function for the input array.
* @template {TypedArray|number[]} T
* @param {T} arr The input array to calculate the log_softmax function for.
* @returns {T} The resulting log_softmax array.
*/
export function log_softmax(arr) {
// Compute the maximum value in the array
const maxVal = max(arr)[0];
// Compute the sum of the exponentials
let sumExps = 0;
for (let i = 0; i < arr.length; ++i) {
sumExps += Math.exp(arr[i] - maxVal);
}
// Compute the log of the sum
const logSum = Math.log(sumExps);
// Compute the softmax values
const logSoftmaxArr = arr.map(x => x - maxVal - logSum);
return /** @type {T} */(logSoftmaxArr);
}
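// Example (illustrative): log_softmax([1, 2, 3]) ≈ [-2.4076, -1.4076, -0.4076]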
/**
* Calculates the dot product of two arrays.
* @param {number[]} arr1 The first array.
* @param {number[]} arr2 The second array.
* @returns {number} The dot product of arr1 and arr2.
*/
export function dot(arr1, arr2) {
let result = 0;
for (let i = 0; i < arr1.length; ++i) {
result += arr1[i] * arr2[i];
}
return result;
}
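// Example (illustrative): dot([1, 2, 3], [4, 5, 6]) === 32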
/**
* Computes the cosine similarity between two arrays.
*
* @param {number[]} arr1 The first array.
* @param {number[]} arr2 The second array.
* @returns {number} The cosine similarity between the two arrays.
*/
export function cos_sim(arr1, arr2) {
// Calculate dot product of the two arrays
const dotProduct = dot(arr1, arr2);
// Calculate the magnitude of the first array
const magnitudeA = magnitude(arr1);
// Calculate the magnitude of the second array
const magnitudeB = magnitude(arr2);
// Calculate the cosine similarity
const cosineSimilarity = dotProduct / (magnitudeA * magnitudeB);
return cosineSimilarity;
}
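// Examples (illustrative): cos_sim([1, 0], [0, 1]) === 0 (orthogonal); cos_sim([1, 2], [2, 4]) === 1 (parallel)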
/**
* Calculates the magnitude of a given array.
* @param {number[]} arr The array to calculate the magnitude of.
* @returns {number} The magnitude of the array.
*/
export function magnitude(arr) {
return Math.sqrt(arr.reduce((acc, val) => acc + val * val, 0));
}
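// Example (illustrative): magnitude([3, 4]) === 5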
/**
* Returns the value and index of the minimum element in an array.
* @template {number[]|bigint[]|AnyTypedArray} T
* @param {T} arr array of numbers.
* @returns {T extends bigint[]|BigTypedArray ? [bigint, number] : [number, number]} the value and index of the minimum element, of the form: [valueOfMin, indexOfMin]
* @throws {Error} If array is empty.
*/
export function min(arr) {
if (arr.length === 0) throw Error('Array must not be empty');
let min = arr[0];
let indexOfMin = 0;
for (let i = 1; i < arr.length; ++i) {
if (arr[i] < min) {
min = arr[i];
indexOfMin = i;
}
}
return /** @type {T extends bigint[]|BigTypedArray ? [bigint, number] : [number, number]} */([min, indexOfMin]);
}
/**
* Returns the value and index of the maximum element in an array.
* @template {number[]|bigint[]|AnyTypedArray} T
* @param {T} arr array of numbers.
* @returns {T extends bigint[]|BigTypedArray ? [bigint, number] : [number, number]} the value and index of the maximum element, of the form: [valueOfMax, indexOfMax]
* @throws {Error} If array is empty.
*/
export function max(arr) {
if (arr.length === 0) throw Error('Array must not be empty');
let max = arr[0];
let indexOfMax = 0;
for (let i = 1; i < arr.length; ++i) {
if (arr[i] > max) {
max = arr[i];
indexOfMax = i;
}
}
return /** @type {T extends bigint[]|BigTypedArray ? [bigint, number] : [number, number]} */([max, indexOfMax]);
}
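// Examples (illustrative): min([3, 1, 2]) => [1, 1]; max([3, 1, 2]) => [3, 0] (i.e., [value, index])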
function isPowerOfTwo(number) {
// Check if the number is greater than 0 and has only one bit set to 1
return (number > 0) && ((number & (number - 1)) === 0);
}
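// Usage sketch (illustrative example, not part of the library source):
//   isPowerOfTwo(8);  // → true  (0b1000 & 0b0111 === 0)
//   isPowerOfTwo(12); // → false (0b1100 & 0b1011 === 0b1000)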
/**
* Implementation of Radix-4 FFT.
*
* P2FFT class provides functionality for performing Fast Fourier Transform on arrays
* which are a power of two in length.
* Code adapted from https://www.npmjs.com/package/fft.js
*/
class P2FFT {
/**
* @param {number} size The size of the input array. Must be a power of two larger than 1.
* @throws {Error} FFT size must be a power of two larger than 1.
*/
constructor(size) {
this.size = size | 0; // convert to a 32-bit signed integer
if (this.size <= 1 || !isPowerOfTwo(this.size))
throw new Error('FFT size must be a power of two larger than 1');
this._csize = size << 1;
this.table = new Float64Array(this.size * 2);
for (let i = 0; i < this.table.length; i += 2) {
const angle = Math.PI * i / this.size;
this.table[i] = Math.cos(angle);
this.table[i + 1] = -Math.sin(angle);
}
// Find size's power of two
let power = 0;
for (let t = 1; this.size > t; t <<= 1)
++power;
// Calculate initial step's width:
// * If we are full radix-4, it is 2x smaller to give initial len=8
// * Otherwise it is the same as `power` to give len=4
this._width = power % 2 === 0 ? power - 1 : power;
// Pre-compute bit-reversal patterns
this._bitrev = new Int32Array(1 << this._width);
for (let j = 0; j < this._bitrev.length; ++j) {
this._bitrev[j] = 0;
for (let shift = 0; shift < this._width; shift += 2) {
const revShift = this._width - shift - 2;
this._bitrev[j] |= ((j >>> shift) & 3) << revShift;
}
}
}
/**
* Create a complex number array with size `2 * size`
*
* @returns {Float64Array} A complex number array with size `2 * size`
*/
createComplexArray() {
return new Float64Array(this._csize);
}
/**
* Converts a complex number representation stored in a Float64Array to an array of real numbers.
*
* @param {Float64Array} complex The complex number representation to be converted.
* @param {number[]} [storage] An optional array to store the result in.
* @returns {number[]} An array of real numbers representing the input complex number representation.
*/
fromComplexArray(complex, storage) {
const res = storage || new Array(complex.length >>> 1);
for (let i = 0; i < complex.length; i += 2)
res[i >>> 1] = complex[i];
return res;
}
/**
* Convert a real-valued input array to a complex-valued output array.
* @param {Float64Array} input The real-valued input array.
* @param {Float64Array} [storage] Optional buffer to store the output array.
* @returns {Float64Array} The complex-valued output array.
*/
toComplexArray(input, storage) {
const res = storage || this.createComplexArray();
for (let i = 0; i < res.length; i += 2) {
res[i] = input[i >>> 1];
res[i + 1] = 0;
}
return res;
}
/**
* Performs a Fast Fourier Transform (FFT) on the given input data and stores the result in the output buffer.
*
* @param {Float64Array} out The output buffer to store the result.
* @param {Float64Array} data The input data to transform.
*
* @throws {Error} Input and output buffers must be different.
*
* @returns {void}
*/
transform(out, data) {
if (out === data)
throw new Error('Input and output buffers must be different');
this._transform4(out, data, 1 /* DONE */);
}
/**
* Performs a real-valued forward FFT on the given input buffer and stores the result in the given output buffer.
* The input buffer must contain real values only, while the output buffer will contain complex values. The input and
* output buffers must be different.
*
* @param {Float64Array} out The output buffer.
* @param {Float64Array} data The input buffer containing real values.
*
* @throws {Error} If the input and output buffers are the same.
*/
realTransform(out, data) {
if (out === data)
throw new Error('Input and output buffers must be different');
this._realTransform4(out, data, 1 /* DONE */);
}
/**
* Performs an inverse FFT transformation on the given `data` array, and stores the result in `out`.
* The `out` array must be a different buffer than the `data` array. The `out` array will contain the
* result of the transformation. The `data` array will not be modified.
*
* @param {Float64Array} out The output buffer for the transformed data.
* @param {Float64Array} data The input data to transform.
* @throws {Error} If `out` and `data` refer to the same buffer.
* @returns {void}
*/
inverseTransform(out, data) {
if (out === data)
throw new Error('Input and output buffers must be different');
this._transform4(out, data, -1 /* DONE */);
for (let i = 0; i < out.length; ++i)
out[i] /= this.size;
}
/**
* Performs a radix-4 implementation of a discrete Fourier transform on a given set of data.
*
* @param {Float64Array} out The output buffer for the transformed data.
* @param {Float64Array} data The input buffer of data to be transformed.
* @param {number} inv Sign of the transform: `1` for the forward transform, `-1` for the inverse.
* @returns {void}
*/
_transform4(out, data, inv) {
// radix-4 implementation
const size = this._csize;
// Initial step (permute and transform)
const width = this._width;
let step = 1 << width;
let len = (size / step) << 1;
let outOff;
let t;
const bitrev = this._bitrev;
if (len === 4) {
for (outOff = 0, t = 0; outOff < size; outOff += len, ++t) {
const off = bitrev[t];
this._singleTransform2(data, out, outOff, off, step);
}
} else {
// len === 8
for (outOff = 0, t = 0; outOff < size; outOff += len, ++t) {
const off = bitrev[t];
this._singleTransform4(data, out, outOff, off, step, inv);
}
}
// Loop through steps in decreasing order
const table = this.table;
for (step >>= 2; step >= 2; step >>= 2) {
len = (size / step) << 1;
const quarterLen = len >>> 2;
// Loop through offsets in the data
for (outOff = 0; outOff < size; outOff += len) {
// Full case
const limit = outOff + quarterLen - 1;
for (let i = outOff, k = 0; i < limit; i += 2, k += step) {
const A = i;
const B = A + quarterLen;
const C = B + quarterLen;
const D = C + quarterLen;
// Original values
const Ar = out[A];
const Ai = out[A + 1];
const Br = out[B];
const Bi = out[B + 1];
const Cr = out[C];
const Ci = out[C + 1];
const Dr = out[D];
const Di = out[D + 1];
const tableBr = table[k];
const tableBi = inv * table[k + 1];
const MBr = Br * tableBr - Bi * tableBi;
const MBi = Br * tableBi + Bi * tableBr;
const tableCr = table[2 * k];
const tableCi = inv * table[2 * k + 1];
const MCr = Cr * tableCr - Ci * tableCi;
const MCi = Cr * tableCi + Ci * tableCr;
const tableDr = table[3 * k];
const tableDi = inv * table[3 * k + 1];
const MDr = Dr * tableDr - Di * tableDi;
const MDi = Dr * tableDi + Di * tableDr;
// Pre-Final values
const T0r = Ar + MCr;
const T0i = Ai + MCi;
const T1r = Ar - MCr;
const T1i = Ai - MCi;
const T2r = MBr + MDr;
const T2i = MBi + MDi;
const T3r = inv * (MBr - MDr);
const T3i = inv * (MBi - MDi);
// Final values
out[A] = T0r + T2r;
out[A + 1] = T0i + T2i;
out[B] = T1r + T3i;
out[B + 1] = T1i - T3r;
out[C] = T0r - T2r;
out[C + 1] = T0i - T2i;
out[D] = T1r - T3i;
out[D + 1] = T1i + T3r;
}
}
}
}
/**
* Performs a radix-2 implementation of a discrete Fourier transform on a given set of data.
*
* @param {Float64Array} data The input buffer of data to be transformed.
* @param {Float64Array} out The output buffer for the transformed data.
* @param {number} outOff The offset at which to write the output data.
* @param {number} off The offset at which to begin reading the input data.
* @param {number} step The step size for indexing the input data.
* @returns {void}
*/
_singleTransform2(data, out, outOff, off, step) {
// radix-2 implementation
// NOTE: Only called for len=4
const evenR = data[off];
const evenI = data[off + 1];
const oddR = data[off + step];
const oddI = data[off + step + 1];
out[outOff] = evenR + oddR;
out[outOff + 1] = evenI + oddI;
out[outOff + 2] = evenR - oddR;
out[outOff + 3] = evenI - oddI;
}
/**
* Performs radix-4 transformation on input data of length 8
*
* @param {Float64Array} data Input data array of length 8
* @param {Float64Array} out Output data array of length 8
* @param {number} outOff Index of output array to start writing from
* @param {number} off Index of input array to start reading from
* @param {number} step Step size between elements in input array
* @param {number} inv Sign of the transform: `1` for forward, `-1` for inverse
*
* @returns {void}
*/
_singleTransform4(data, out, outOff, off, step, inv) {
// radix-4
// NOTE: Only called for len=8
const step2 = step * 2;
const step3 = step * 3;
// Original values
const Ar = data[off];
const Ai = data[off + 1];
const Br = data[off + step];
const Bi = data[off + step + 1];
const Cr = data[off + step2];
const Ci = data[off + step2 + 1];
const Dr = data[off + step3];
const Di = data[off + step3 + 1];
// Pre-Final values
const T0r = Ar + Cr;
const T0i = Ai + Ci;
const T1r = Ar - Cr;
const T1i = Ai - Ci;
const T2r = Br + Dr;
const T2i = Bi + Di;
const T3r = inv * (Br - Dr);
const T3i = inv * (Bi - Di);
// Final values
out[outOff] = T0r + T2r;
out[outOff + 1] = T0i + T2i;
out[outOff + 2] = T1r + T3i;
out[outOff + 3] = T1i - T3r;
out[outOff + 4] = T0r - T2r;
out[outOff + 5] = T0i - T2i;
out[outOff + 6] = T1r - T3i;
out[outOff + 7] = T1i + T3r;
}
/**
* Real input radix-4 implementation
* @param {Float64Array} out Output array for the transformed data
* @param {Float64Array} data Input array of real data to be transformed
* @param {number} inv Sign of the transform: `1` for forward, `-1` for inverse
*/
_realTransform4(out, data, inv) {
// Real input radix-4 implementation
const size = this._csize;
// Initial step (permute and transform)
const width = this._width;
let step = 1 << width;
let len = (size / step) << 1;
let outOff;
let t;
const bitrev = this._bitrev;
if (len === 4) {
for (outOff = 0, t = 0; outOff < size; outOff += len, ++t) {
const off = bitrev[t];
this._singleRealTransform2(data, out, outOff, off >>> 1, step >>> 1);
}
} else {
// len === 8
for (outOff = 0, t = 0; outOff < size; outOff += len, ++t) {
const off = bitrev[t];
this._singleRealTransform4(data, out, outOff, off >>> 1, step >>> 1, inv);
}
}
// Loop through steps in decreasing order
const table = this.table;
for (step >>= 2; step >= 2; step >>= 2) {
len = (size / step) << 1;
const halfLen = len >>> 1;
const quarterLen = halfLen >>> 1;
const hquarterLen = quarterLen >>> 1;
// Loop through offsets in the data
for (outOff = 0; outOff < size; outOff += len) {
for (let i = 0, k = 0; i <= hquarterLen; i += 2, k += step) {
const A = outOff + i;
const B = A + quarterLen;
const C = B + quarterLen;
const D = C + quarterLen;
// Original values
const Ar = out[A];
const Ai = out[A + 1];
const Br = out[B];
const Bi = out[B + 1];
const Cr = out[C];
const Ci = out[C + 1];
const Dr = out[D];
const Di = out[D + 1];
// Middle values
const MAr = Ar;
const MAi = Ai;
const tableBr = table[k];
const tableBi = inv * table[k + 1];
const MBr = Br * tableBr - Bi * tableBi;
const MBi = Br * tableBi + Bi * tableBr;
const tableCr = table[2 * k];
const tableCi = inv * table[2 * k + 1];
const MCr = Cr * tableCr - Ci * tableCi;
const MCi = Cr * tableCi + Ci * tableCr;
const tableDr = table[3 * k];
const tableDi = inv * table[3 * k + 1];
const MDr = Dr * tableDr - Di * tableDi;
const MDi = Dr * tableDi + Di * tableDr;
// Pre-Final values
const T0r = MAr + MCr;
const T0i = MAi + MCi;
const T1r = MAr - MCr;
const T1i = MAi - MCi;
const T2r = MBr + MDr;
const T2i = MBi + MDi;
const T3r = inv * (MBr - MDr);
const T3i = inv * (MBi - MDi);
// Final values
out[A] = T0r + T2r;
out[A + 1] = T0i + T2i;
out[B] = T1r + T3i;
out[B + 1] = T1i - T3r;
// Output final middle point
if (i === 0) {
out[C] = T0r - T2r;
out[C + 1] = T0i - T2i;
continue;
}
// Do not overwrite ourselves
if (i === hquarterLen)
continue;
const SA = outOff + quarterLen - i;
const SB = outOff + halfLen - i;
out[SA] = T1r - inv * T3i;
out[SA + 1] = -T1i - inv * T3r;
out[SB] = T0r - inv * T2r;
out[SB + 1] = -T0i + inv * T2i;
}
}
}
// Complete the spectrum by adding its mirrored negative frequency components.
const half = size >>> 1;
for (let i = 2; i < half; i += 2) {
out[size - i] = out[i];
out[size - i + 1] = -out[i + 1];
}
}
/**
* Performs a single real input radix-2 transformation on the provided data
*
* @param {Float64Array} data The input data array
* @param {Float64Array} out The output data array
* @param {number} outOff The output offset
* @param {number} off The input offset
* @param {number} step The step
*
* @returns {void}
*/
_singleRealTransform2(data, out, outOff, off, step) {
// radix-2 implementation
// NOTE: Only called for len=4
const evenR = data[off];
const oddR = data[off + step];
out[outOff] = evenR + oddR;
out[outOff + 1] = 0;
out[outOff + 2] = evenR - oddR;
out[outOff + 3] = 0;
}
/**
* Computes a single real-valued transform using radix-4 algorithm.
* This method is only called for len=8.
*
* @param {Float64Array} data The input data array.
* @param {Float64Array} out The output data array.
* @param {number} outOff The offset into the output array.
* @param {number} off The offset into the input array.
* @param {number} step The step size for the input array.
* @param {number} inv Sign of the transform: `1` for forward, `-1` for inverse.
*/
_singleRealTransform4(data, out, outOff, off, step, inv) {
// radix-4
// NOTE: Only called for len=8
const step2 = step * 2;
const step3 = step * 3;
// Original values
const Ar = data[off];
const Br = data[off + step];
const Cr = data[off + step2];
const Dr = data[off + step3];
// Pre-Final values
const T0r = Ar + Cr;
const T1r = Ar - Cr;
const T2r = Br + Dr;
const T3r = inv * (Br - Dr);
// Final values
out[outOff] = T0r + T2r;
out[outOff + 1] = 0;
out[outOff + 2] = T1r;
out[outOff + 3] = -T3r;
out[outOff + 4] = T0r - T2r;
out[outOff + 5] = 0;
out[outOff + 6] = T1r;
out[outOff + 7] = T3r;
}
}
/**
* NP2FFT class provides functionality for performing Fast Fourier Transform on arrays
* which are not a power of two in length. In such cases, the chirp-z transform is used.
*
* For more information, see: https://math.stackexchange.com/questions/77118/non-power-of-2-ffts/77156#77156
*/
class NP2FFT {
/**
* Constructs a new NP2FFT object.
* @param {number} fft_length The length of the FFT
*/
constructor(fft_length) {
// Helper variables
const a = 2 * (fft_length - 1);
const b = 2 * (2 * fft_length - 1);
const nextP2 = 2 ** (Math.ceil(Math.log2(b)));
this.bufferSize = nextP2;
this._a = a;
// Define buffers
// Compute chirp for transform
const chirp = new Float64Array(b);
const ichirp = new Float64Array(nextP2);
this._chirpBuffer = new Float64Array(nextP2);
this._buffer1 = new Float64Array(nextP2);
this._buffer2 = new Float64Array(nextP2);
this._outBuffer1 = new Float64Array(nextP2);
this._outBuffer2 = new Float64Array(nextP2);
// Compute complex exponentiation
const theta = -2 * Math.PI / fft_length;
const baseR = Math.cos(theta);
const baseI = Math.sin(theta);
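// Note (added context): Bluestein's identity, nk = (n^2 + k^2 - (k - n)^2) / 2,
// rewrites the DFT as a convolution of the chirp-premultiplied input with the
// conjugate chirp, which can then be evaluated with a power-of-two FFT.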
// Precompute helper for chirp-z transform
for (let i = 0; i < b >> 1; ++i) {
// Compute complex power:
const e = (i + 1 - fft_length) ** 2 / 2.0;
// Compute the modulus and argument of the result
const result_mod = Math.sqrt(baseR ** 2 + baseI ** 2) ** e;
const result_arg = e * Math.atan2(baseI, baseR);
// Convert the result back to rectangular form
// and assign to chirp and ichirp
const i2 = 2 * i;
chirp[i2] = result_mod * Math.cos(result_arg);
chirp[i2 + 1] = result_mod * Math.sin(result_arg);
// conjugate
ichirp[i2] = chirp[i2];
ichirp[i2 + 1] = - chirp[i2 + 1];
}
this._slicedChirpBuffer = chirp.subarray(a, b);
// create object to perform Fast Fourier Transforms
// with `nextP2` complex numbers
this._f = new P2FFT(nextP2 >> 1);
this._f.transform(this._chirpBuffer, ichirp);
}
_transform(output, input, real) {
const ib1 = this._buffer1;
const ib2 = this._buffer2;
const ob2 = this._outBuffer1;
const ob3 = this._outBuffer2;
const cb = this._chirpBuffer;
const sb = this._slicedChirpBuffer;
const a = this._a;
if (real) {
// Real multiplication
for (let j = 0; j < sb.length; j += 2) {
const j2 = j + 1;
const j3 = j >> 1;
const a_real = input[j3];
ib1[j] = a_real * sb[j];
ib1[j2] = a_real * sb[j2];
}
} else {
// Complex multiplication
for (let j = 0; j < sb.length; j += 2) {
const j2 = j + 1;
ib1[j] = input[j] * sb[j] - input[j2] * sb[j2];
ib1[j2] = input[j] * sb[j2] + input[j2] * sb[j];
}
}
this._f.transform(ob2, ib1);
for (let j = 0; j < cb.length; j += 2) {
const j2 = j + 1;
ib2[j] = ob2[j] * cb[j] - ob2[j2] * cb[j2];
ib2[j2] = ob2[j] * cb[j2] + ob2[j2] * cb[j];
}
this._f.inverseTransform(ob3, ib2);
for (let j = 0; j < ob3.length; j += 2) {
const a_real = ob3[j + a];
const a_imag = ob3[j + a + 1];
const b_real = sb[j];
const b_imag = sb[j + 1];
output[j] = a_real * b_real - a_imag * b_imag;
output[j + 1] = a_real * b_imag + a_imag * b_real;
}
}
transform(output, input) {
this._transform(output, input, false);
}
realTransform(output, input) {
this._transform(output, input, true);
}
}
export class FFT {
constructor(fft_length) {
this.fft_length = fft_length;
this.isPowerOfTwo = isPowerOfTwo(fft_length);
if (this.isPowerOfTwo) {
this.fft = new P2FFT(fft_length);
this.outputBufferSize = 2 * fft_length;
} else {
this.fft = new NP2FFT(fft_length);
this.outputBufferSize = this.fft.bufferSize;
}
}
realTransform(out, input) {
this.fft.realTransform(out, input);
}
transform(out, input) {
this.fft.transform(out, input);
}
}
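// Usage sketch (illustrative example, not part of the library source):
//   const fft = new FFT(4); // power-of-two length, so P2FFT is used internally
//   const out = new Float64Array(fft.outputBufferSize); // 2 * fft_length = 8
//   fft.realTransform(out, Float64Array.of(1, 2, 3, 4));
//   // out holds the interleaved complex spectrum [re0, im0, re1, im1, ...],
//   // here ≈ [10, 0, -2, 2, -2, 0, -2, -2]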
/**
* Performs a median filter on the provided data. Padding is done by mirroring the data.
* @param {AnyTypedArray} data The input array
* @param {number} windowSize The window size. Must be a positive odd number.
* @returns {AnyTypedArray} The filtered array.
*/
export function medianFilter(data, windowSize) {
if (windowSize % 2 === 0 || windowSize <= 0) {
throw new Error('Window size must be a positive odd number');
}
// @ts-ignore
const outputArray = new data.constructor(data.length);
// @ts-ignore
const buffer = new data.constructor(windowSize); // Reusable array for storing values
const halfWindowSize = Math.floor(windowSize / 2);
for (let i = 0; i < data.length; ++i) {
let valuesIndex = 0;
for (let j = -halfWindowSize; j <= halfWindowSize; ++j) {
let index = i + j;
if (index < 0) {
index = Math.abs(index);
} else if (index >= data.length) {
index = 2 * (data.length - 1) - index;
}
buffer[valuesIndex++] = data[index];
}
buffer.sort();
outputArray[i] = buffer[halfWindowSize];
}
return outputArray;
}
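// Usage sketch (illustrative example, not part of the library source):
//   medianFilter(Float32Array.of(1, 2, 3, 4, 5), 3);
//   // → Float32Array [2, 2, 3, 4, 4] (edge windows use mirrored padding)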
/**
* Helper function to round a number to a given number of decimals
* @param {number} num The number to round
* @param {number} decimals The number of decimals
* @returns {number} The rounded number
*/
export function round(num, decimals) {
const pow = Math.pow(10, decimals);
return Math.round(num * pow) / pow;
}
/**
* Helper function to round a number to the nearest integer, with ties rounded to the nearest even number.
* Also known as "bankers' rounding". This is the default rounding mode in Python. For example:
* 1.5 rounds to 2 and 2.5 rounds to 2.
*
* @param {number} x The number to round
* @returns {number} The rounded number
*/
export function bankers_round(x) {
const r = Math.round(x);
const br = Math.abs(x) % 1 === 0.5 ? (r % 2 === 0 ? r : r - 1) : r;
return br;
}
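// Usage sketch (illustrative example, not part of the library source):
//   bankers_round(0.5); // → 0
//   bankers_round(1.5); // → 2
//   bankers_round(2.5); // → 2 (ties round to the nearest even integer)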
/**
* Measures similarity between two temporal sequences (e.g., input audio and output tokens,
* to generate token-level timestamps) and computes the lowest-cost alignment path between them.
* @param {number[][]} matrix The cost matrix of shape `[output_length, input_length]`.
* @returns {number[][]} The alignment path as `[text_indices, time_indices]`.
*/
export function dynamic_time_warping(matrix) {
const output_length = matrix.length;
const input_length = matrix[0].length;
const outputShape = [output_length + 1, input_length + 1];
const cost = Array.from(
{ length: outputShape[0] },
() => Array(outputShape[1]).fill(Infinity)
);
cost[0][0] = 0;
const trace = Array.from(
{ length: outputShape[0] },
() => Array(outputShape[1]).fill(-1)
);
for (let j = 1; j < outputShape[1]; ++j) {
for (let i = 1; i < outputShape[0]; ++i) {
const c0 = cost[i - 1][j - 1];
const c1 = cost[i - 1][j];
const c2 = cost[i][j - 1];
let c, t;
if (c0 < c1 && c0 < c2) {
c = c0;
t = 0;
} else if (c1 < c0 && c1 < c2) {
c = c1;
t = 1;
} else {
c = c2;
t = 2;
}
cost[i][j] = matrix[i - 1][j - 1] + c;
trace[i][j] = t;
}
}
for (let i = 0; i < outputShape[1]; ++i) { // trace[0, :] = 2
trace[0][i] = 2;
}
for (let i = 0; i < outputShape[0]; ++i) { // trace[:, 0] = 1
trace[i][0] = 1;
}
// backtrace
let i = output_length;
let j = input_length;
let text_indices = [];
let time_indices = [];
while (i > 0 || j > 0) {
text_indices.push(i - 1);
time_indices.push(j - 1);
switch (trace[i][j]) {
case 0:
--i; --j;
break;
case 1:
--i;
break;
case 2:
--j;
break;
default:
throw new Error(
`Internal error in dynamic time warping. Unexpected trace[${i}, ${j}]. Please file a bug report.`
)
}
}
text_indices.reverse();
time_indices.reverse();
return [text_indices, time_indices];
}
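// Usage sketch (illustrative example, not part of the library source):
//   dynamic_time_warping([[0, 1], [1, 0]]);
//   // → [[0, 1], [0, 1]]: the minimum-cost path follows the diagonal,
//   // aligning output step i with input step i.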
| transformers.js/src/utils/maths.js/0 | {
"file_path": "transformers.js/src/utils/maths.js",
"repo_id": "transformers.js",
"token_count": 16615
} |
import { BloomTokenizer, BloomForCausalLM } from "../../../src/transformers.js";
import { MAX_MODEL_LOAD_TIME, MAX_TEST_EXECUTION_TIME, MAX_MODEL_DISPOSE_TIME, DEFAULT_MODEL_OPTIONS } from "../../init.js";
export default () => {
describe("BloomForCausalLM", () => {
const model_id = "hf-internal-testing/tiny-random-BloomForCausalLM";
/** @type {BloomForCausalLM} */
let model;
/** @type {BloomTokenizer} */
let tokenizer;
beforeAll(async () => {
model = await BloomForCausalLM.from_pretrained(model_id, DEFAULT_MODEL_OPTIONS);
tokenizer = await BloomTokenizer.from_pretrained(model_id);
}, MAX_MODEL_LOAD_TIME);
it(
"batch_size=1",
async () => {
const inputs = tokenizer("hello");
const outputs = await model.generate({
...inputs,
max_length: 10,
});
expect(outputs.tolist()).toEqual([[198n, 803n, 82n, 82n, 82n, 82n, 82n, 82n, 82n, 82n]]);
},
MAX_TEST_EXECUTION_TIME,
);
it(
"batch_size>1",
async () => {
const inputs = tokenizer(["hello", "hello world"], { padding: true });
const outputs = await model.generate({
...inputs,
max_length: 10,
});
expect(outputs.tolist()).toEqual([
[3n, 3n, 198n, 803n, 82n, 82n, 82n, 82n, 82n, 82n],
[198n, 803n, 82n, 209n, 753n, 753n, 753n, 753n, 753n, 753n],
]);
},
MAX_TEST_EXECUTION_TIME,
);
afterAll(async () => {
await model?.dispose();
}, MAX_MODEL_DISPOSE_TIME);
});
};
| transformers.js/tests/models/bloom/test_modeling_bloom.js/0 | {
"file_path": "transformers.js/tests/models/bloom/test_modeling_bloom.js",
"repo_id": "transformers.js",
"token_count": 742
} |
import { EsmTokenizer } from "../../../src/tokenizers.js";
import { BASE_TEST_STRINGS, ESM_TEST_STRINGS } from "../test_strings.js";
export const TOKENIZER_CLASS = EsmTokenizer;
export const TEST_CONFIG = {
"Xenova/nucleotide-transformer-500m-human-ref": {
SIMPLE: {
text: BASE_TEST_STRINGS.SIMPLE,
// "tokens": ["How", "are", "you", "doing?"],
ids: [3, 0, 0, 0, 0],
decoded: "<cls> <unk> <unk> <unk> <unk>",
},
SIMPLE_WITH_PUNCTUATION: {
text: BASE_TEST_STRINGS.SIMPLE_WITH_PUNCTUATION,
// "tokens": ["You", "should've", "done", "this"],
ids: [3, 0, 0, 0, 0],
decoded: "<cls> <unk> <unk> <unk> <unk>",
},
NUMBERS: {
text: BASE_TEST_STRINGS.NUMBERS,
// "tokens": ["0123456789", "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "100", "1000"],
ids: [3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
decoded: "<cls> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk>",
},
TEXT_WITH_NUMBERS: {
text: BASE_TEST_STRINGS.TEXT_WITH_NUMBERS,
// "tokens": ["T", "he", "company", "was", "founded", "in", "2016."],
ids: [3, 4101, 0, 0, 0, 0, 0, 0],
decoded: "<cls> T <unk> <unk> <unk> <unk> <unk> <unk>",
},
PUNCTUATION: {
text: BASE_TEST_STRINGS.PUNCTUATION,
// "tokens": ["A", "'ll", "!!to?'d''d", "of,", "can't."],
ids: [3, 4100, 0, 0, 0, 0],
decoded: "<cls> A <unk> <unk> <unk> <unk>",
},
PYTHON_CODE: {
text: BASE_TEST_STRINGS.PYTHON_CODE,
// "tokens": ["def", "main():", "pass"],
ids: [3, 0, 0, 0],
decoded: "<cls> <unk> <unk> <unk>",
},
JAVASCRIPT_CODE: {
text: BASE_TEST_STRINGS.JAVASCRIPT_CODE,
// "tokens": ["let", "a", "=", "obj.toString();", "toString();"],
ids: [3, 0, 0, 0, 0, 0],
decoded: "<cls> <unk> <unk> <unk> <unk> <unk>",
},
NEWLINES: {
text: BASE_TEST_STRINGS.NEWLINES,
// "tokens": ["T", "his", "is", "a", "test."],
ids: [3, 4101, 0, 0, 0, 0],
decoded: "<cls> T <unk> <unk> <unk> <unk>",
},
BASIC: {
text: BASE_TEST_STRINGS.BASIC,
// "tokens": ["U", "N", "want\u00e9d,running"],
ids: [3, 0, 4104, 0],
decoded: "<cls> <unk> N <unk>",
},
CONTROL_TOKENS: {
text: BASE_TEST_STRINGS.CONTROL_TOKENS,
// "tokens": ["1\u00002\ufffd3"],
ids: [3, 0],
decoded: "<cls> <unk>",
},
HELLO_WORLD_TITLECASE: {
text: BASE_TEST_STRINGS.HELLO_WORLD_TITLECASE,
// "tokens": ["Hello", "World"],
ids: [3, 0, 0],
decoded: "<cls> <unk> <unk>",
},
HELLO_WORLD_LOWERCASE: {
text: BASE_TEST_STRINGS.HELLO_WORLD_LOWERCASE,
// "tokens": ["hello", "world"],
ids: [3, 0, 0],
decoded: "<cls> <unk> <unk>",
},
CHINESE_ONLY: {
text: BASE_TEST_STRINGS.CHINESE_ONLY,
// "tokens": ["\u751f\u6d3b\u7684\u771f\u8c1b\u662f"],
ids: [3, 0],
decoded: "<cls> <unk>",
},
LEADING_SPACE: {
text: BASE_TEST_STRINGS.LEADING_SPACE,
// "tokens": ["leading", "space"],
ids: [3, 0, 0],
decoded: "<cls> <unk> <unk>",
},
TRAILING_SPACE: {
text: BASE_TEST_STRINGS.TRAILING_SPACE,
// "tokens": ["trailing", "space"],
ids: [3, 0, 0],
decoded: "<cls> <unk> <unk>",
},
DOUBLE_SPACE: {
text: BASE_TEST_STRINGS.DOUBLE_SPACE,
// "tokens": ["Hi", "Hello"],
ids: [3, 0, 0],
decoded: "<cls> <unk> <unk>",
},
CURRENCY: {
text: BASE_TEST_STRINGS.CURRENCY,
// "tokens": ["test", "$1", "R2", "#3", "\u20ac4", "\u00a35", "\u00a56", "\u20a37", "\u20b98", "\u20b19", "test"],
ids: [3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
decoded: "<cls> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk>",
},
CURRENCY_WITH_DECIMALS: {
text: BASE_TEST_STRINGS.CURRENCY_WITH_DECIMALS,
// "tokens": ["I", "bought", "an", "apple", "for", "$1.00", "at", "the", "store."],
ids: [3, 0, 0, 0, 0, 0, 0, 0, 0, 0],
decoded: "<cls> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk>",
},
ELLIPSIS: {
text: BASE_TEST_STRINGS.ELLIPSIS,
// "tokens": ["you\u2026"],
ids: [3, 0],
decoded: "<cls> <unk>",
},
TEXT_WITH_ESCAPE_CHARACTERS: {
text: BASE_TEST_STRINGS.TEXT_WITH_ESCAPE_CHARACTERS,
// "tokens": ["you\u2026"],
ids: [3, 0],
decoded: "<cls> <unk>",
},
TEXT_WITH_ESCAPE_CHARACTERS_2: {
text: BASE_TEST_STRINGS.TEXT_WITH_ESCAPE_CHARACTERS_2,
// "tokens": ["you\u2026", "you\u2026"],
ids: [3, 0, 0],
decoded: "<cls> <unk> <unk>",
},
TILDE_NORMALIZATION: {
text: BASE_TEST_STRINGS.TILDE_NORMALIZATION,
// "tokens": ["weird", "\uff5e", "edge", "\uff5e", "case"],
ids: [3, 0, 0, 0, 0, 0],
decoded: "<cls> <unk> <unk> <unk> <unk> <unk>",
},
SPIECE_UNDERSCORE: {
text: BASE_TEST_STRINGS.SPIECE_UNDERSCORE,
// "tokens": ["\u2581", "T", "his", "\u2581is", "\u2581a", "\u2581test", "\u2581."],
ids: [3, 0, 4101, 0, 0, 0, 0, 0],
decoded: "<cls> <unk> T <unk> <unk> <unk> <unk> <unk>",
},
SPECIAL_TOKENS: {
text: ESM_TEST_STRINGS.SPECIAL_TOKENS,
tokens: ["<unk>", "<pad>", "<mask>", "<cls>", "<eos>", "<bos>"],
ids: [3, 0, 1, 2, 3, 4105, 4106],
decoded: "<cls> <unk> <pad> <mask> <cls> <eos> <bos>",
},
PROTEIN_SEQUENCES_1: {
text: ESM_TEST_STRINGS.PROTEIN_SEQUENCES_1,
tokens: ["ATTCCG", "ATTCCG", "ATTCCG"],
ids: [3, 367, 367, 367],
decoded: "<cls> ATTCCG ATTCCG ATTCCG",
},
PROTEIN_SEQUENCES_2: {
text: ESM_TEST_STRINGS.PROTEIN_SEQUENCES_2,
tokens: ["ATTTCT", "CTCTCT", "CTCTGA", "GATCGA", "TCGATC", "G", "A", "T"],
ids: [3, 349, 2461, 2464, 3184, 1738, 4103, 4100, 4101],
decoded: "<cls> ATTTCT CTCTCT CTCTGA GATCGA TCGATC G A T",
},
},
"Xenova/esm2_t12_35M_UR50D": {
SIMPLE: {
text: BASE_TEST_STRINGS.SIMPLE,
// "tokens": ["H", "ow", "are", "you", "doing?"],
ids: [0, 21, 3, 3, 3, 3, 2],
decoded: "<cls> H <unk> <unk> <unk> <unk> <eos>",
},
SIMPLE_WITH_PUNCTUATION: {
text: BASE_TEST_STRINGS.SIMPLE_WITH_PUNCTUATION,
// "tokens": ["Y", "ou", "should've", "done", "this"],
ids: [0, 19, 3, 3, 3, 3, 2],
decoded: "<cls> Y <unk> <unk> <unk> <unk> <eos>",
},
NUMBERS: {
text: BASE_TEST_STRINGS.NUMBERS,
// "tokens": ["0123456789", "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "100", "1000"],
ids: [0, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2],
decoded: "<cls> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <eos>",
},
TEXT_WITH_NUMBERS: {
text: BASE_TEST_STRINGS.TEXT_WITH_NUMBERS,
// "tokens": ["T", "he", "company", "was", "founded", "in", "2016", "."],
ids: [0, 11, 3, 3, 3, 3, 3, 3, 29, 2],
decoded: "<cls> T <unk> <unk> <unk> <unk> <unk> <unk>. <eos>",
},
PUNCTUATION: {
text: BASE_TEST_STRINGS.PUNCTUATION,
// "tokens": ["A", "'ll", "!!to?'d''d", "of,", "can't", "."],
ids: [0, 5, 3, 3, 3, 3, 29, 2],
decoded: "<cls> A <unk> <unk> <unk> <unk>. <eos>",
},
PYTHON_CODE: {
text: BASE_TEST_STRINGS.PYTHON_CODE,
// "tokens": ["def", "main():", "pass"],
ids: [0, 3, 3, 3, 2],
decoded: "<cls> <unk> <unk> <unk> <eos>",
},
JAVASCRIPT_CODE: {
text: BASE_TEST_STRINGS.JAVASCRIPT_CODE,
// "tokens": ["let", "a", "=", "obj", ".", "to", "S", "tring();", "to", "S", "tring();"],
ids: [0, 3, 3, 3, 3, 29, 3, 8, 3, 3, 8, 3, 2],
decoded: "<cls> <unk> <unk> <unk> <unk>. <unk> S <unk> <unk> S <unk> <eos>",
},
NEWLINES: {
text: BASE_TEST_STRINGS.NEWLINES,
// "tokens": ["T", "his", "is", "a", "test", "."],
ids: [0, 11, 3, 3, 3, 3, 29, 2],
decoded: "<cls> T <unk> <unk> <unk> <unk>. <eos>",
},
BASIC: {
text: BASE_TEST_STRINGS.BASIC,
// "tokens": ["U", "N", "want\u00e9d,running"],
ids: [0, 26, 17, 3, 2],
decoded: "<cls> U N <unk> <eos>",
},
CONTROL_TOKENS: {
text: BASE_TEST_STRINGS.CONTROL_TOKENS,
// "tokens": ["1\u00002\ufffd3"],
ids: [0, 3, 2],
decoded: "<cls> <unk> <eos>",
},
HELLO_WORLD_TITLECASE: {
text: BASE_TEST_STRINGS.HELLO_WORLD_TITLECASE,
// "tokens": ["H", "ello", "W", "orld"],
ids: [0, 21, 3, 22, 3, 2],
decoded: "<cls> H <unk> W <unk> <eos>",
},
HELLO_WORLD_LOWERCASE: {
text: BASE_TEST_STRINGS.HELLO_WORLD_LOWERCASE,
// "tokens": ["hello", "world"],
ids: [0, 3, 3, 2],
decoded: "<cls> <unk> <unk> <eos>",
},
CHINESE_ONLY: {
text: BASE_TEST_STRINGS.CHINESE_ONLY,
// "tokens": ["\u751f\u6d3b\u7684\u771f\u8c1b\u662f"],
ids: [0, 3, 2],
decoded: "<cls> <unk> <eos>",
},
LEADING_SPACE: {
text: BASE_TEST_STRINGS.LEADING_SPACE,
// "tokens": ["leading", "space"],
ids: [0, 3, 3, 2],
decoded: "<cls> <unk> <unk> <eos>",
},
TRAILING_SPACE: {
text: BASE_TEST_STRINGS.TRAILING_SPACE,
// "tokens": ["trailing", "space"],
ids: [0, 3, 3, 2],
decoded: "<cls> <unk> <unk> <eos>",
},
DOUBLE_SPACE: {
text: BASE_TEST_STRINGS.DOUBLE_SPACE,
// "tokens": ["H", "i", "H", "ello"],
ids: [0, 21, 3, 21, 3, 2],
decoded: "<cls> H <unk> H <unk> <eos>",
},
CURRENCY: {
text: BASE_TEST_STRINGS.CURRENCY,
// "tokens": ["test", "$1", "R", "2", "#3", "\u20ac4", "\u00a35", "\u00a56", "\u20a37", "\u20b98", "\u20b19", "test"],
ids: [0, 3, 3, 10, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2],
decoded: "<cls> <unk> <unk> R <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <unk> <eos>",
},
CURRENCY_WITH_DECIMALS: {
text: BASE_TEST_STRINGS.CURRENCY_WITH_DECIMALS,
// "tokens": ["I", "bought", "an", "apple", "for", "$1", ".", "00", "at", "the", "store", "."],
ids: [0, 12, 3, 3, 3, 3, 3, 29, 3, 3, 3, 3, 29, 2],
decoded: "<cls> I <unk> <unk> <unk> <unk> <unk>. <unk> <unk> <unk> <unk>. <eos>",
},
ELLIPSIS: {
text: BASE_TEST_STRINGS.ELLIPSIS,
// "tokens": ["you\u2026"],
ids: [0, 3, 2],
decoded: "<cls> <unk> <eos>",
},
TEXT_WITH_ESCAPE_CHARACTERS: {
text: BASE_TEST_STRINGS.TEXT_WITH_ESCAPE_CHARACTERS,
// "tokens": ["you\u2026"],
ids: [0, 3, 2],
decoded: "<cls> <unk> <eos>",
},
TEXT_WITH_ESCAPE_CHARACTERS_2: {
text: BASE_TEST_STRINGS.TEXT_WITH_ESCAPE_CHARACTERS_2,
// "tokens": ["you\u2026", "you\u2026"],
ids: [0, 3, 3, 2],
decoded: "<cls> <unk> <unk> <eos>",
},
TILDE_NORMALIZATION: {
text: BASE_TEST_STRINGS.TILDE_NORMALIZATION,
// "tokens": ["weird", "\uff5e", "edge", "\uff5e", "case"],
ids: [0, 3, 3, 3, 3, 3, 2],
decoded: "<cls> <unk> <unk> <unk> <unk> <unk> <eos>",
},
SPIECE_UNDERSCORE: {
text: BASE_TEST_STRINGS.SPIECE_UNDERSCORE,
// "tokens": ["\u2581", "T", "his", "\u2581is", "\u2581a", "\u2581test", "\u2581", "."],
ids: [0, 3, 11, 3, 3, 3, 3, 3, 29, 2],
decoded: "<cls> <unk> T <unk> <unk> <unk> <unk> <unk>. <eos>",
},
SPECIAL_TOKENS: {
text: ESM_TEST_STRINGS.SPECIAL_TOKENS,
// "tokens": ["<unk>", "<pad>", "<mask>", "<cls>", "<eos>", "<bos>"],
ids: [0, 3, 1, 32, 0, 2, 3, 2],
decoded: "<cls> <unk> <pad> <mask> <cls> <eos> <unk> <eos>",
},
PROTEIN_SEQUENCES_1: {
text: ESM_TEST_STRINGS.PROTEIN_SEQUENCES_1,
tokens: ["A", "T", "T", "C", "C", "G", "A", "T", "T", "C", "C", "G", "A", "T", "T", "C", "C", "G"],
ids: [0, 5, 11, 11, 23, 23, 6, 5, 11, 11, 23, 23, 6, 5, 11, 11, 23, 23, 6, 2],
decoded: "<cls> A T T C C G A T T C C G A T T C C G <eos>",
},
PROTEIN_SEQUENCES_2: {
text: ESM_TEST_STRINGS.PROTEIN_SEQUENCES_2,
tokens: ["A", "T", "T", "T", "C", "T", "C", "T", "C", "T", "C", "T", "C", "T", "C", "T", "G", "A", "G", "A", "T", "C", "G", "A", "T", "C", "G", "A", "T", "C", "G", "A", "T"],
ids: [0, 5, 11, 11, 11, 23, 11, 23, 11, 23, 11, 23, 11, 23, 11, 23, 11, 6, 5, 6, 5, 11, 23, 6, 5, 11, 23, 6, 5, 11, 23, 6, 5, 11, 2],
decoded: "<cls> A T T T C T C T C T C T C T C T G A G A T C G A T C G A T C G A T <eos>",
},
},
};
| transformers.js/tests/models/esm/test_tokenization_esm.js/0 | {
"file_path": "transformers.js/tests/models/esm/test_tokenization_esm.js",
"repo_id": "transformers.js",
"token_count": 6890
} |
import { GroundingDinoProcessor, GroundingDinoForObjectDetection, RawImage } from "../../../src/transformers.js";
import { MAX_MODEL_LOAD_TIME, MAX_TEST_EXECUTION_TIME, MAX_MODEL_DISPOSE_TIME, DEFAULT_MODEL_OPTIONS } from "../../init.js";
export default () => {
const text = "a cat."; // NB: text query needs to be lowercased + end with a dot
// Empty white image
const dims = [224, 224, 3];
const image = new RawImage(new Uint8ClampedArray(dims[0] * dims[1] * dims[2]).fill(255), ...dims);
describe("GroundingDinoForObjectDetection", () => {
const model_id = "hf-internal-testing/tiny-random-GroundingDinoForObjectDetection";
/** @type {GroundingDinoForObjectDetection} */
let model;
/** @type {GroundingDinoProcessor} */
let processor;
beforeAll(async () => {
model = await GroundingDinoForObjectDetection.from_pretrained(model_id, DEFAULT_MODEL_OPTIONS);
processor = await GroundingDinoProcessor.from_pretrained(model_id);
}, MAX_MODEL_LOAD_TIME);
it(
"forward",
async () => {
const inputs = await processor(image, text);
const { d_model, num_queries } = model.config;
const { logits, pred_boxes } = await model(inputs);
expect(logits.dims).toEqual([1, num_queries, d_model]);
expect(pred_boxes.dims).toEqual([1, num_queries, 4]);
expect(logits.max().item()).toBeCloseTo(56.237613677978516, 2);
expect(logits.min().item()).toEqual(-Infinity);
expect(pred_boxes.mean().item()).toEqual(0.2500016987323761);
},
MAX_TEST_EXECUTION_TIME,
);
afterAll(async () => {
await model?.dispose();
}, MAX_MODEL_DISPOSE_TIME);
});
};
| transformers.js/tests/models/grounding_dino/test_modeling_grounding_dino.js/0 | {
"file_path": "transformers.js/tests/models/grounding_dino/test_modeling_grounding_dino.js",
"repo_id": "transformers.js",
"token_count": 681
} |
import { AutoImageProcessor, MobileViTFeatureExtractor, MobileViTImageProcessor } from "../../../src/transformers.js";
import { load_cached_image } from "../../asset_cache.js";
import { MAX_PROCESSOR_LOAD_TIME, MAX_TEST_EXECUTION_TIME } from "../../init.js";
export default () => {
// MobileViTFeatureExtractor
describe("MobileViTFeatureExtractor (default)", () => {
const model_id = "Xenova/mobilevit-small";
/** @type {MobileViTFeatureExtractor} */
let processor;
beforeAll(async () => {
processor = await AutoImageProcessor.from_pretrained(model_id);
}, MAX_PROCESSOR_LOAD_TIME);
it(
"default",
async () => {
const image = await load_cached_image("tiger");
const { pixel_values, original_sizes, reshaped_input_sizes } = await processor(image);
expect(pixel_values.dims).toEqual([1, 3, 256, 256]);
expect(pixel_values.mean().item()).toBeCloseTo(0.4599160496887033, 6);
expect(original_sizes).toEqual([[408, 612]]);
expect(reshaped_input_sizes).toEqual([[256, 256]]);
},
MAX_TEST_EXECUTION_TIME,
);
});
// MobileViTFeatureExtractor
// - tests not converting to rgb (do_convert_rgb=false)
describe("MobileViTFeatureExtractor (do_convert_rgb=false)", () => {
const model_id = "Xenova/quickdraw-mobilevit-small";
/** @type {MobileViTFeatureExtractor} */
let processor;
beforeAll(async () => {
processor = await AutoImageProcessor.from_pretrained(model_id);
}, MAX_PROCESSOR_LOAD_TIME);
it(
"grayscale image",
async () => {
const image = await load_cached_image("skateboard");
const { pixel_values, original_sizes, reshaped_input_sizes } = await processor(image);
expect(pixel_values.dims).toEqual([1, 1, 28, 28]);
expect(pixel_values.mean().item()).toBeCloseTo(0.08558923671585128, 6);
expect(original_sizes).toEqual([[28, 28]]);
expect(reshaped_input_sizes).toEqual([[28, 28]]);
},
MAX_TEST_EXECUTION_TIME,
);
});
// MobileViTImageProcessor
// - tests converting RGB to BGR (do_flip_channel_order=true)
describe("MobileViTImageProcessor (do_flip_channel_order=true)", () => {
const model_id = "Xenova/mobilevitv2-1.0-imagenet1k-256";
/** @type {MobileViTImageProcessor} */
let processor;
beforeAll(async () => {
processor = await AutoImageProcessor.from_pretrained(model_id);
}, MAX_PROCESSOR_LOAD_TIME);
it(
"RGB to BGR",
async () => {
const image = await load_cached_image("cats");
const { pixel_values, original_sizes, reshaped_input_sizes } = await processor(image);
const { data, dims } = pixel_values;
expect(dims).toEqual([1, 3, 256, 256]);
expect(pixel_values.mean().item()).toBeCloseTo(0.5215385556221008, 2);
expect(original_sizes).toEqual([[480, 640]]);
expect(reshaped_input_sizes).toEqual([[256, 256]]);
// Ensure RGB to BGR conversion
expect(data.slice(0, 3)).toBeCloseToNested([0.24313725531101227, 0.250980406999588, 0.364705890417099], 4);
},
MAX_TEST_EXECUTION_TIME,
);
});
};
| transformers.js/tests/models/mobilevit/test_image_processing_mobilevit.js/0 | {
"file_path": "transformers.js/tests/models/mobilevit/test_image_processing_mobilevit.js",
"repo_id": "transformers.js",
"token_count": 1322
} |
import { PatchTSTModel, PatchTSTForPrediction, Tensor } from "../../../src/transformers.js";
import { MAX_MODEL_LOAD_TIME, MAX_TEST_EXECUTION_TIME, MAX_MODEL_DISPOSE_TIME, DEFAULT_MODEL_OPTIONS } from "../../init.js";
export default () => {
const dims = [64, 512, 7];
const prod = dims.reduce((a, b) => a * b, 1);
const past_values = new Tensor(
"float32",
Float32Array.from({ length: prod }, (_, i) => i / prod),
dims,
);
describe("PatchTSTModel", () => {
const model_id = "hf-internal-testing/tiny-random-PatchTSTModel";
/** @type {PatchTSTModel} */
let model;
beforeAll(async () => {
model = await PatchTSTModel.from_pretrained(model_id, DEFAULT_MODEL_OPTIONS);
}, MAX_MODEL_LOAD_TIME);
it(
"default",
async () => {
const { last_hidden_state } = await model({ past_values });
const { num_input_channels, d_model } = model.config;
expect(last_hidden_state.dims).toEqual([dims[0], num_input_channels, 43, d_model]);
expect(last_hidden_state.mean().item()).toBeCloseTo(0.016672514379024506, 5);
},
MAX_TEST_EXECUTION_TIME,
);
afterAll(async () => {
await model?.dispose();
}, MAX_MODEL_DISPOSE_TIME);
});
describe("PatchTSTForPrediction", () => {
const model_id = "onnx-community/granite-timeseries-patchtst";
/** @type {PatchTSTForPrediction} */
let model;
beforeAll(async () => {
model = await PatchTSTForPrediction.from_pretrained(model_id, DEFAULT_MODEL_OPTIONS);
}, MAX_MODEL_LOAD_TIME);
it(
"default",
async () => {
const { prediction_outputs } = await model({ past_values });
const { prediction_length, num_input_channels } = model.config;
expect(prediction_outputs.dims).toEqual([dims[0], prediction_length, num_input_channels]);
expect(prediction_outputs.mean().item()).toBeCloseTo(0.506528377532959, 5);
},
MAX_TEST_EXECUTION_TIME,
);
afterAll(async () => {
await model?.dispose();
}, MAX_MODEL_DISPOSE_TIME);
});
};
| transformers.js/tests/models/patchtst/test_modeling_patchtst.js/0 | {
"file_path": "transformers.js/tests/models/patchtst/test_modeling_patchtst.js",
"repo_id": "transformers.js",
"token_count": 870
} |