Columns: model_id (string, 9–102 chars) · model_card (string, 4–343k chars) · model_labels (list, 2–50.8k items)
yainage90/fashion-object-detection
This model is a fine-tuned version of microsoft/conditional-detr-resnet-50. You can find details of the model in this GitHub repo -> [fashion-visual-search](https://github.com/yainage90/fashion-visual-search), and the fashion image feature extractor model here -> [yainage90/fashion-image-feature-extractor](https://huggingface.co/yainage90/fashion-image-feature-extractor).

This model was trained using a combination of two datasets: [modanet](https://github.com/eBay/modanet) and [fashionpedia](https://fashionpedia.github.io/home/).

The labels are ['bag', 'bottom', 'dress', 'hat', 'shoes', 'outer', 'top'].

The best score (mAP 0.7542) was achieved at epoch 96 of 100, so there may still be a little room for performance improvement.

```python
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForObjectDetection

device = 'cpu'
if torch.cuda.is_available():
    device = torch.device('cuda')
elif torch.backends.mps.is_available():
    device = torch.device('mps')

ckpt = 'yainage90/fashion-object-detection'
image_processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModelForObjectDetection.from_pretrained(ckpt).to(device)

image = Image.open('<path/to/image>').convert('RGB')

with torch.no_grad():
    inputs = image_processor(images=[image], return_tensors="pt")
    outputs = model(**inputs.to(device))
    target_sizes = torch.tensor([[image.size[1], image.size[0]]])
    results = image_processor.post_process_object_detection(outputs, threshold=0.4, target_sizes=target_sizes)[0]

items = []
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    score = score.item()
    label = label.item()
    box = [i.item() for i in box]
    print(f"{model.config.id2label[label]}: {round(score, 3)} at {box}")
    items.append((score, label, box))
```

![sample_image](sample_image.png)
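Since this detector feeds the companion feature-extractor model for visual search, here is a minimal follow-up sketch (an assumption, not part of the original card) that crops each detected item, reusing `image` and the `items` list built above:

```python
# Hedged sketch: crop each detected item, e.g. to feed the companion
# fashion-image-feature-extractor model. Assumes `image`, `items`, and
# `model` from the snippet above.
for score, label, box in items:
    crop = image.crop([round(v) for v in box])  # box is [x_min, y_min, x_max, y_max] in pixels
    crop.save(f"{model.config.id2label[label]}_{round(score, 3)}.png")
```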
[ "bag", "bottom", "dress", "hat", "outer", "shoes", "top" ]
hustvl/yolos-small
# YOLOS (small-sized) model

YOLOS model fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Fang et al. and first released in [this repository](https://github.com/hustvl/YOLOS).

Disclaimer: The team releasing YOLOS did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN).

The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.

## Intended uses & limitations

You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=hustvl/yolos) to look for all available YOLOS models.

### How to use

Here is how to use this model:

```python
from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-small')
model = YolosForObjectDetection.from_pretrained('hustvl/yolos-small')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# model predicts bounding boxes and corresponding COCO classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```

Currently, both the feature extractor and model support PyTorch.

## Training data

The YOLOS model was pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet2012) and fine-tuned on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.

### Training

The model was pre-trained for 200 epochs on ImageNet-1k and fine-tuned for 150 epochs on COCO.

## Evaluation results

This model achieves an AP (average precision) of **36.1** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-00666,
  author     = {Yuxin Fang and Bencheng Liao and Xinggang Wang and Jiemin Fang and Jiyang Qi and Rui Wu and Jianwei Niu and Wenyu Liu},
  title      = {You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection},
  journal    = {CoRR},
  volume     = {abs/2106.00666},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.00666},
  eprinttype = {arXiv},
  eprint     = {2106.00666},
  timestamp  = {Fri, 29 Apr 2022 19:49:16 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-00666.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
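The snippet above stops at the raw logits and normalized boxes. As a hedged follow-up (in recent transformers versions the feature extractor exposes a post-processing helper), the detections can be decoded into labelled, pixel-space boxes like so:

```python
import torch

# Hedged sketch: decode the raw outputs into labelled, pixel-space detections.
# Assumes `feature_extractor`, `model`, `image`, and `outputs` from the snippet above.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = feature_extractor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(f"{model.config.id2label[label.item()]}: {round(score.item(), 3)} at {box}")
```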
[ "n/a", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "n/a", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "n/a", "backpack", "umbrella", "n/a", "n/a", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "n/a", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "n/a", "dining table", "n/a", "n/a", "toilet", "n/a", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "n/a", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
nickmuchi/yolos-small-finetuned-license-plate-detection
# YOLOS (small-sized) model

This model is a fine-tuned version of [hustvl/yolos-small](https://huggingface.co/hustvl/yolos-small) on the [license-plates-recognition](https://app.roboflow.com/objectdetection-jhgr1/license-plates-recognition/2) dataset from Roboflow, which contains 5200 images in the training set and 380 in the validation set. The original YOLOS model was fine-tuned on COCO 2017 object detection (118k annotated images).

## Model description

YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN).

## Intended uses & limitations

You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=hustvl/yolos) to look for all available YOLOS models.

### How to use

Here is how to use this model:

```python
from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image
import requests

url = 'https://drive.google.com/uc?id=1p9wJIqRz3W50e2f_A0D8ftla8hoXz4T5'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = YolosFeatureExtractor.from_pretrained('nickmuchi/yolos-small-finetuned-license-plate-detection')
model = YolosForObjectDetection.from_pretrained('nickmuchi/yolos-small-finetuned-license-plate-detection')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# model predicts bounding boxes and corresponding license plate detection classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```

Currently, both the feature extractor and model support PyTorch.

## Training data

The YOLOS model was pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet2012) and fine-tuned on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.

### Training

This model was fine-tuned for 200 epochs on the [license-plates-recognition](https://app.roboflow.com/objectdetection-jhgr1/license-plates-recognition/2) dataset.

## Evaluation results

This model achieves an AP (average precision) of **49.0** (IoU metric: bbox):

| Metric            | IoU             | Area   | Max Dets | Value |
|-------------------|-----------------|--------|----------|-------|
| Average Precision | 0.50:0.95       | all    | 100      | 0.490 |
| Average Precision | 0.50            | all    | 100      | 0.792 |
| Average Precision | 0.75            | all    | 100      | 0.585 |
| Average Precision | 0.50:0.95       | small  | 100      | 0.167 |
| Average Precision | 0.50:0.95       | medium | 100      | 0.460 |
| Average Precision | 0.50:0.95       | large  | 100      | 0.824 |
| Average Recall    | 0.50:0.95       | all    | 1        | 0.447 |
| Average Recall    | 0.50:0.95       | all    | 10       | 0.671 |
| Average Recall    | 0.50:0.95       | all    | 100      | 0.676 |
| Average Recall    | 0.50:0.95       | small  | 100      | 0.278 |
| Average Recall    | 0.50:0.95       | medium | 100      | 0.641 |
| Average Recall    | 0.50:0.95       | large  | 100      | 0.890 |
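Since the card's snippet stops at the raw outputs, here is a hedged follow-up sketch (not part of the original card) that decodes the detections and crops the highest-scoring plate, for example for a downstream OCR step. It assumes `feature_extractor`, `model`, `image`, and `outputs` from the snippet above:

```python
import torch

# Hedged sketch: decode the detections and crop the most confident plate.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = feature_extractor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
if len(results["scores"]) > 0:
    best = results["scores"].argmax()
    x0, y0, x1, y1 = [round(v) for v in results["boxes"][best].tolist()]
    plate = image.crop((x0, y0, x1, y1))
    plate.save("plate.png")
```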
[ "name", "license-plates" ]
kariver/detr-resnet-50_finetuned_food-roboflow
# detr-resnet-50_finetuned_food-roboflow

This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Tokenizers 0.14.1
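Pending a fuller card, here is a minimal inference sketch, assuming the standard DETR detection API applies to this checkpoint (the image path is a placeholder):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

# Minimal sketch, assuming the standard DETR inference API applies to this checkpoint.
ckpt = "kariver/detr-resnet-50_finetuned_food-roboflow"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModelForObjectDetection.from_pretrained(ckpt)

image = Image.open("<path/to/food/image>").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {score:.2f} {[round(i, 2) for i in box.tolist()]}")
```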
[ "akabare khursani", "apple", "artichoke", "ash gourd -kubhindo-", "asparagus -kurilo-", "avocado", "bacon", "bamboo shoots -tama-", "banana", "beans", "beaten rice -chiura-", "beef", "beetroot", "bethu ko saag", "bitter gourd", "black lentils", "black beans", "bottle gourd -lauka-", "bread", "brinjal", "broad beans -bakullo-", "broccoli", "buff meat", "butter", "cabbage", "capsicum", "carrot", "cassava -ghar tarul-", "cauliflower", "chayote-iskus-", "cheese", "chicken gizzards", "chicken", "chickpeas", "chili pepper -khursani-", "chili powder", "chowmein noodles", "cinnamon", "coriander -dhaniya-", "corn", "cornflakec", "crab meat", "cucumber", "egg", "farsi ko munta", "fiddlehead ferns -niguro-", "fish", "garden peas", "garden cress-chamsur ko saag-", "garlic", "ginger", "green brinjal", "green lentils", "green mint -pudina-", "green peas", "green soyabean -hariyo bhatmas-", "gundruk", "ham", "ice", "jack fruit", "ketchup", "lapsi -nepali hog plum-", "lemon -nimbu-", "lime -kagati-", "long beans -bodi-", "masyaura", "milk", "minced meat", "moringa leaves -sajyun ko munta-", "mushroom", "mutton", "nutrela -soya chunks-", "okra -bhindi-", "olive oil", "onion leaves", "onion", "orange", "palak -indian spinach-", "palungo -nepali spinach-", "paneer", "papaya", "pea", "pear", "pointed gourd -chuche karela-", "pork", "potato", "pumpkin -farsi-", "radish", "rahar ko daal", "rayo ko saag", "red beans", "red lentils", "rice -chamal-", "sajjyun -moringa drumsticks-", "salt", "sausage", "snake gourd -chichindo-", "soy sauce", "soyabean -bhatmas-", "sponge gourd -ghiraula-", "stinging nettle -sisnu-", "strawberry", "sugar", "sweet potato -suthuni-", "taro leaves -karkalo-", "taro root-pidalu-", "thukpa noodles", "tofu", "tomato", "tori ko saag", "tree tomato -rukh tamatar-", "turnip", "wallnut", "water melon", "wheat", "yellow lentils", "kimchi", "mayonnaise", "noodle", "seaweed" ]
ustc-community/dfine-xlarge-obj2coco
## D-FINE

### **Overview**

The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, and Feng Wu. This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf).

This is the HF transformers implementation for D-FINE:

- _coco -> model trained on COCO
- _obj365 -> model trained on Objects365
- _obj2coco -> model trained on Objects365 and then finetuned on COCO

### **Performance**

D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding box regression task in DETR models. D-FINE comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD).

![COCO.png](https://huggingface.co/datasets/vladislavbro/images/resolve/main/COCO.PNG)

### **How to use**

```python
import torch
import requests
from PIL import Image
from transformers import DFineForObjectDetection, AutoImageProcessor

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine-xlarge-obj2coco")
model = DFineForObjectDetection.from_pretrained("ustc-community/dfine-xlarge-obj2coco")

inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3)

for result in results:
    for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
        score, label = score.item(), label_id.item()
        box = [round(i, 2) for i in box.tolist()]
        print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```

### **Training**

D-FINE is trained on COCO (Lin et al. [2014]) train2017 and validated on the COCO val2017 dataset. We report the standard AP metrics (averaged over uniformly sampled IoU thresholds ranging from 0.50 to 0.95 with a step size of 0.05), and APval5000, commonly used in real scenarios.

### **Applications**

D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, and ensure high accuracy and speed in dynamic, real-world environments.
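The example above runs on CPU. As a hedged variant (assuming a CUDA device and the `model`/`inputs` from the snippet above), the same inference can be moved to GPU:

```python
# Hedged sketch: run the same inference on GPU when one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
inputs = inputs.to(device)  # the processor's BatchFeature supports .to(device)
with torch.no_grad():
    outputs = model(**inputs)
```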
[ "person", "bicycle", "car", "motorbike", "aeroplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "sofa", "pottedplant", "bed", "diningtable", "toilet", "tvmonitor", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
davanstrien/detr_beyond_words
# detr_beyond_words (WIP)

[facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) fine-tuned on [Beyond Words](https://github.com/LibraryOfCongress/newspaper-navigator/tree/master/beyond_words_data).
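Since this work-in-progress card has no usage example yet, here is a minimal hedged sketch, assuming the standard DETR detection API applies to this checkpoint (the image path is a placeholder):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

# Minimal sketch, assuming the standard DETR inference API applies.
ckpt = "davanstrien/detr_beyond_words"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModelForObjectDetection.from_pretrained(ckpt)

image = Image.open("<path/to/newspaper/page>").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {score:.2f} {[round(i, 2) for i in box.tolist()]}")
```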
[ "photograph", "illustration", "map", "comics/cartoon", "editorial cartoon", "headline", "advertisement" ]
facebook/detr-resnet-101-dc5
# DETR (End-to-End Object Detection) model with ResNet-101 backbone (dilated C5 stage)

DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).

Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.

The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.

## Intended uses & limitations

You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.

### How to use

Here is how to use this model:

```python
from transformers import DetrFeatureExtractor, DetrForObjectDetection
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-101-dc5')
model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-101-dc5')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# model predicts bounding boxes and corresponding COCO classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```

Currently, both the feature extractor and model support PyTorch.

## Training data

The DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.

## Training procedure

### Preprocessing

The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).

Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).

### Training

The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).

## Evaluation results

This model achieves an AP (average precision) of **44.9** on COCO 2017 validation.
For more details regarding evaluation results, we refer to table 1 of the original paper.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2005-12872,
  author        = {Nicolas Carion and Francisco Massa and Gabriel Synnaeve and Nicolas Usunier and Alexander Kirillov and Sergey Zagoruyko},
  title         = {End-to-End Object Detection with Transformers},
  journal       = {CoRR},
  volume        = {abs/2005.12872},
  year          = {2020},
  url           = {https://arxiv.org/abs/2005.12872},
  archivePrefix = {arXiv},
  eprint        = {2005.12872},
  timestamp     = {Thu, 28 May 2020 17:38:09 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```
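The how-to snippet above stops at the raw logits and normalized boxes. As a hedged follow-up (the feature extractor exposes a post-processing helper in recent transformers versions), they can be decoded into labelled, pixel-space detections, assuming the variables from the snippet above:

```python
import torch

# Hedged sketch: decode the raw outputs into labelled detections.
# Assumes `feature_extractor`, `model`, `image`, and `outputs` from the snippet above.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = feature_extractor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(f"{model.config.id2label[label.item()]}: {round(score.item(), 3)} at {box}")
```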
[ "n/a", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "n/a", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "n/a", "backpack", "umbrella", "n/a", "n/a", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "n/a", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "n/a", "dining table", "n/a", "n/a", "toilet", "n/a", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "n/a", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
facebook/detr-resnet-101
# DETR (End-to-End Object Detection) model with ResNet-101 backbone

DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).

Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.

The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/detr_architecture.png)

## Intended uses & limitations

You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.
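To make the bipartite matching step described above concrete, here is a toy sketch using SciPy's Hungarian solver on a made-up cost matrix (an illustration only, not the actual training code; real DETR builds its costs from class probabilities plus L1 and generalized IoU box terms):

```python
import torch
from scipy.optimize import linear_sum_assignment

# Toy illustration: match N = 5 query predictions to 2 ground-truth objects,
# padded with "no object" up to N. Rows = predictions, columns = padded targets.
num_queries, num_objects = 5, 2
cost = torch.rand(num_queries, num_queries)  # made-up costs for the sketch
row_ind, col_ind = linear_sum_assignment(cost.numpy())
for q, t in zip(row_ind, col_ind):
    kind = "object" if t < num_objects else "no object"
    print(f"query {q} -> target {t} ({kind})")
```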
### How to use

Here is how to use this model:

```python
from transformers import DetrImageProcessor, DetrForObjectDetection
import torch
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# you can specify the revision tag if you don't want the timm dependency
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-101", revision="no_timm")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-101", revision="no_timm")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# convert outputs (bounding boxes and class logits) to COCO API
# let's only keep detections with score > 0.9
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.9)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```

This should output (something along the lines of):

```
Detected cat with confidence 0.998 at location [344.06, 24.85, 640.34, 373.74]
Detected remote with confidence 0.997 at location [328.13, 75.93, 372.81, 187.66]
Detected remote with confidence 0.997 at location [39.34, 70.13, 175.56, 118.78]
Detected cat with confidence 0.998 at location [15.36, 51.75, 316.89, 471.16]
Detected couch with confidence 0.995 at location [-0.19, 0.71, 639.73, 474.17]
```

Currently, both the feature extractor and model support PyTorch.

## Training data

The DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.

## Training procedure

### Preprocessing

The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).

Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).

### Training

The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).

## Evaluation results

This model achieves an AP (average precision) of **43.5** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2005-12872,
  author        = {Nicolas Carion and Francisco Massa and Gabriel Synnaeve and Nicolas Usunier and Alexander Kirillov and Sergey Zagoruyko},
  title         = {End-to-End Object Detection with Transformers},
  journal       = {CoRR},
  volume        = {abs/2005.12872},
  year          = {2020},
  url           = {https://arxiv.org/abs/2005.12872},
  archivePrefix = {arXiv},
  eprint        = {2005.12872},
  timestamp     = {Thu, 28 May 2020 17:38:09 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```
[ "n/a", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "n/a", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "n/a", "backpack", "umbrella", "n/a", "n/a", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "n/a", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "n/a", "dining table", "n/a", "n/a", "toilet", "n/a", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "n/a", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
facebook/detr-resnet-50-dc5
# DETR (End-to-End Object Detection) model with ResNet-50 backbone (dilated C5 stage)

DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).

Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.

The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.

## Intended uses & limitations

You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.

### How to use

Here is how to use this model:

```python
from transformers import DetrFeatureExtractor, DetrForObjectDetection
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-50-dc5')
model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-50-dc5')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# model predicts bounding boxes and corresponding COCO classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```

Currently, both the feature extractor and model support PyTorch.

## Training data

The DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.

## Training procedure

### Preprocessing

The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).

Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).

### Training

The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).

## Evaluation results

This model achieves an AP (average precision) of **43.3** on COCO 2017 validation.
For more details regarding evaluation results, we refer to table 1 of the original paper.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2005-12872,
  author        = {Nicolas Carion and Francisco Massa and Gabriel Synnaeve and Nicolas Usunier and Alexander Kirillov and Sergey Zagoruyko},
  title         = {End-to-End Object Detection with Transformers},
  journal       = {CoRR},
  volume        = {abs/2005.12872},
  year          = {2020},
  url           = {https://arxiv.org/abs/2005.12872},
  archivePrefix = {arXiv},
  eprint        = {2005.12872},
  timestamp     = {Thu, 28 May 2020 17:38:09 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```
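As a hedged alternative to the image processor's post-processing helper, the raw outputs from the snippet in "How to use" above can also be decoded by hand: the logits include a trailing "no object" class, and `pred_boxes` holds normalized (cx, cy, w, h) coordinates.

```python
import torch

# Hedged sketch: manual decoding of DETR's raw outputs.
# Assumes `outputs`, `image`, and `model` from the snippet above.
probs = outputs.logits.softmax(-1)[0, :, :-1]   # drop the trailing "no object" class
keep = probs.max(-1).values > 0.9               # keep confident queries only
w, h = image.size
cx, cy, bw, bh = outputs.pred_boxes[0, keep].unbind(-1)
boxes = torch.stack([(cx - bw / 2) * w, (cy - bh / 2) * h,
                     (cx + bw / 2) * w, (cy + bh / 2) * h], dim=-1)
for p, box in zip(probs[keep], boxes):
    label = p.argmax().item()
    print(f"{model.config.id2label[label]}: {p[label].item():.3f} at {[round(v, 2) for v in box.tolist()]}")
```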
[ "n/a", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "n/a", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "n/a", "backpack", "umbrella", "n/a", "n/a", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "n/a", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "n/a", "dining table", "n/a", "n/a", "toilet", "n/a", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "n/a", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
facebook/detr-resnet-50
# DETR (End-to-End Object Detection) model with ResNet-50 backbone

DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).

Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.

The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/detr_architecture.png)

## Intended uses & limitations

You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.
### How to use

Here is how to use this model:

```python
from transformers import DetrImageProcessor, DetrForObjectDetection
import torch
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# you can specify the revision tag if you don't want the timm dependency
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50", revision="no_timm")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50", revision="no_timm")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# convert outputs (bounding boxes and class logits) to COCO API
# let's only keep detections with score > 0.9
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.9)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```

This should output:

```
Detected remote with confidence 0.998 at location [40.16, 70.81, 175.55, 117.98]
Detected remote with confidence 0.996 at location [333.24, 72.55, 368.33, 187.66]
Detected couch with confidence 0.995 at location [-0.02, 1.15, 639.73, 473.76]
Detected cat with confidence 0.999 at location [13.24, 52.05, 314.02, 470.93]
Detected cat with confidence 0.999 at location [345.4, 23.85, 640.37, 368.72]
```

Currently, both the feature extractor and model support PyTorch.

## Training data

The DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.

## Training procedure

### Preprocessing

The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).

Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).

### Training

The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).

## Evaluation results

This model achieves an AP (average precision) of **42.0** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2005-12872,
  author        = {Nicolas Carion and Francisco Massa and Gabriel Synnaeve and Nicolas Usunier and Alexander Kirillov and Sergey Zagoruyko},
  title         = {End-to-End Object Detection with Transformers},
  journal       = {CoRR},
  volume        = {abs/2005.12872},
  year          = {2020},
  url           = {https://arxiv.org/abs/2005.12872},
  archivePrefix = {arXiv},
  eprint        = {2005.12872},
  timestamp     = {Thu, 28 May 2020 17:38:09 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```
[ "n/a", "person", "traffic light", "fire hydrant", "street sign", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "bicycle", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "hat", "backpack", "umbrella", "shoe", "car", "eye glasses", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "motorcycle", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "plate", "wine glass", "cup", "fork", "knife", "airplane", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "bus", "donut", "cake", "chair", "couch", "potted plant", "bed", "mirror", "dining table", "window", "desk", "train", "toilet", "door", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "truck", "toaster", "sink", "refrigerator", "blender", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "boat", "toothbrush" ]
SenseTime/deformable-detr-single-scale-dc5
# Deformable DETR model with ResNet-50 backbone, single scale + dilation

Deformable DEtection TRansformer (DETR) single scale + dilation model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Zhu et al. and first released in [this repository](https://github.com/fundamentalvision/Deformable-DETR).

Disclaimer: The team releasing Deformable DETR did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.

The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/deformable_detr_architecture.png)

## Intended uses & limitations

You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=sensetime/deformable-detr) to look for all available Deformable DETR models.

### How to use

Here is how to use this model:

```python
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection
import torch
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr-single-scale-dc5")
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr-single-scale-dc5")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# convert outputs (bounding boxes and class logits) to COCO API
# let's only keep detections with score > 0.7
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.7)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```

Currently, both the feature extractor and model support PyTorch.

## Training data

The Deformable DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### BibTeX entry and citation info

```bibtex
@misc{https://doi.org/10.48550/arxiv.2010.04159,
  doi       = {10.48550/ARXIV.2010.04159},
  url       = {https://arxiv.org/abs/2010.04159},
  author    = {Zhu, Xizhou and Su, Weijie and Lu, Lewei and Li, Bin and Wang, Xiaogang and Dai, Jifeng},
  keywords  = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title     = {Deformable DETR: Deformable Transformers for End-to-End Object Detection},
  publisher = {arXiv},
  year      = {2020},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
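As a follow-up to the snippet in "How to use" above, here is a hedged sketch (not part of the original card) that draws the kept detections onto a copy of the image, reusing `image`, `model`, and `results` from the example:

```python
from PIL import ImageDraw

# Hedged sketch: visualize the kept detections on a copy of the image.
annotated = image.copy()
draw = ImageDraw.Draw(annotated)
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    x0, y0, x1, y1 = box.tolist()
    draw.rectangle((x0, y0, x1, y1), outline="red", width=3)
    draw.text((x0, max(0.0, y0 - 12)),
              f"{model.config.id2label[label.item()]}: {score.item():.2f}", fill="red")
annotated.save("detections.png")
```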
[ "n/a", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "n/a", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "n/a", "backpack", "umbrella", "n/a", "n/a", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "n/a", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "n/a", "dining table", "n/a", "n/a", "toilet", "n/a", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "n/a", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
SenseTime/deformable-detr-single-scale
# Deformable DETR model with ResNet-50 backbone, single scale

Deformable DEtection TRansformer (DETR), single scale model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Zhu et al. and first released in [this repository](https://github.com/fundamentalvision/Deformable-DETR).

Disclaimer: The team releasing Deformable DETR did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.

The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/deformable_detr_architecture.png)

## Intended uses & limitations

You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=sensetime/deformable-detr) to look for all available Deformable DETR models.

### How to use

Here is how to use this model:

```python
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection
import torch
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr-single-scale")
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr-single-scale")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# convert outputs (bounding boxes and class logits) to COCO API
# let's only keep detections with score > 0.7
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.7)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```

Currently, both the feature extractor and model support PyTorch.

## Training data

The Deformable DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### BibTeX entry and citation info

```bibtex
@misc{https://doi.org/10.48550/arxiv.2010.04159,
  doi       = {10.48550/ARXIV.2010.04159},
  url       = {https://arxiv.org/abs/2010.04159},
  author    = {Zhu, Xizhou and Su, Weijie and Lu, Lewei and Li, Bin and Wang, Xiaogang and Dai, Jifeng},
  keywords  = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title     = {Deformable DETR: Deformable Transformers for End-to-End Object Detection},
  publisher = {arXiv},
  year      = {2020},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
[ "n/a", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "n/a", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "n/a", "backpack", "umbrella", "n/a", "n/a", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "n/a", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "n/a", "dining table", "n/a", "n/a", "toilet", "n/a", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "n/a", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
SenseTime/deformable-detr-with-box-refine-two-stage
# Deformable DETR model with ResNet-50 backbone, with box refinement and two stage

Deformable DEtection TRansformer (DETR), with box refinement and two stage model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Zhu et al. and first released in [this repository](https://github.com/fundamentalvision/Deformable-DETR).

Disclaimer: The team releasing Deformable DETR did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.

The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/deformable_detr_architecture.png)

## Intended uses & limitations

You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=sensetime/deformable-detr) to look for all available Deformable DETR models.

### How to use

Here is how to use this model:

```python
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection
import torch
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr-with-box-refine-two-stage")
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr-with-box-refine-two-stage")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# convert outputs (bounding boxes and class logits) to COCO API
# let's only keep detections with score > 0.7
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.7)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```

Currently, both the feature extractor and model support PyTorch.

## Training data

The Deformable DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### BibTeX entry and citation info

```bibtex
@misc{https://doi.org/10.48550/arxiv.2010.04159,
  doi       = {10.48550/ARXIV.2010.04159},
  url       = {https://arxiv.org/abs/2010.04159},
  author    = {Zhu, Xizhou and Su, Weijie and Lu, Lewei and Li, Bin and Wang, Xiaogang and Dai, Jifeng},
  keywords  = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title     = {Deformable DETR: Deformable Transformers for End-to-End Object Detection},
  publisher = {arXiv},
  year      = {2020},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
[ "n/a", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "n/a", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "n/a", "backpack", "umbrella", "n/a", "n/a", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "n/a", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "n/a", "dining table", "n/a", "n/a", "toilet", "n/a", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "n/a", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
SenseTime/deformable-detr-with-box-refine
# Deformable DETR model with ResNet-50 backbone, with box refinement

Deformable DEtection TRansformer (DETR), with box refinement, trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Zhu et al. and first released in [this repository](https://github.com/fundamentalvision/Deformable-DETR).

Disclaimer: The team releasing Deformable DETR did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.

The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/deformable_detr_architecture.png)

## Intended uses & limitations

You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=sensetime/deformable-detr) to look for all available Deformable DETR models.

### How to use

Here is how to use this model:

```python
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection
import torch
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr-with-box-refine")
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr-with-box-refine")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# convert outputs (bounding boxes and class logits) to COCO API
# let's only keep detections with score > 0.7
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.7)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```

Currently, both the feature extractor and model support PyTorch.

## Training data

The Deformable DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2010.04159, doi = {10.48550/ARXIV.2010.04159}, url = {https://arxiv.org/abs/2010.04159}, author = {Zhu, Xizhou and Su, Weijie and Lu, Lewei and Li, Bin and Wang, Xiaogang and Dai, Jifeng}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Deformable DETR: Deformable Transformers for End-to-End Object Detection}, publisher = {arXiv}, year = {2020}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
[ "n/a", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "n/a", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "n/a", "backpack", "umbrella", "n/a", "n/a", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "n/a", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "n/a", "dining table", "n/a", "n/a", "toilet", "n/a", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "n/a", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
SenseTime/deformable-detr
# Deformable DETR model with ResNet-50 backbone

Deformable DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Zhu et al. and first released in [this repository](https://github.com/fundamentalvision/Deformable-DETR).

Disclaimer: The team releasing Deformable DETR did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.

The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/deformable_detr_architecture.png)

## Intended uses & limitations

You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=sensetime/deformable-detr) to look for all available Deformable DETR models.

### How to use

Here is how to use this model:

```python
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection
import torch
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# convert outputs (bounding boxes and class logits) to COCO API
# let's only keep detections with score > 0.7
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.7)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```

This should output:

```
Detected cat with confidence 0.856 at location [342.19, 24.3, 640.02, 372.25]
Detected remote with confidence 0.739 at location [40.79, 72.78, 176.76, 117.25]
Detected cat with confidence 0.859 at location [16.5, 52.84, 318.25, 470.78]
```

Currently, both the feature extractor and model support PyTorch.
## Training data The Deformable DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively. ### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2010.04159, doi = {10.48550/ARXIV.2010.04159}, url = {https://arxiv.org/abs/2010.04159}, author = {Zhu, Xizhou and Su, Weijie and Lu, Lewei and Li, Bin and Wang, Xiaogang and Dai, Jifeng}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Deformable DETR: Deformable Transformers for End-to-End Object Detection}, publisher = {arXiv}, year = {2020}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
[ "n/a", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "n/a", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "n/a", "backpack", "umbrella", "n/a", "n/a", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "n/a", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "n/a", "dining table", "n/a", "n/a", "toilet", "n/a", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "n/a", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
TahaDouaji/detr-doc-table-detection
# Model Card for detr-doc-table-detection # Model Details detr-doc-table-detection is a model trained to detect both **Bordered** and **Borderless** tables in documents, based on [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50). - **Developed by:** Taha Douaji - **Shared by [Optional]:** Taha Douaji - **Model type:** Object Detection - **Language(s) (NLP):** More information needed - **License:** More information needed - **Parent Model:** [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) - **Resources for more information:** - [Model Demo Space](https://huggingface.co/spaces/trevbeers/pdf-table-extraction) - [Associated Paper](https://arxiv.org/abs/2005.12872) # Uses ## Direct Use This model can be used for the task of object detection. ## Out-of-Scope Use The model should not be used to intentionally create hostile or alienating environments for people. # Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. # Training Details ## Training Data The model was trained on ICDAR2019 Table Dataset # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). # Citation **BibTeX:** ```bibtex @article{DBLP:journals/corr/abs-2005-12872, author = {Nicolas Carion and Francisco Massa and Gabriel Synnaeve and Nicolas Usunier and Alexander Kirillov and Sergey Zagoruyko}, title = {End-to-End Object Detection with Transformers}, journal = {CoRR}, volume = {abs/2005.12872}, year = {2020}, url = {https://arxiv.org/abs/2005.12872}, archivePrefix = {arXiv}, eprint = {2005.12872}, timestamp = {Thu, 28 May 2020 17:38:09 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` # Model Card Authors [optional] Taha Douaji in collaboration with Ezi Ozoani and the Hugging Face team # Model Card Contact More information needed # How to Get Started with the Model Use the code below to get started with the model. 
```python from transformers import DetrImageProcessor, DetrForObjectDetection import torch from PIL import Image import requests image = Image.open("IMAGE_PATH") processor = DetrImageProcessor.from_pretrained("TahaDouaji/detr-doc-table-detection") model = DetrForObjectDetection.from_pretrained("TahaDouaji/detr-doc-table-detection") inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) # convert outputs (bounding boxes and class logits) to COCO API # let's only keep detections with score > 0.9 target_sizes = torch.tensor([image.size[::-1]]) results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.9)[0] for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): box = [round(i, 2) for i in box.tolist()] print( f"Detected {model.config.id2label[label.item()]} with confidence " f"{round(score.item(), 3)} at location {box}" ) ```
[ "table", "table" ]
hustvl/yolos-tiny
# YOLOS (tiny-sized) model YOLOS model fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Fang et al. and first released in [this repository](https://github.com/hustvl/YOLOS). Disclaimer: The team releasing YOLOS did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN). The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. ## Intended uses & limitations You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=hustvl/yolos) to look for all available YOLOS models. ### How to use Here is how to use this model: ```python from transformers import YolosImageProcessor, YolosForObjectDetection from PIL import Image import torch import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) model = YolosForObjectDetection.from_pretrained('hustvl/yolos-tiny') image_processor = YolosImageProcessor.from_pretrained("hustvl/yolos-tiny") inputs = image_processor(images=image, return_tensors="pt") outputs = model(**inputs) # model predicts bounding boxes and corresponding COCO classes logits = outputs.logits bboxes = outputs.pred_boxes # print results target_sizes = torch.tensor([image.size[::-1]]) results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0] for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): box = [round(i, 2) for i in box.tolist()] print( f"Detected {model.config.id2label[label.item()]} with confidence " f"{round(score.item(), 3)} at location {box}" ) ``` Currently, both the feature extractor and model support PyTorch. ## Training data The YOLOS model was pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet2012) and fine-tuned on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively. ### Training The model was pre-trained for 300 epochs on ImageNet-1k and fine-tuned for 300 epochs on COCO. ## Evaluation results This model achieves an AP (average precision) of **28.7** on COCO 2017 validation. For more details regarding evaluation results, we refer to the original paper. 
### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-00666, author = {Yuxin Fang and Bencheng Liao and Xinggang Wang and Jiemin Fang and Jiyang Qi and Rui Wu and Jianwei Niu and Wenyu Liu}, title = {You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection}, journal = {CoRR}, volume = {abs/2106.00666}, year = {2021}, url = {https://arxiv.org/abs/2106.00666}, eprinttype = {arXiv}, eprint = {2106.00666}, timestamp = {Fri, 29 Apr 2022 19:49:16 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-00666.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
[ "n/a", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "n/a", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "n/a", "backpack", "umbrella", "n/a", "n/a", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "n/a", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "n/a", "dining table", "n/a", "n/a", "toilet", "n/a", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "n/a", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
hustvl/yolos-base
# YOLOS (base-sized) model YOLOS model fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Fang et al. and first released in [this repository](https://github.com/hustvl/YOLOS). Disclaimer: The team releasing YOLOS did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN). The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. ## Intended uses & limitations You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=hustvl/yolos) to look for all available YOLOS models. ### How to use Here is how to use this model: ```python from transformers import YolosFeatureExtractor, YolosForObjectDetection from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-base') model = YolosForObjectDetection.from_pretrained('hustvl/yolos-base') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) # model predicts bounding boxes and corresponding COCO classes logits = outputs.logits bboxes = outputs.pred_boxes ``` Currently, both the feature extractor and model support PyTorch. ## Training data The YOLOS model was pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet2012) and fine-tuned on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively. ### Training The model was pre-trained for 1000 epochs on ImageNet-1k and fine-tuned for 150 epochs on COCO. ## Evaluation results This model achieves an AP (average precision) of **42.0** on COCO 2017 validation. For more details regarding evaluation results, we refer to the original paper. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-00666, author = {Yuxin Fang and Bencheng Liao and Xinggang Wang and Jiemin Fang and Jiyang Qi and Rui Wu and Jianwei Niu and Wenyu Liu}, title = {You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection}, journal = {CoRR}, volume = {abs/2106.00666}, year = {2021}, url = {https://arxiv.org/abs/2106.00666}, eprinttype = {arXiv}, eprint = {2106.00666}, timestamp = {Fri, 29 Apr 2022 19:49:16 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-00666.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
[ "n/a", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "n/a", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "n/a", "backpack", "umbrella", "n/a", "n/a", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "n/a", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "n/a", "dining table", "n/a", "n/a", "toilet", "n/a", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "n/a", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
hustvl/yolos-small-dwr
# YOLOS (small-sized, fast model scaling) model YOLOS model fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Fang et al. and first released in [this repository](https://github.com/hustvl/YOLOS). Disclaimer: The team releasing YOLOS did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN). The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. ## Intended uses & limitations You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=hustvl/yolos) to look for all available YOLOS models. ### How to use Here is how to use this model: ```python from transformers import YolosFeatureExtractor, YolosForObjectDetection from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-small-dwr') model = YolosForObjectDetection.from_pretrained('hustvl/yolos-small-dwr') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) # model predicts bounding boxes and corresponding COCO classes logits = outputs.logits bboxes = outputs.pred_boxes ``` Currently, both the feature extractor and model support PyTorch. ## Training data The YOLOS model was pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet2012) and fine-tuned on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively. ### Training The model was pre-trained for 300 epochs on ImageNet-1k and fine-tuned for 150 epochs on COCO. ## Evaluation results This model achieves an AP (average precision) of **37.6** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper. 
### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-00666, author = {Yuxin Fang and Bencheng Liao and Xinggang Wang and Jiemin Fang and Jiyang Qi and Rui Wu and Jianwei Niu and Wenyu Liu}, title = {You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection}, journal = {CoRR}, volume = {abs/2106.00666}, year = {2021}, url = {https://arxiv.org/abs/2106.00666}, eprinttype = {arXiv}, eprint = {2106.00666}, timestamp = {Fri, 29 Apr 2022 19:49:16 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-00666.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
[ "n/a", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "n/a", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "n/a", "backpack", "umbrella", "n/a", "n/a", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "n/a", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "n/a", "dining table", "n/a", "n/a", "toilet", "n/a", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "n/a", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
hustvl/yolos-small-300
# YOLOS (small-sized) model (300 pre-train epochs) YOLOS model fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Fang et al. and first released in [this repository](https://github.com/hustvl/YOLOS). Disclaimer: The team releasing YOLOS did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN). The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. ## Intended uses & limitations You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=hustvl/yolos) to look for all available YOLOS models. ### How to use Here is how to use this model: ```python from transformers import YolosImageProcessor, YolosForObjectDetection from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) image_processor = YolosImageProcessor.from_pretrained('hustvl/yolos-small-300') model = YolosForObjectDetection.from_pretrained('hustvl/yolos-small-300') inputs = image_processor(images=image, return_tensors="pt") outputs = model(**inputs) # model predicts bounding boxes and corresponding COCO classes logits = outputs.logits bboxes = outputs.pred_boxes ``` Currently, both the image processor and model support PyTorch. ## Training data The YOLOS model was pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet2012) and fine-tuned on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively. ### Training The model was pre-trained for 300 epochs on ImageNet-1k and fine-tuned for 150 epochs on COCO. ## Evaluation results This model achieves an AP (average precision) of **36.1** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper. 
### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-00666, author = {Yuxin Fang and Bencheng Liao and Xinggang Wang and Jiemin Fang and Jiyang Qi and Rui Wu and Jianwei Niu and Wenyu Liu}, title = {You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection}, journal = {CoRR}, volume = {abs/2106.00666}, year = {2021}, url = {https://arxiv.org/abs/2106.00666}, eprinttype = {arXiv}, eprint = {2106.00666}, timestamp = {Fri, 29 Apr 2022 19:49:16 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-00666.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
[ "n/a", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "n/a", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "n/a", "backpack", "umbrella", "n/a", "n/a", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "n/a", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "n/a", "dining table", "n/a", "n/a", "toilet", "n/a", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "n/a", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
nickmuchi/yolos-small-finetuned-masks
# YOLOS (small-sized) model

The original YOLOS model was fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Fang et al. and first released in [this repository](https://github.com/hustvl/YOLOS).

This model was further fine-tuned on the [face mask dataset](https://www.kaggle.com/datasets/andrewmvd/face-mask-detection) from Kaggle. The dataset consists of 853 images of people with annotations categorised as "with mask", "without mask" and "mask not worn correctly". The model was trained for 200 epochs on a single GPU using Google Colab.

## Model description

YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN).

## Intended uses & limitations

You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=hustvl/yolos) to look for all available YOLOS models.

### How to use

Here is how to use this model:

```python
from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image
import requests

url = 'https://drive.google.com/uc?id=1VwYLbGak5c-2P5qdvfWVOeg7DTDYPbro'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = YolosFeatureExtractor.from_pretrained('nickmuchi/yolos-small-finetuned-masks')
model = YolosForObjectDetection.from_pretrained('nickmuchi/yolos-small-finetuned-masks')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# model predicts bounding boxes and corresponding face mask detection classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```

Currently, both the feature extractor and model support PyTorch.

## Training data

The YOLOS model was pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet2012) and fine-tuned on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.

### Training

This model was fine-tuned for 200 epochs on the [face mask dataset](https://www.kaggle.com/datasets/andrewmvd/face-mask-detection).

## Evaluation results

This model achieves an AP (average precision) of **53.2** on the evaluation set.
IoU metric: bbox

| Metric | IoU | Area | maxDets | Value |
|--------|-----|------|---------|-------|
| Average Precision (AP) | 0.50:0.95 | all | 100 | 0.273 |
| Average Precision (AP) | 0.50 | all | 100 | 0.532 |
| Average Precision (AP) | 0.75 | all | 100 | 0.257 |
| Average Precision (AP) | 0.50:0.95 | small | 100 | 0.220 |
| Average Precision (AP) | 0.50:0.95 | medium | 100 | 0.341 |
| Average Precision (AP) | 0.50:0.95 | large | 100 | 0.545 |
| Average Recall (AR) | 0.50:0.95 | all | 1 | 0.154 |
| Average Recall (AR) | 0.50:0.95 | all | 10 | 0.361 |
| Average Recall (AR) | 0.50:0.95 | all | 100 | 0.415 |
| Average Recall (AR) | 0.50:0.95 | small | 100 | 0.349 |
| Average Recall (AR) | 0.50:0.95 | medium | 100 | 0.469 |
| Average Recall (AR) | 0.50:0.95 | large | 100 | 0.584 |
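The "How to use" snippet above stops at raw `logits` and `pred_boxes`. Continuing from it, here is a short sketch of turning those outputs into thresholded detections with the feature extractor's post-processing; the 0.5 threshold is an illustrative choice, not a recommendation from the model author.

```python
import torch

# Continues the "How to use" snippet above (reuses image, outputs, feature_extractor, model).
target_sizes = torch.tensor([image.size[::-1]])
results = feature_extractor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(f"{model.config.id2label[label.item()]}: {round(score.item(), 3)} at {box}")
```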
[ "masks", "mask_worn_incorrectly", "with_mask", "without_mask" ]
nickmuchi/yolos-small-rego-plates-detection
# YOLOS (small-sized) model

The original YOLOS model was fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Fang et al. and first released in [this repository](https://github.com/hustvl/YOLOS).

This model was further fine-tuned on the [license plate dataset](https://www.kaggle.com/datasets/andrewmvd/car-plate-detection) from Kaggle. The dataset consists of 735 annotated images with categories "vehicle" and "license-plate". The model was trained for 200 epochs on a single GPU using Google Colab.

## Model description

YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN).

## Intended uses & limitations

You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=hustvl/yolos) to look for all available YOLOS models.

### How to use

Here is how to use this model:

```python
from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image
import requests

url = 'https://drive.google.com/uc?id=1p9wJIqRz3W50e2f_A0D8ftla8hoXz4T5'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = YolosFeatureExtractor.from_pretrained('nickmuchi/yolos-small-rego-plates-detection')
model = YolosForObjectDetection.from_pretrained('nickmuchi/yolos-small-rego-plates-detection')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# model predicts bounding boxes and corresponding license plate detection classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```

Currently, both the feature extractor and model support PyTorch.

## Training data

The YOLOS model was pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet2012) and fine-tuned on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.

### Training

This model was fine-tuned for 200 epochs on the [license plate dataset](https://www.kaggle.com/datasets/andrewmvd/car-plate-detection).

## Evaluation results

This model achieves an AP (average precision) of **47.9** on the evaluation set.

IoU metric: bbox

| Metric | IoU | Area | maxDets | Value |
|--------|-----|------|---------|-------|
| Average Precision (AP) | 0.50:0.95 | all | 100 | 0.479 |
| Average Precision (AP) | 0.50 | all | 100 | 0.752 |
| Average Precision (AP) | 0.75 | all | 100 | 0.555 |
| Average Precision (AP) | 0.50:0.95 | small | 100 | 0.147 |
| Average Precision (AP) | 0.50:0.95 | medium | 100 | 0.420 |
| Average Precision (AP) | 0.50:0.95 | large | 100 | 0.804 |
| Average Recall (AR) | 0.50:0.95 | all | 1 | 0.437 |
| Average Recall (AR) | 0.50:0.95 | all | 10 | 0.641 |
| Average Recall (AR) | 0.50:0.95 | all | 100 | 0.676 |
| Average Recall (AR) | 0.50:0.95 | small | 100 | 0.268 |
| Average Recall (AR) | 0.50:0.95 | medium | 100 | 0.641 |
| Average Recall (AR) | 0.50:0.95 | large | 100 | 0.870 |
[ "name", "license-plates", "vehicle" ]
nielsr/detr-table-detection
Hi,

Please don't use this model anymore; it only worked for a specific branch of mine. From now on it's recommended to use https://huggingface.co/microsoft/table-transformer-detection from Transformers.

Thanks, have a great day
[ "table", "table rotated" ]
nielsr/detr-table-structure-recognition
Hi,

Please don't use this model anymore; it only worked for a specific branch of mine. From now on it's recommended to use https://huggingface.co/microsoft/table-transformer-structure-recognition from Transformers.

Thanks, have a great day
[ "table", "table column", "table row", "table column header", "table projected row header", "table spanning cell" ]
microsoft/conditional-detr-resnet-50
# Conditional DETR model with ResNet-50 backbone Conditional DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Meng et al. and first released in [this repository](https://github.com/Atten4Vis/ConditionalDETR). ## Model description The recently-developed DETR approach applies the transformer encoder and decoder architecture to object detection and achieves promising performance. In this paper, we handle the critical issue, slow training convergence, and present a conditional cross-attention mechanism for fast DETR training. Our approach is motivated by that the cross-attention in DETR relies highly on the content embeddings for localizing the four extremities and predicting the box, which increases the need for high-quality content embeddings and thus the training difficulty. Our approach, named conditional DETR, learns a conditional spatial query from the decoder embedding for decoder multi-head cross-attention. The benefit is that through the conditional spatial query, each cross-attention head is able to attend to a band containing a distinct region, e.g., one object extremity or a region inside the object box. This narrows down the spatial range for localizing the distinct regions for object classification and box regression, thus relaxing the dependence on the content embeddings and easing the training. Empirical results show that conditional DETR converges 6.7× faster for the backbones R50 and R101 and 10× faster for stronger backbones DC5-R50 and DC5-R101. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/conditional_detr_curve.jpg) ## Intended uses & limitations You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=microsoft/conditional-detr) to look for all available Conditional DETR models. ### How to use Here is how to use this model: ```python from transformers import AutoImageProcessor, ConditionalDetrForObjectDetection import torch from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50") model = ConditionalDetrForObjectDetection.from_pretrained("microsoft/conditional-detr-resnet-50") inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) # convert outputs (bounding boxes and class logits) to COCO API # let's only keep detections with score > 0.7 target_sizes = torch.tensor([image.size[::-1]]) results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.7)[0] for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): box = [round(i, 2) for i in box.tolist()] print( f"Detected {model.config.id2label[label.item()]} with confidence " f"{round(score.item(), 3)} at location {box}" ) ``` This should output: ``` Detected remote with confidence 0.833 at location [38.31, 72.1, 177.63, 118.45] Detected cat with confidence 0.831 at location [9.2, 51.38, 321.13, 469.0] Detected cat with confidence 0.804 at location [340.3, 16.85, 642.93, 370.95] ``` Currently, both the feature extractor and model support PyTorch. 
## Training data The Conditional DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively. ### BibTeX entry and citation info ```bibtex @inproceedings{MengCFZLYS021, author = {Depu Meng and Xiaokang Chen and Zejia Fan and Gang Zeng and Houqiang Li and Yuhui Yuan and Lei Sun and Jingdong Wang}, title = {Conditional {DETR} for Fast Training Convergence}, booktitle = {2021 {IEEE/CVF} International Conference on Computer Vision, {ICCV} 2021, Montreal, QC, Canada, October 10-17, 2021}, } ```
[ "n/a", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "n/a", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "n/a", "backpack", "umbrella", "n/a", "n/a", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "n/a", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "n/a", "dining table", "n/a", "n/a", "toilet", "n/a", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "n/a", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
SalML/DETR-table-detection
# The models are taken from https://github.com/microsoft/table-transformer/

# Original model now on MSFT org: https://huggingface.co/microsoft/table-transformer-detection

I have built a HuggingFace Space: https://huggingface.co/spaces/SalML/TableTransformer2CSV

It runs OCR on the table-transformer output image to obtain a downloadable CSV table.
[ "table", "table rotated" ]
SalML/DETR-table-structure-recognition
# The models are taken from https://github.com/microsoft/table-transformer/

# Original model now on MSFT org: https://huggingface.co/microsoft/table-transformer-structure-recognition

I have built a HuggingFace Space: https://huggingface.co/spaces/SalML/TableTransformer2CSV

It runs OCR on the table-transformer output image to obtain a downloadable CSV table.
[ "table", "table column", "table row", "table column header", "table projected row header", "table spanning cell" ]
microsoft/table-transformer-detection
# Table Transformer (fine-tuned for Table Detection) Table Transformer (DETR) model trained on PubTables1M. It was introduced in the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Smock et al. and first released in [this repository](https://github.com/microsoft/table-transformer). Disclaimer: The team releasing Table Transformer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Table Transformer is equivalent to [DETR](https://huggingface.co/docs/transformers/model_doc/detr), a Transformer-based object detection model. Note that the authors decided to use the "normalize before" setting of DETR, which means that layernorm is applied before self- and cross-attention. ## Usage You can use the raw model for detecting tables in documents. See the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/table-transformer) for more info.
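For a quick start, here is a minimal inference sketch using the standard Transformers object-detection API; the input file name and the 0.7 threshold are illustrative assumptions, not values from the model authors.

```python
from transformers import AutoImageProcessor, TableTransformerForObjectDetection
from PIL import Image
import torch

processor = AutoImageProcessor.from_pretrained("microsoft/table-transformer-detection")
model = TableTransformerForObjectDetection.from_pretrained("microsoft/table-transformer-detection")

image = Image.open("document_page.png").convert("RGB")  # illustrative file name
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# keep detections with score > 0.7 (illustrative threshold)
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(f"{model.config.id2label[label.item()]}: {round(score.item(), 3)} at {box}")
```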
[ "table", "table rotated" ]
microsoft/table-transformer-structure-recognition
# Table Transformer (fine-tuned for Table Structure Recognition) Table Transformer (DETR) model trained on PubTables1M. It was introduced in the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Smock et al. and first released in [this repository](https://github.com/microsoft/table-transformer). Disclaimer: The team releasing Table Transformer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Table Transformer is equivalent to [DETR](https://huggingface.co/docs/transformers/model_doc/detr), a Transformer-based object detection model. Note that the authors decided to use the "normalize before" setting of DETR, which means that layernorm is applied before self- and cross-attention. ## Usage You can use the raw model for detecting the structure (like rows, columns) in tables. See the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/table-transformer) for more info.
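As a quick start, here is a minimal inference sketch using the standard Transformers object-detection API. Structure recognition is usually run on an image already cropped to a single table; the file name and the 0.6 threshold are illustrative assumptions.

```python
from transformers import AutoImageProcessor, TableTransformerForObjectDetection
from PIL import Image
import torch

processor = AutoImageProcessor.from_pretrained("microsoft/table-transformer-structure-recognition")
model = TableTransformerForObjectDetection.from_pretrained("microsoft/table-transformer-structure-recognition")

table_image = Image.open("table_crop.png").convert("RGB")  # a pre-cropped table region (illustrative)
inputs = processor(images=table_image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# rows, columns, headers etc. come back as ordinary detections
target_sizes = torch.tensor([table_image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.6, target_sizes=target_sizes)[0]
for score, label in zip(results["scores"], results["labels"]):
    print(f"{model.config.id2label[label.item()]}: {round(score.item(), 3)}")
```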
[ "table", "table column", "table row", "table column header", "table projected row header", "table spanning cell" ]
Narsil/layoutlm-funsd
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# layoutlm-funsd

This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset. It achieves the following results on the evaluation set:

- Loss: 1.0045
- Answer: {'precision': 0.7348314606741573, 'recall': 0.8084054388133498, 'f1': 0.7698646262507357, 'number': 809}
- Header: {'precision': 0.44285714285714284, 'recall': 0.5210084033613446, 'f1': 0.47876447876447875, 'number': 119}
- Question: {'precision': 0.8211009174311926, 'recall': 0.8403755868544601, 'f1': 0.8306264501160092, 'number': 1065}
- Overall Precision: 0.7599
- Overall Recall: 0.8083
- Overall F1: 0.7866
- Overall Accuracy: 0.8106

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP

## Deploy Model with Inference Endpoints

Before we can get started, make sure you meet all of the following requirements:

1. An Organization/User with an active plan and *WRITE* access to the model repository.
2. Access to the UI: [https://ui.endpoints.huggingface.co](https://ui.endpoints.huggingface.co/endpoints)

### 1. Deploy LayoutLM and Send requests

In this tutorial, you will learn how to deploy a [LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm) to [Hugging Face Inference Endpoints](https://huggingface.co/inference-endpoints) and how you can integrate it via an API into your products. This tutorial does not cover how to create the custom handler for inference.
If you want to learn how to create a custom Handler for Inference Endpoints, you can either checkout the [documentation](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) or go through [“Custom Inference with Hugging Face Inference Endpoints”](https://www.philschmid.de/custom-inference-handler).

We are going to deploy [philschmid/layoutlm-funsd](https://huggingface.co/philschmid/layoutlm-funsd) which implements the following `handler.py`:

```python
from typing import Dict, List, Any
from transformers import LayoutLMForTokenClassification, LayoutLMv2Processor
import torch
from subprocess import run

# install tesseract-ocr and pytesseract
run("apt install -y tesseract-ocr", shell=True, check=True)
run("pip install pytesseract", shell=True, check=True)

# helper function to unnormalize bboxes for drawing onto the image
def unnormalize_box(bbox, width, height):
    return [
        width * (bbox[0] / 1000),
        height * (bbox[1] / 1000),
        width * (bbox[2] / 1000),
        height * (bbox[3] / 1000),
    ]

# set device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


class EndpointHandler:
    def __init__(self, path=""):
        # load model and processor from path
        self.model = LayoutLMForTokenClassification.from_pretrained(path).to(device)
        self.processor = LayoutLMv2Processor.from_pretrained(path)

    def __call__(self, data: Dict[str, bytes]) -> Dict[str, List[Any]]:
        """
        Args:
            data (:obj:): includes the deserialized image file as PIL.Image
        """
        # process input
        image = data.pop("inputs", data)

        # process image
        encoding = self.processor(image, return_tensors="pt")

        # run prediction
        with torch.inference_mode():
            outputs = self.model(
                input_ids=encoding.input_ids.to(device),
                bbox=encoding.bbox.to(device),
                attention_mask=encoding.attention_mask.to(device),
                token_type_ids=encoding.token_type_ids.to(device),
            )
            predictions = outputs.logits.softmax(-1)

        # post process output
        result = []
        for item, inp_ids, bbox in zip(
            predictions.squeeze(0).cpu(), encoding.input_ids.squeeze(0).cpu(), encoding.bbox.squeeze(0).cpu()
        ):
            label = self.model.config.id2label[int(item.argmax().cpu())]
            if label == "O":
                continue
            score = item.max().item()
            text = self.processor.tokenizer.decode(inp_ids)
            bbox = unnormalize_box(bbox.tolist(), image.width, image.height)
            result.append({"label": label, "score": score, "text": text, "bbox": bbox})
        return {"predictions": result}
```

### 2. Send HTTP request using Python

Hugging Face Inference Endpoints can directly work with binary data, which means that we can send the image of our document directly to the endpoint. We are going to use `requests` to send our requests (make sure you have it installed: `pip install requests`).

```python
import json
import requests as r
import mimetypes

ENDPOINT_URL = ""  # url of your endpoint
HF_TOKEN = ""  # organization token where you deployed your endpoint

def predict(path_to_image: str = None):
    with open(path_to_image, "rb") as i:
        b = i.read()
    headers = {
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": mimetypes.guess_type(path_to_image)[0],
    }
    response = r.post(ENDPOINT_URL, headers=headers, data=b)
    return response.json()

prediction = predict(path_to_image="path_to_your_image.png")
print(prediction)
# {'predictions': [{'label': 'I-ANSWER', 'score': 0.4823932945728302, 'text': '[CLS]', 'bbox': [0.0, 0.0, 0.0, 0.0]}, {'label': 'B-HEADER', 'score': 0.992474377155304, 'text': 'your', 'bbox': [1712.529, 181.203, 1859.949, 228.88799999999998]},
```

### 3. Draw result on image
To get a better understanding of what the model predicted, you can also draw the predictions on the provided image.

```python
from PIL import Image, ImageDraw, ImageFont

# draw results on image
def draw_result(path_to_image, result):
    image = Image.open(path_to_image)
    label2color = {
        "B-HEADER": "blue",
        "B-QUESTION": "red",
        "B-ANSWER": "green",
        "I-HEADER": "blue",
        "I-QUESTION": "red",
        "I-ANSWER": "green",
    }

    # draw predictions over the image
    draw = ImageDraw.Draw(image)
    font = ImageFont.load_default()
    for res in result:
        draw.rectangle(res["bbox"], outline="black")
        draw.rectangle(res["bbox"], outline=label2color[res["label"]])
        draw.text((res["bbox"][0] + 10, res["bbox"][1] - 10), text=res["label"], fill=label2color[res["label"]], font=font)
    return image

draw_result("path_to_your_image.png", prediction["predictions"])
```
[ "o", "b-header", "i-header", "b-question", "i-question", "b-answer", "i-answer" ]
valentinafeve/yolos-fashionpedia
This is a fine-tuned object detection model for fashion.

For more details of the implementation you can check the source code [here](https://github.com/valntinaf/fine_tunning_YOLOS_for_fashion).

The dataset used for its training is available [here](https://huggingface.co/datasets/detection-datasets/fashionpedia).

This model supports the following categories:

CATS = ['shirt, blouse', 'top, t-shirt, sweatshirt', 'sweater', 'cardigan', 'jacket', 'vest', 'pants', 'shorts', 'skirt', 'coat', 'dress', 'jumpsuit', 'cape', 'glasses', 'hat', 'headband, head covering, hair accessory', 'tie', 'glove', 'watch', 'belt', 'leg warmer', 'tights, stockings', 'sock', 'shoe', 'bag, wallet', 'scarf', 'umbrella', 'hood', 'collar', 'lapel', 'epaulette', 'sleeve', 'pocket', 'neckline', 'buckle', 'zipper', 'applique', 'bead', 'bow', 'flower', 'fringe', 'ribbon', 'rivet', 'ruffle', 'sequin', 'tassel']

![image](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*q8TTgxX_gf6vRe5AJN2r4g.png)
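A minimal hedged inference sketch for this checkpoint follows; the file name and the 0.5 threshold are illustrative, and if the repository does not ship a preprocessor config, the feature extractor can be loaded from the base `hustvl/yolos-small` instead.

```python
from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image
import torch

feature_extractor = YolosFeatureExtractor.from_pretrained('valentinafeve/yolos-fashionpedia')
model = YolosForObjectDetection.from_pretrained('valentinafeve/yolos-fashionpedia')

image = Image.open('outfit.jpg').convert('RGB')  # illustrative file name
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

target_sizes = torch.tensor([image.size[::-1]])
results = feature_extractor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {round(score.item(), 3)}")
```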
[ "shirt, blouse", "top, t-shirt, sweatshirt", "sweater", "cardigan", "jacket", "vest", "pants", "shorts", "skirt", "coat", "dress", "jumpsuit", "cape", "glasses", "hat", "headband, head covering, hair accessory", "tie", "glove", "watch", "belt", "leg warmer", "tights, stockings", "sock", "shoe", "bag, wallet", "scarf", "umbrella", "hood", "collar", "lapel", "epaulette", "sleeve", "pocket", "neckline", "buckle", "zipper", "applique", "bead", "bow", "flower", "fringe", "ribbon", "rivet", "ruffle", "sequin", "tassel" ]
Rahul-2022/detr-base-sroie
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# detr-base-sroie

This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the SROIE dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
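Since the card lacks usage details, here is a minimal hedged inference sketch, assuming the model follows the standard DETR API of its base checkpoint; the file name and the 0.7 threshold are illustrative.

```python
from transformers import AutoImageProcessor, DetrForObjectDetection
from PIL import Image
import torch

processor = AutoImageProcessor.from_pretrained("Rahul-2022/detr-base-sroie")
model = DetrForObjectDetection.from_pretrained("Rahul-2022/detr-base-sroie")

image = Image.open("receipt.png").convert("RGB")  # illustrative file name
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(i, 2) for i in box.tolist()])
```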
[ "other", "address", "date", "company", "total", "line_total", "line_description" ]
davanstrien/detr-resnet-50_fine_tuned_trade_dir
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# detr-resnet-50_fine_tuned_trade_dir

This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unspecified dataset of trade directory pages.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
[ "image", "main heading (caps)", "page header (trades)", "running heads", "section title", "text box" ]
Narsil/layoutlmv3-finetuned-funsd
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv3-finetuned-funsd This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the nielsr/funsd-layoutlmv3 dataset. It achieves the following results on the evaluation set: - Loss: 1.1164 - Precision: 0.9026 - Recall: 0.913 - F1: 0.9078 - Accuracy: 0.8330 The script for training can be found here: https://github.com/huggingface/transformers/tree/main/examples/research_projects/layoutlmv3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 10.0 | 100 | 0.5238 | 0.8366 | 0.886 | 0.8606 | 0.8410 | | No log | 20.0 | 200 | 0.6930 | 0.8751 | 0.8965 | 0.8857 | 0.8322 | | No log | 30.0 | 300 | 0.7784 | 0.8902 | 0.908 | 0.8990 | 0.8414 | | No log | 40.0 | 400 | 0.9056 | 0.8916 | 0.905 | 0.8983 | 0.8364 | | 0.2429 | 50.0 | 500 | 1.0016 | 0.8954 | 0.9075 | 0.9014 | 0.8298 | | 0.2429 | 60.0 | 600 | 1.0097 | 0.8899 | 0.897 | 0.8934 | 0.8294 | | 0.2429 | 70.0 | 700 | 1.0722 | 0.9035 | 0.9085 | 0.9060 | 0.8315 | | 0.2429 | 80.0 | 800 | 1.0884 | 0.8905 | 0.9105 | 0.9004 | 0.8269 | | 0.2429 | 90.0 | 900 | 1.1292 | 0.8938 | 0.909 | 0.9013 | 0.8279 | | 0.0098 | 100.0 | 1000 | 1.1164 | 0.9026 | 0.913 | 0.9078 | 0.8330 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.0.0 - Tokenizers 0.11.6
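The card does not include an inference example, so here is a minimal hedged sketch for token classification. It assumes the checkpoint ships a processor config (otherwise load the processor from `microsoft/layoutlmv3-base`); `apply_ocr=True` requires Tesseract and pytesseract, and the file name is illustrative.

```python
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification
from PIL import Image
import torch

processor = AutoProcessor.from_pretrained("Narsil/layoutlmv3-finetuned-funsd", apply_ocr=True)
model = LayoutLMv3ForTokenClassification.from_pretrained("Narsil/layoutlmv3-finetuned-funsd")

image = Image.open("form.png").convert("RGB")  # illustrative file name
encoding = processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)

predictions = outputs.logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])  # one BIO tag per token
```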
[ "o", "b-header", "i-header", "b-question", "i-question", "b-answer", "i-answer" ]
Narsil/layoutlmv2-finetuned-funsd
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2-finetuned-funsd This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the funsd dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.9.0.dev0 - Pytorch 1.8.0+cu101 - Datasets 1.9.0 - Tokenizers 0.10.3
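As with the v3 card above, no usage example is given. A sketch of the processor's built-in OCR path follows; it assumes `detectron2` and `pytesseract` are installed (LayoutLMv2 requires both) and `<path/to/form.png>` is a placeholder:

```python
import torch
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForTokenClassification

# apply_ocr defaults to True, so the processor runs Tesseract on the raw image
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForTokenClassification.from_pretrained("Narsil/layoutlmv2-finetuned-funsd")

image = Image.open("<path/to/form.png>").convert("RGB")
encoding = processor(image, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**encoding)

# One label per subword token, special tokens included
predictions = outputs.logits.argmax(-1).squeeze().tolist()
tokens = processor.tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred])
```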
[ "other", "b-header", "i-header", "b-question", "i-question", "b-answer", "i-answer" ]
jozhang97/deta-resnet-50
# Detection Transformers with Assignment By [Jeffrey Ouyang-Zhang](https://jozhang97.github.io/), [Jang Hyun Cho](https://sites.google.com/view/janghyuncho/), [Xingyi Zhou](https://www.cs.utexas.edu/~zhouxy/), [Philipp Krähenbühl](http://www.philkr.net/) From the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137). **TL; DR.** **De**tection **T**ransformers with **A**ssignment (DETA) re-introduce IoU assignment and NMS for transformer-based detectors. DETA trains and tests comparably fast to Deformable-DETR and converges much faster (50.2 mAP in 12 epochs on COCO).
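The card includes no code; here is a minimal inference sketch, assuming a transformers version with native DETA support (4.27 or later) and the `jozhang97/deta-resnet-50` checkpoint named above:

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, DetaForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("jozhang97/deta-resnet-50")
model = DetaForObjectDetection.from_pretrained("jozhang97/deta-resnet-50")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Rescale boxes to the original image size and keep confident detections
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.5)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {round(score.item(), 3)} at {[round(x, 2) for x in box.tolist()]}")
```

The same snippet should work for the other DETA checkpoints in this table by swapping the checkpoint id.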
[ "n/a", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "n/a", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "n/a", "backpack", "umbrella", "n/a", "n/a", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "n/a", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "n/a", "dining table", "n/a", "n/a", "toilet", "n/a", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "n/a", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
liaujianjie/detr-resnet-50
Fork of [DETR (End-to-End Object Detection) model with ResNet-50 backbone](https://huggingface.co/facebook/detr-resnet-50). Just messing around.
[ "n/a", "person", "traffic light", "fire hydrant", "street sign", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "bicycle", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "hat", "backpack", "umbrella", "shoe", "car", "eye glasses", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "motorcycle", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "plate", "wine glass", "cup", "fork", "knife", "airplane", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "bus", "donut", "cake", "chair", "couch", "potted plant", "bed", "mirror", "dining table", "window", "desk", "train", "toilet", "door", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "truck", "toaster", "sink", "refrigerator", "blender", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "boat", "toothbrush" ]
emre/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# detr-resnet-50_finetuned_cppe5

This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Step | Training Loss |
|:----:|:-------------:|
| 300 | 2.162200 |
| 600 | 2.011000 |
| 1200 | 1.779500 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
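No usage example is provided; a minimal inference sketch for this checkpoint follows (`<path/to/image>` is a placeholder, and the 0.5 score threshold is an arbitrary choice):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

ckpt = "emre/detr-resnet-50_finetuned_cppe5"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModelForObjectDetection.from_pretrained(ckpt)

image = Image.open("<path/to/image>").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Rescale predicted boxes to pixel coordinates of the input image
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.5)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {round(score.item(), 3)} at {[round(x, 2) for x in box.tolist()]}")
```

The same snippet applies to the other detr-resnet-50_finetuned_cppe5 checkpoints listed here by swapping `ckpt`.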
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
deeplearnersk/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
Shebrain/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
devonho/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu117 - Datasets 2.8.0 - Tokenizers 0.13.2
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
jozhang97/deta-swin-large
# Detection Transformers with Assignment By [Jeffrey Ouyang-Zhang](https://jozhang97.github.io/), [Jang Hyun Cho](https://sites.google.com/view/janghyuncho/), [Xingyi Zhou](https://www.cs.utexas.edu/~zhouxy/), [Philipp Krähenbühl](http://www.philkr.net/) From the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137). **TL; DR.** **De**tection **T**ransformers with **A**ssignment (DETA) re-introduce IoU assignment and NMS for transformer-based detectors. DETA trains and tests comparably fast to Deformable-DETR and converges much faster (50.2 mAP in 12 epochs on COCO).
[ "n/a", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "n/a", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "n/a", "backpack", "umbrella", "n/a", "n/a", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "n/a", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "n/a", "dining table", "n/a", "n/a", "toilet", "n/a", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "n/a", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
jozhang97/deta-swin-large-o365
# Detection Transformers with Assignment By [Jeffrey Ouyang-Zhang](https://jozhang97.github.io/), [Jang Hyun Cho](https://sites.google.com/view/janghyuncho/), [Xingyi Zhou](https://www.cs.utexas.edu/~zhouxy/), [Philipp Krähenbühl](http://www.philkr.net/) From the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137). **TL; DR.** **De**tection **T**ransformers with **A**ssignment (DETA) re-introduce IoU assignment and NMS for transformer-based detectors. DETA trains and tests comparably fast to Deformable-DETR and converges much faster (50.2 mAP in 12 epochs on COCO).
[ "none", "person", "sneakers", "chair", "other shoes", "hat", "car", "lamp", "glasses", "bottle", "desk", "cup", "street lights", "cabinet/shelf", "handbag/satchel", "bracelet", "plate", "picture/frame", "helmet", "book", "gloves", "storage box", "boat", "leather shoes", "flower", "bench", "potted plant", "bowl/basin", "flag", "pillow", "boots", "vase", "microphone", "necklace", "ring", "suv", "wine glass", "belt", "monitor/tv", "backpack", "umbrella", "traffic light", "speaker", "watch", "tie", "trash bin can", "slippers", "bicycle", "stool", "barrel/bucket", "van", "couch", "sandals", "basket", "drum", "pen/pencil", "bus", "wild bird", "high heels", "motorcycle", "guitar", "carpet", "cell phone", "bread", "camera", "canned", "truck", "traffic cone", "cymbal", "lifesaver", "towel", "stuffed toy", "candle", "sailboat", "laptop", "awning", "bed", "faucet", "tent", "horse", "mirror", "power outlet", "sink", "apple", "air conditioner", "knife", "hockey stick", "paddle", "pickup truck", "fork", "traffic sign", "balloon", "tripod", "dog", "spoon", "clock", "pot", "cow", "cake", "dinning table", "sheep", "hanger", "blackboard/whiteboard", "napkin", "other fish", "orange/tangerine", "toiletry", "keyboard", "tomato", "lantern", "machinery vehicle", "fan", "green vegetables", "banana", "baseball glove", "airplane", "mouse", "train", "pumpkin", "soccer", "skiboard", "luggage", "nightstand", "tea pot", "telephone", "trolley", "head phone", "sports car", "stop sign", "dessert", "scooter", "stroller", "crane", "remote", "refrigerator", "oven", "lemon", "duck", "baseball bat", "surveillance camera", "cat", "jug", "broccoli", "piano", "pizza", "elephant", "skateboard", "surfboard", "gun", "skating and skiing shoes", "gas stove", "donut", "bow tie", "carrot", "toilet", "kite", "strawberry", "other balls", "shovel", "pepper", "computer box", "toilet paper", "cleaning products", "chopsticks", "microwave", "pigeon", "baseball", "cutting/chopping board", "coffee table", "side table", "scissors", "marker", "pie", "ladder", "snowboard", "cookies", "radiator", "fire hydrant", "basketball", "zebra", "grape", "giraffe", "potato", "sausage", "tricycle", "violin", "egg", "fire extinguisher", "candy", "fire truck", "billiards", "converter", "bathtub", "wheelchair", "golf club", "briefcase", "cucumber", "cigar/cigarette", "paint brush", "pear", "heavy truck", "hamburger", "extractor", "extension cord", "tong", "tennis racket", "folder", "american football", "earphone", "mask", "kettle", "tennis", "ship", "swing", "coffee machine", "slide", "carriage", "onion", "green beans", "projector", "frisbee", "washing machine/drying machine", "chicken", "printer", "watermelon", "saxophone", "tissue", "toothbrush", "ice cream", "hot-air balloon", "cello", "french fries", "scale", "trophy", "cabbage", "hot dog", "blender", "peach", "rice", "wallet/purse", "volleyball", "deer", "goose", "tape", "tablet", "cosmetics", "trumpet", "pineapple", "golf ball", "ambulance", "parking meter", "mango", "key", "hurdle", "fishing rod", "medal", "flute", "brush", "penguin", "megaphone", "corn", "lettuce", "garlic", "swan", "helicopter", "green onion", "sandwich", "nuts", "speed limit sign", "induction cooker", "broom", "trombone", "plum", "rickshaw", "goldfish", "kiwi fruit", "router/modem", "poker card", "toaster", "shrimp", "sushi", "cheese", "notepaper", "cherry", "pliers", "cd", "pasta", "hammer", "cue", "avocado", "hamimelon", "flask", "mushroom", "screwdriver", "soap", "recorder", "bear", "eggplant", "board eraser", "coconut", "tape 
measure/ruler", "pig", "showerhead", "globe", "chips", "steak", "crosswalk sign", "stapler", "camel", "formula 1", "pomegranate", "dishwasher", "crab", "hoverboard", "meat ball", "rice cooker", "tuba", "calculator", "papaya", "antelope", "parrot", "seal", "butterfly", "dumbbell", "donkey", "lion", "urinal", "dolphin", "electric drill", "hair dryer", "egg tart", "jellyfish", "treadmill", "lighter", "grapefruit", "game board", "mop", "radish", "baozi", "target", "french", "spring rolls", "monkey", "rabbit", "pencil case", "yak", "red cabbage", "binoculars", "asparagus", "barbell", "scallop", "noddles", "comb", "dumpling", "oyster", "table tennis paddle", "cosmetics brush/eyeliner pencil", "chainsaw", "eraser", "lobster", "durian", "okra", "lipstick", "cosmetics mirror", "curling", "table tennis" ]
jozhang97/deta-resnet-50-24-epochs
# Detection Transformers with Assignment By [Jeffrey Ouyang-Zhang](https://jozhang97.github.io/), [Jang Hyun Cho](https://sites.google.com/view/janghyuncho/), [Xingyi Zhou](https://www.cs.utexas.edu/~zhouxy/), [Philipp Krähenbühl](http://www.philkr.net/) From the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137). **TL; DR.** **De**tection **T**ransformers with **A**ssignment (DETA) re-introduce IoU assignment and NMS for transformer-based detectors. DETA trains and tests comparably fast to Deformable-DETR and converges much faster (50.2 mAP in 12 epochs on COCO).
[ "n/a", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "n/a", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "n/a", "backpack", "umbrella", "n/a", "n/a", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "n/a", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "n/a", "dining table", "n/a", "n/a", "toilet", "n/a", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "n/a", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
belita/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.26.0 - Pytorch 1.10.0+cu111 - Datasets 2.9.0 - Tokenizers 0.13.2
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
Mustafa21/detr-resnet-50_finetuned_cppe5
Full notebook: https://github.com/MustafaAlahmid/hugging_face_models/blob/main/detr-resnet50-cppe5.ipynb <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1 - Datasets 2.9.0 - Tokenizers 0.13.2
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
clp/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [clp/detr-resnet-50_finetuned_cppe5](https://huggingface.co/clp/detr-resnet-50_finetuned_cppe5) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
fedehub/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
[ "no drone", "drone" ]
facebook/deformable-detr-detic
# Deformable DETR model trained using the Detic method on LVIS

Deformable DEtection TRansformer (DETR), trained on LVIS (including 1203 classes). It was introduced in the paper [Detecting Twenty-thousand Classes using Image-level Supervision](https://arxiv.org/abs/2201.02605) by Zhou et al. and first released in [this repository](https://github.com/facebookresearch/Detic).

This model corresponds to the "Detic_DeformDETR_R50_4x" checkpoint released in the original repository.

Disclaimer: The team releasing Detic did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.

The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/deformable_detr_architecture.png)

## Intended uses & limitations

You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=sensetime/deformable-detr) to look for all available Deformable DETR models.

### How to use

Here is how to use this model:

```python
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection
import torch
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/deformable-detr-detic")
model = DeformableDetrForObjectDetection.from_pretrained("facebook/deformable-detr-detic")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# convert outputs (bounding boxes and class logits) to COCO API
# let's only keep detections with score > 0.7
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.7)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```

## Evaluation results

This model achieves 32.5 box mAP and 26.2 mAP (rare classes) on LVIS.
### BibTeX entry and citation info

```bibtex
@misc{https://doi.org/10.48550/arxiv.2010.04159,
  doi = {10.48550/ARXIV.2010.04159},
  url = {https://arxiv.org/abs/2010.04159},
  author = {Zhu, Xizhou and Su, Weijie and Lu, Lewei and Li, Bin and Wang, Xiaogang and Dai, Jifeng},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Deformable DETR: Deformable Transformers for End-to-End Object Detection},
  publisher = {arXiv},
  year = {2020},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
[ "aerosol_can", "air_conditioner", "airplane", "alarm_clock", "alcohol", "alligator", "almond", "ambulance", "amplifier", "anklet", "antenna", "apple", "applesauce", "apricot", "apron", "aquarium", "arctic_(type_of_shoe)", "armband", "armchair", "armoire", "armor", "artichoke", "trash_can", "ashtray", "asparagus", "atomizer", "avocado", "award", "awning", "ax", "baboon", "baby_buggy", "basketball_backboard", "backpack", "handbag", "suitcase", "bagel", "bagpipe", "baguet", "bait", "ball", "ballet_skirt", "balloon", "bamboo", "banana", "band_aid", "bandage", "bandanna", "banjo", "banner", "barbell", "barge", "barrel", "barrette", "barrow", "baseball_base", "baseball", "baseball_bat", "baseball_cap", "baseball_glove", "basket", "basketball", "bass_horn", "bat_(animal)", "bath_mat", "bath_towel", "bathrobe", "bathtub", "batter_(food)", "battery", "beachball", "bead", "bean_curd", "beanbag", "beanie", "bear", "bed", "bedpan", "bedspread", "cow", "beef_(food)", "beeper", "beer_bottle", "beer_can", "beetle", "bell", "bell_pepper", "belt", "belt_buckle", "bench", "beret", "bib", "bible", "bicycle", "visor", "billboard", "binder", "binoculars", "bird", "birdfeeder", "birdbath", "birdcage", "birdhouse", "birthday_cake", "birthday_card", "pirate_flag", "black_sheep", "blackberry", "blackboard", "blanket", "blazer", "blender", "blimp", "blinker", "blouse", "blueberry", "gameboard", "boat", "bob", "bobbin", "bobby_pin", "boiled_egg", "bolo_tie", "deadbolt", "bolt", "bonnet", "book", "bookcase", "booklet", "bookmark", "boom_microphone", "boot", "bottle", "bottle_opener", "bouquet", "bow_(weapon)", "bow_(decorative_ribbons)", "bow-tie", "bowl", "pipe_bowl", "bowler_hat", "bowling_ball", "box", "boxing_glove", "suspenders", "bracelet", "brass_plaque", "brassiere", "bread-bin", "bread", "breechcloth", "bridal_gown", "briefcase", "broccoli", "broach", "broom", "brownie", "brussels_sprouts", "bubble_gum", "bucket", "horse_buggy", "bull", "bulldog", "bulldozer", "bullet_train", "bulletin_board", "bulletproof_vest", "bullhorn", "bun", "bunk_bed", "buoy", "burrito", "bus_(vehicle)", "business_card", "butter", "butterfly", "button", "cab_(taxi)", "cabana", "cabin_car", "cabinet", "locker", "cake", "calculator", "calendar", "calf", "camcorder", "camel", "camera", "camera_lens", "camper_(vehicle)", "can", "can_opener", "candle", "candle_holder", "candy_bar", "candy_cane", "walking_cane", "canister", "canoe", "cantaloup", "canteen", "cap_(headwear)", "bottle_cap", "cape", "cappuccino", "car_(automobile)", "railcar_(part_of_a_train)", "elevator_car", "car_battery", "identity_card", "card", "cardigan", "cargo_ship", "carnation", "horse_carriage", "carrot", "tote_bag", "cart", "carton", "cash_register", "casserole", "cassette", "cast", "cat", "cauliflower", "cayenne_(spice)", "cd_player", "celery", "cellular_telephone", "chain_mail", "chair", "chaise_longue", "chalice", "chandelier", "chap", "checkbook", "checkerboard", "cherry", "chessboard", "chicken_(animal)", "chickpea", "chili_(vegetable)", "chime", "chinaware", "crisp_(potato_chip)", "poker_chip", "chocolate_bar", "chocolate_cake", "chocolate_milk", "chocolate_mousse", "choker", "chopping_board", "chopstick", "christmas_tree", "slide", "cider", "cigar_box", "cigarette", "cigarette_case", "cistern", "clarinet", "clasp", "cleansing_agent", "cleat_(for_securing_rope)", "clementine", "clip", "clipboard", "clippers_(for_plants)", "cloak", "clock", "clock_tower", "clothes_hamper", "clothespin", "clutch_bag", "coaster", "coat", "coat_hanger", "coatrack", "cock", 
"cockroach", "cocoa_(beverage)", "coconut", "coffee_maker", "coffee_table", "coffeepot", "coil", "coin", "colander", "coleslaw", "coloring_material", "combination_lock", "pacifier", "comic_book", "compass", "computer_keyboard", "condiment", "cone", "control", "convertible_(automobile)", "sofa_bed", "cooker", "cookie", "cooking_utensil", "cooler_(for_food)", "cork_(bottle_plug)", "corkboard", "corkscrew", "edible_corn", "cornbread", "cornet", "cornice", "cornmeal", "corset", "costume", "cougar", "coverall", "cowbell", "cowboy_hat", "crab_(animal)", "crabmeat", "cracker", "crape", "crate", "crayon", "cream_pitcher", "crescent_roll", "crib", "crock_pot", "crossbar", "crouton", "crow", "crowbar", "crown", "crucifix", "cruise_ship", "police_cruiser", "crumb", "crutch", "cub_(animal)", "cube", "cucumber", "cufflink", "cup", "trophy_cup", "cupboard", "cupcake", "hair_curler", "curling_iron", "curtain", "cushion", "cylinder", "cymbal", "dagger", "dalmatian", "dartboard", "date_(fruit)", "deck_chair", "deer", "dental_floss", "desk", "detergent", "diaper", "diary", "die", "dinghy", "dining_table", "tux", "dish", "dish_antenna", "dishrag", "dishtowel", "dishwasher", "dishwasher_detergent", "dispenser", "diving_board", "dixie_cup", "dog", "dog_collar", "doll", "dollar", "dollhouse", "dolphin", "domestic_ass", "doorknob", "doormat", "doughnut", "dove", "dragonfly", "drawer", "underdrawers", "dress", "dress_hat", "dress_suit", "dresser", "drill", "drone", "dropper", "drum_(musical_instrument)", "drumstick", "duck", "duckling", "duct_tape", "duffel_bag", "dumbbell", "dumpster", "dustpan", "eagle", "earphone", "earplug", "earring", "easel", "eclair", "eel", "egg", "egg_roll", "egg_yolk", "eggbeater", "eggplant", "electric_chair", "refrigerator", "elephant", "elk", "envelope", "eraser", "escargot", "eyepatch", "falcon", "fan", "faucet", "fedora", "ferret", "ferris_wheel", "ferry", "fig_(fruit)", "fighter_jet", "figurine", "file_cabinet", "file_(tool)", "fire_alarm", "fire_engine", "fire_extinguisher", "fire_hose", "fireplace", "fireplug", "first-aid_kit", "fish", "fish_(food)", "fishbowl", "fishing_rod", "flag", "flagpole", "flamingo", "flannel", "flap", "flash", "flashlight", "fleece", "flip-flop_(sandal)", "flipper_(footwear)", "flower_arrangement", "flute_glass", "foal", "folding_chair", "food_processor", "football_(american)", "football_helmet", "footstool", "fork", "forklift", "freight_car", "french_toast", "freshener", "frisbee", "frog", "fruit_juice", "frying_pan", "fudge", "funnel", "futon", "gag", "garbage", "garbage_truck", "garden_hose", "gargle", "gargoyle", "garlic", "gasmask", "gazelle", "gelatin", "gemstone", "generator", "giant_panda", "gift_wrap", "ginger", "giraffe", "cincture", "glass_(drink_container)", "globe", "glove", "goat", "goggles", "goldfish", "golf_club", "golfcart", "gondola_(boat)", "goose", "gorilla", "gourd", "grape", "grater", "gravestone", "gravy_boat", "green_bean", "green_onion", "griddle", "grill", "grits", "grizzly", "grocery_bag", "guitar", "gull", "gun", "hairbrush", "hairnet", "hairpin", "halter_top", "ham", "hamburger", "hammer", "hammock", "hamper", "hamster", "hair_dryer", "hand_glass", "hand_towel", "handcart", "handcuff", "handkerchief", "handle", "handsaw", "hardback_book", "harmonium", "hat", "hatbox", "veil", "headband", "headboard", "headlight", "headscarf", "headset", "headstall_(for_horses)", "heart", "heater", "helicopter", "helmet", "heron", "highchair", "hinge", "hippopotamus", "hockey_stick", "hog", "home_plate_(baseball)", "honey", "fume_hood", 
"hook", "hookah", "hornet", "horse", "hose", "hot-air_balloon", "hotplate", "hot_sauce", "hourglass", "houseboat", "hummingbird", "hummus", "polar_bear", "icecream", "popsicle", "ice_maker", "ice_pack", "ice_skate", "igniter", "inhaler", "ipod", "iron_(for_clothing)", "ironing_board", "jacket", "jam", "jar", "jean", "jeep", "jelly_bean", "jersey", "jet_plane", "jewel", "jewelry", "joystick", "jumpsuit", "kayak", "keg", "kennel", "kettle", "key", "keycard", "kilt", "kimono", "kitchen_sink", "kitchen_table", "kite", "kitten", "kiwi_fruit", "knee_pad", "knife", "knitting_needle", "knob", "knocker_(on_a_door)", "koala", "lab_coat", "ladder", "ladle", "ladybug", "lamb_(animal)", "lamb-chop", "lamp", "lamppost", "lampshade", "lantern", "lanyard", "laptop_computer", "lasagna", "latch", "lawn_mower", "leather", "legging_(clothing)", "lego", "legume", "lemon", "lemonade", "lettuce", "license_plate", "life_buoy", "life_jacket", "lightbulb", "lightning_rod", "lime", "limousine", "lion", "lip_balm", "liquor", "lizard", "log", "lollipop", "speaker_(stero_equipment)", "loveseat", "machine_gun", "magazine", "magnet", "mail_slot", "mailbox_(at_home)", "mallard", "mallet", "mammoth", "manatee", "mandarin_orange", "manger", "manhole", "map", "marker", "martini", "mascot", "mashed_potato", "masher", "mask", "mast", "mat_(gym_equipment)", "matchbox", "mattress", "measuring_cup", "measuring_stick", "meatball", "medicine", "melon", "microphone", "microscope", "microwave_oven", "milestone", "milk", "milk_can", "milkshake", "minivan", "mint_candy", "mirror", "mitten", "mixer_(kitchen_tool)", "money", "monitor_(computer_equipment) computer_monitor", "monkey", "motor", "motor_scooter", "motor_vehicle", "motorcycle", "mound_(baseball)", "mouse_(computer_equipment)", "mousepad", "muffin", "mug", "mushroom", "music_stool", "musical_instrument", "nailfile", "napkin", "neckerchief", "necklace", "necktie", "needle", "nest", "newspaper", "newsstand", "nightshirt", "nosebag_(for_animals)", "noseband_(for_animals)", "notebook", "notepad", "nut", "nutcracker", "oar", "octopus_(food)", "octopus_(animal)", "oil_lamp", "olive_oil", "omelet", "onion", "orange_(fruit)", "orange_juice", "ostrich", "ottoman", "oven", "overalls_(clothing)", "owl", "packet", "inkpad", "pad", "paddle", "padlock", "paintbrush", "painting", "pajamas", "palette", "pan_(for_cooking)", "pan_(metal_container)", "pancake", "pantyhose", "papaya", "paper_plate", "paper_towel", "paperback_book", "paperweight", "parachute", "parakeet", "parasail_(sports)", "parasol", "parchment", "parka", "parking_meter", "parrot", "passenger_car_(part_of_a_train)", "passenger_ship", "passport", "pastry", "patty_(food)", "pea_(food)", "peach", "peanut_butter", "pear", "peeler_(tool_for_fruit_and_vegetables)", "wooden_leg", "pegboard", "pelican", "pen", "pencil", "pencil_box", "pencil_sharpener", "pendulum", "penguin", "pennant", "penny_(coin)", "pepper", "pepper_mill", "perfume", "persimmon", "person", "pet", "pew_(church_bench)", "phonebook", "phonograph_record", "piano", "pickle", "pickup_truck", "pie", "pigeon", "piggy_bank", "pillow", "pin_(non_jewelry)", "pineapple", "pinecone", "ping-pong_ball", "pinwheel", "tobacco_pipe", "pipe", "pistol", "pita_(bread)", "pitcher_(vessel_for_liquid)", "pitchfork", "pizza", "place_mat", "plate", "platter", "playpen", "pliers", "plow_(farm_equipment)", "plume", "pocket_watch", "pocketknife", "poker_(fire_stirring_tool)", "pole", "polo_shirt", "poncho", "pony", "pool_table", "pop_(soda)", "postbox_(public)", "postcard", "poster", "pot", 
"flowerpot", "potato", "potholder", "pottery", "pouch", "power_shovel", "prawn", "pretzel", "printer", "projectile_(weapon)", "projector", "propeller", "prune", "pudding", "puffer_(fish)", "puffin", "pug-dog", "pumpkin", "puncher", "puppet", "puppy", "quesadilla", "quiche", "quilt", "rabbit", "race_car", "racket", "radar", "radiator", "radio_receiver", "radish", "raft", "rag_doll", "raincoat", "ram_(animal)", "raspberry", "rat", "razorblade", "reamer_(juicer)", "rearview_mirror", "receipt", "recliner", "record_player", "reflector", "remote_control", "rhinoceros", "rib_(food)", "rifle", "ring", "river_boat", "road_map", "robe", "rocking_chair", "rodent", "roller_skate", "rollerblade", "rolling_pin", "root_beer", "router_(computer_equipment)", "rubber_band", "runner_(carpet)", "plastic_bag", "saddle_(on_an_animal)", "saddle_blanket", "saddlebag", "safety_pin", "sail", "salad", "salad_plate", "salami", "salmon_(fish)", "salmon_(food)", "salsa", "saltshaker", "sandal_(type_of_shoe)", "sandwich", "satchel", "saucepan", "saucer", "sausage", "sawhorse", "saxophone", "scale_(measuring_instrument)", "scarecrow", "scarf", "school_bus", "scissors", "scoreboard", "scraper", "screwdriver", "scrubbing_brush", "sculpture", "seabird", "seahorse", "seaplane", "seashell", "sewing_machine", "shaker", "shampoo", "shark", "sharpener", "sharpie", "shaver_(electric)", "shaving_cream", "shawl", "shears", "sheep", "shepherd_dog", "sherbert", "shield", "shirt", "shoe", "shopping_bag", "shopping_cart", "short_pants", "shot_glass", "shoulder_bag", "shovel", "shower_head", "shower_cap", "shower_curtain", "shredder_(for_paper)", "signboard", "silo", "sink", "skateboard", "skewer", "ski", "ski_boot", "ski_parka", "ski_pole", "skirt", "skullcap", "sled", "sleeping_bag", "sling_(bandage)", "slipper_(footwear)", "smoothie", "snake", "snowboard", "snowman", "snowmobile", "soap", "soccer_ball", "sock", "sofa", "softball", "solar_array", "sombrero", "soup", "soup_bowl", "soupspoon", "sour_cream", "soya_milk", "space_shuttle", "sparkler_(fireworks)", "spatula", "spear", "spectacles", "spice_rack", "spider", "crawfish", "sponge", "spoon", "sportswear", "spotlight", "squid_(food)", "squirrel", "stagecoach", "stapler_(stapling_machine)", "starfish", "statue_(sculpture)", "steak_(food)", "steak_knife", "steering_wheel", "stepladder", "step_stool", "stereo_(sound_system)", "stew", "stirrer", "stirrup", "stool", "stop_sign", "brake_light", "stove", "strainer", "strap", "straw_(for_drinking)", "strawberry", "street_sign", "streetlight", "string_cheese", "stylus", "subwoofer", "sugar_bowl", "sugarcane_(plant)", "suit_(clothing)", "sunflower", "sunglasses", "sunhat", "surfboard", "sushi", "mop", "sweat_pants", "sweatband", "sweater", "sweatshirt", "sweet_potato", "swimsuit", "sword", "syringe", "tabasco_sauce", "table-tennis_table", "table", "table_lamp", "tablecloth", "tachometer", "taco", "tag", "taillight", "tambourine", "army_tank", "tank_(storage_vessel)", "tank_top_(clothing)", "tape_(sticky_cloth_or_paper)", "tape_measure", "tapestry", "tarp", "tartan", "tassel", "tea_bag", "teacup", "teakettle", "teapot", "teddy_bear", "telephone", "telephone_booth", "telephone_pole", "telephoto_lens", "television_camera", "television_set", "tennis_ball", "tennis_racket", "tequila", "thermometer", "thermos_bottle", "thermostat", "thimble", "thread", "thumbtack", "tiara", "tiger", "tights_(clothing)", "timer", "tinfoil", "tinsel", "tissue_paper", "toast_(food)", "toaster", "toaster_oven", "toilet", "toilet_tissue", "tomato", "tongs", "toolbox", 
"toothbrush", "toothpaste", "toothpick", "cover", "tortilla", "tow_truck", "towel", "towel_rack", "toy", "tractor_(farm_equipment)", "traffic_light", "dirt_bike", "trailer_truck", "train_(railroad_vehicle)", "trampoline", "tray", "trench_coat", "triangle_(musical_instrument)", "tricycle", "tripod", "trousers", "truck", "truffle_(chocolate)", "trunk", "vat", "turban", "turkey_(food)", "turnip", "turtle", "turtleneck_(clothing)", "typewriter", "umbrella", "underwear", "unicycle", "urinal", "urn", "vacuum_cleaner", "vase", "vending_machine", "vent", "vest", "videotape", "vinegar", "violin", "vodka", "volleyball", "vulture", "waffle", "waffle_iron", "wagon", "wagon_wheel", "walking_stick", "wall_clock", "wall_socket", "wallet", "walrus", "wardrobe", "washbasin", "automatic_washer", "watch", "water_bottle", "water_cooler", "water_faucet", "water_heater", "water_jug", "water_gun", "water_scooter", "water_ski", "water_tower", "watering_can", "watermelon", "weathervane", "webcam", "wedding_cake", "wedding_ring", "wet_suit", "wheel", "wheelchair", "whipped_cream", "whistle", "wig", "wind_chime", "windmill", "window_box_(for_plants)", "windshield_wiper", "windsock", "wine_bottle", "wine_bucket", "wineglass", "blinder_(for_horses)", "wok", "wolf", "wooden_spoon", "wreath", "wrench", "wristband", "wristlet", "yacht", "yogurt", "yoke_(animal_equipment)", "zebra", "zucchini" ]
facebook/deformable-detr-box-supervised
# Deformable DETR model trained on LVIS

Deformable DEtection TRansformer (DETR), trained on LVIS (including 1203 classes). It was introduced in the paper [Detecting Twenty-thousand Classes using Image-level Supervision](https://arxiv.org/abs/2201.02605) by Zhou et al. and first released in [this repository](https://github.com/facebookresearch/Detic).

This model corresponds to the "Box-Supervised_DeformDETR_R50_4x" checkpoint released in the original repository.

Disclaimer: The team releasing Detic did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.

The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/deformable_detr_architecture.png)

## Intended uses & limitations

You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=sensetime/deformable-detr) to look for all available Deformable DETR models.

### How to use

Here is how to use this model:

```python
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection
import torch
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/deformable-detr-box-supervised")
model = DeformableDetrForObjectDetection.from_pretrained("facebook/deformable-detr-box-supervised")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# convert outputs (bounding boxes and class logits) to COCO API
# let's only keep detections with score > 0.7
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.7)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
```

## Evaluation results

This model achieves 31.7 box mAP and 21.4 mAP (rare classes) on LVIS.
### BibTeX entry and citation info

```bibtex
@misc{https://doi.org/10.48550/arxiv.2010.04159,
  doi = {10.48550/ARXIV.2010.04159},
  url = {https://arxiv.org/abs/2010.04159},
  author = {Zhu, Xizhou and Su, Weijie and Lu, Lewei and Li, Bin and Wang, Xiaogang and Dai, Jifeng},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Deformable DETR: Deformable Transformers for End-to-End Object Detection},
  publisher = {arXiv},
  year = {2020},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
[ "aerosol_can", "air_conditioner", "airplane", "alarm_clock", "alcohol", "alligator", "almond", "ambulance", "amplifier", "anklet", "antenna", "apple", "applesauce", "apricot", "apron", "aquarium", "arctic_(type_of_shoe)", "armband", "armchair", "armoire", "armor", "artichoke", "trash_can", "ashtray", "asparagus", "atomizer", "avocado", "award", "awning", "ax", "baboon", "baby_buggy", "basketball_backboard", "backpack", "handbag", "suitcase", "bagel", "bagpipe", "baguet", "bait", "ball", "ballet_skirt", "balloon", "bamboo", "banana", "band_aid", "bandage", "bandanna", "banjo", "banner", "barbell", "barge", "barrel", "barrette", "barrow", "baseball_base", "baseball", "baseball_bat", "baseball_cap", "baseball_glove", "basket", "basketball", "bass_horn", "bat_(animal)", "bath_mat", "bath_towel", "bathrobe", "bathtub", "batter_(food)", "battery", "beachball", "bead", "bean_curd", "beanbag", "beanie", "bear", "bed", "bedpan", "bedspread", "cow", "beef_(food)", "beeper", "beer_bottle", "beer_can", "beetle", "bell", "bell_pepper", "belt", "belt_buckle", "bench", "beret", "bib", "bible", "bicycle", "visor", "billboard", "binder", "binoculars", "bird", "birdfeeder", "birdbath", "birdcage", "birdhouse", "birthday_cake", "birthday_card", "pirate_flag", "black_sheep", "blackberry", "blackboard", "blanket", "blazer", "blender", "blimp", "blinker", "blouse", "blueberry", "gameboard", "boat", "bob", "bobbin", "bobby_pin", "boiled_egg", "bolo_tie", "deadbolt", "bolt", "bonnet", "book", "bookcase", "booklet", "bookmark", "boom_microphone", "boot", "bottle", "bottle_opener", "bouquet", "bow_(weapon)", "bow_(decorative_ribbons)", "bow-tie", "bowl", "pipe_bowl", "bowler_hat", "bowling_ball", "box", "boxing_glove", "suspenders", "bracelet", "brass_plaque", "brassiere", "bread-bin", "bread", "breechcloth", "bridal_gown", "briefcase", "broccoli", "broach", "broom", "brownie", "brussels_sprouts", "bubble_gum", "bucket", "horse_buggy", "bull", "bulldog", "bulldozer", "bullet_train", "bulletin_board", "bulletproof_vest", "bullhorn", "bun", "bunk_bed", "buoy", "burrito", "bus_(vehicle)", "business_card", "butter", "butterfly", "button", "cab_(taxi)", "cabana", "cabin_car", "cabinet", "locker", "cake", "calculator", "calendar", "calf", "camcorder", "camel", "camera", "camera_lens", "camper_(vehicle)", "can", "can_opener", "candle", "candle_holder", "candy_bar", "candy_cane", "walking_cane", "canister", "canoe", "cantaloup", "canteen", "cap_(headwear)", "bottle_cap", "cape", "cappuccino", "car_(automobile)", "railcar_(part_of_a_train)", "elevator_car", "car_battery", "identity_card", "card", "cardigan", "cargo_ship", "carnation", "horse_carriage", "carrot", "tote_bag", "cart", "carton", "cash_register", "casserole", "cassette", "cast", "cat", "cauliflower", "cayenne_(spice)", "cd_player", "celery", "cellular_telephone", "chain_mail", "chair", "chaise_longue", "chalice", "chandelier", "chap", "checkbook", "checkerboard", "cherry", "chessboard", "chicken_(animal)", "chickpea", "chili_(vegetable)", "chime", "chinaware", "crisp_(potato_chip)", "poker_chip", "chocolate_bar", "chocolate_cake", "chocolate_milk", "chocolate_mousse", "choker", "chopping_board", "chopstick", "christmas_tree", "slide", "cider", "cigar_box", "cigarette", "cigarette_case", "cistern", "clarinet", "clasp", "cleansing_agent", "cleat_(for_securing_rope)", "clementine", "clip", "clipboard", "clippers_(for_plants)", "cloak", "clock", "clock_tower", "clothes_hamper", "clothespin", "clutch_bag", "coaster", "coat", "coat_hanger", "coatrack", "cock", 
"cockroach", "cocoa_(beverage)", "coconut", "coffee_maker", "coffee_table", "coffeepot", "coil", "coin", "colander", "coleslaw", "coloring_material", "combination_lock", "pacifier", "comic_book", "compass", "computer_keyboard", "condiment", "cone", "control", "convertible_(automobile)", "sofa_bed", "cooker", "cookie", "cooking_utensil", "cooler_(for_food)", "cork_(bottle_plug)", "corkboard", "corkscrew", "edible_corn", "cornbread", "cornet", "cornice", "cornmeal", "corset", "costume", "cougar", "coverall", "cowbell", "cowboy_hat", "crab_(animal)", "crabmeat", "cracker", "crape", "crate", "crayon", "cream_pitcher", "crescent_roll", "crib", "crock_pot", "crossbar", "crouton", "crow", "crowbar", "crown", "crucifix", "cruise_ship", "police_cruiser", "crumb", "crutch", "cub_(animal)", "cube", "cucumber", "cufflink", "cup", "trophy_cup", "cupboard", "cupcake", "hair_curler", "curling_iron", "curtain", "cushion", "cylinder", "cymbal", "dagger", "dalmatian", "dartboard", "date_(fruit)", "deck_chair", "deer", "dental_floss", "desk", "detergent", "diaper", "diary", "die", "dinghy", "dining_table", "tux", "dish", "dish_antenna", "dishrag", "dishtowel", "dishwasher", "dishwasher_detergent", "dispenser", "diving_board", "dixie_cup", "dog", "dog_collar", "doll", "dollar", "dollhouse", "dolphin", "domestic_ass", "doorknob", "doormat", "doughnut", "dove", "dragonfly", "drawer", "underdrawers", "dress", "dress_hat", "dress_suit", "dresser", "drill", "drone", "dropper", "drum_(musical_instrument)", "drumstick", "duck", "duckling", "duct_tape", "duffel_bag", "dumbbell", "dumpster", "dustpan", "eagle", "earphone", "earplug", "earring", "easel", "eclair", "eel", "egg", "egg_roll", "egg_yolk", "eggbeater", "eggplant", "electric_chair", "refrigerator", "elephant", "elk", "envelope", "eraser", "escargot", "eyepatch", "falcon", "fan", "faucet", "fedora", "ferret", "ferris_wheel", "ferry", "fig_(fruit)", "fighter_jet", "figurine", "file_cabinet", "file_(tool)", "fire_alarm", "fire_engine", "fire_extinguisher", "fire_hose", "fireplace", "fireplug", "first-aid_kit", "fish", "fish_(food)", "fishbowl", "fishing_rod", "flag", "flagpole", "flamingo", "flannel", "flap", "flash", "flashlight", "fleece", "flip-flop_(sandal)", "flipper_(footwear)", "flower_arrangement", "flute_glass", "foal", "folding_chair", "food_processor", "football_(american)", "football_helmet", "footstool", "fork", "forklift", "freight_car", "french_toast", "freshener", "frisbee", "frog", "fruit_juice", "frying_pan", "fudge", "funnel", "futon", "gag", "garbage", "garbage_truck", "garden_hose", "gargle", "gargoyle", "garlic", "gasmask", "gazelle", "gelatin", "gemstone", "generator", "giant_panda", "gift_wrap", "ginger", "giraffe", "cincture", "glass_(drink_container)", "globe", "glove", "goat", "goggles", "goldfish", "golf_club", "golfcart", "gondola_(boat)", "goose", "gorilla", "gourd", "grape", "grater", "gravestone", "gravy_boat", "green_bean", "green_onion", "griddle", "grill", "grits", "grizzly", "grocery_bag", "guitar", "gull", "gun", "hairbrush", "hairnet", "hairpin", "halter_top", "ham", "hamburger", "hammer", "hammock", "hamper", "hamster", "hair_dryer", "hand_glass", "hand_towel", "handcart", "handcuff", "handkerchief", "handle", "handsaw", "hardback_book", "harmonium", "hat", "hatbox", "veil", "headband", "headboard", "headlight", "headscarf", "headset", "headstall_(for_horses)", "heart", "heater", "helicopter", "helmet", "heron", "highchair", "hinge", "hippopotamus", "hockey_stick", "hog", "home_plate_(baseball)", "honey", "fume_hood", 
"hook", "hookah", "hornet", "horse", "hose", "hot-air_balloon", "hotplate", "hot_sauce", "hourglass", "houseboat", "hummingbird", "hummus", "polar_bear", "icecream", "popsicle", "ice_maker", "ice_pack", "ice_skate", "igniter", "inhaler", "ipod", "iron_(for_clothing)", "ironing_board", "jacket", "jam", "jar", "jean", "jeep", "jelly_bean", "jersey", "jet_plane", "jewel", "jewelry", "joystick", "jumpsuit", "kayak", "keg", "kennel", "kettle", "key", "keycard", "kilt", "kimono", "kitchen_sink", "kitchen_table", "kite", "kitten", "kiwi_fruit", "knee_pad", "knife", "knitting_needle", "knob", "knocker_(on_a_door)", "koala", "lab_coat", "ladder", "ladle", "ladybug", "lamb_(animal)", "lamb-chop", "lamp", "lamppost", "lampshade", "lantern", "lanyard", "laptop_computer", "lasagna", "latch", "lawn_mower", "leather", "legging_(clothing)", "lego", "legume", "lemon", "lemonade", "lettuce", "license_plate", "life_buoy", "life_jacket", "lightbulb", "lightning_rod", "lime", "limousine", "lion", "lip_balm", "liquor", "lizard", "log", "lollipop", "speaker_(stero_equipment)", "loveseat", "machine_gun", "magazine", "magnet", "mail_slot", "mailbox_(at_home)", "mallard", "mallet", "mammoth", "manatee", "mandarin_orange", "manger", "manhole", "map", "marker", "martini", "mascot", "mashed_potato", "masher", "mask", "mast", "mat_(gym_equipment)", "matchbox", "mattress", "measuring_cup", "measuring_stick", "meatball", "medicine", "melon", "microphone", "microscope", "microwave_oven", "milestone", "milk", "milk_can", "milkshake", "minivan", "mint_candy", "mirror", "mitten", "mixer_(kitchen_tool)", "money", "monitor_(computer_equipment) computer_monitor", "monkey", "motor", "motor_scooter", "motor_vehicle", "motorcycle", "mound_(baseball)", "mouse_(computer_equipment)", "mousepad", "muffin", "mug", "mushroom", "music_stool", "musical_instrument", "nailfile", "napkin", "neckerchief", "necklace", "necktie", "needle", "nest", "newspaper", "newsstand", "nightshirt", "nosebag_(for_animals)", "noseband_(for_animals)", "notebook", "notepad", "nut", "nutcracker", "oar", "octopus_(food)", "octopus_(animal)", "oil_lamp", "olive_oil", "omelet", "onion", "orange_(fruit)", "orange_juice", "ostrich", "ottoman", "oven", "overalls_(clothing)", "owl", "packet", "inkpad", "pad", "paddle", "padlock", "paintbrush", "painting", "pajamas", "palette", "pan_(for_cooking)", "pan_(metal_container)", "pancake", "pantyhose", "papaya", "paper_plate", "paper_towel", "paperback_book", "paperweight", "parachute", "parakeet", "parasail_(sports)", "parasol", "parchment", "parka", "parking_meter", "parrot", "passenger_car_(part_of_a_train)", "passenger_ship", "passport", "pastry", "patty_(food)", "pea_(food)", "peach", "peanut_butter", "pear", "peeler_(tool_for_fruit_and_vegetables)", "wooden_leg", "pegboard", "pelican", "pen", "pencil", "pencil_box", "pencil_sharpener", "pendulum", "penguin", "pennant", "penny_(coin)", "pepper", "pepper_mill", "perfume", "persimmon", "person", "pet", "pew_(church_bench)", "phonebook", "phonograph_record", "piano", "pickle", "pickup_truck", "pie", "pigeon", "piggy_bank", "pillow", "pin_(non_jewelry)", "pineapple", "pinecone", "ping-pong_ball", "pinwheel", "tobacco_pipe", "pipe", "pistol", "pita_(bread)", "pitcher_(vessel_for_liquid)", "pitchfork", "pizza", "place_mat", "plate", "platter", "playpen", "pliers", "plow_(farm_equipment)", "plume", "pocket_watch", "pocketknife", "poker_(fire_stirring_tool)", "pole", "polo_shirt", "poncho", "pony", "pool_table", "pop_(soda)", "postbox_(public)", "postcard", "poster", "pot", 
"flowerpot", "potato", "potholder", "pottery", "pouch", "power_shovel", "prawn", "pretzel", "printer", "projectile_(weapon)", "projector", "propeller", "prune", "pudding", "puffer_(fish)", "puffin", "pug-dog", "pumpkin", "puncher", "puppet", "puppy", "quesadilla", "quiche", "quilt", "rabbit", "race_car", "racket", "radar", "radiator", "radio_receiver", "radish", "raft", "rag_doll", "raincoat", "ram_(animal)", "raspberry", "rat", "razorblade", "reamer_(juicer)", "rearview_mirror", "receipt", "recliner", "record_player", "reflector", "remote_control", "rhinoceros", "rib_(food)", "rifle", "ring", "river_boat", "road_map", "robe", "rocking_chair", "rodent", "roller_skate", "rollerblade", "rolling_pin", "root_beer", "router_(computer_equipment)", "rubber_band", "runner_(carpet)", "plastic_bag", "saddle_(on_an_animal)", "saddle_blanket", "saddlebag", "safety_pin", "sail", "salad", "salad_plate", "salami", "salmon_(fish)", "salmon_(food)", "salsa", "saltshaker", "sandal_(type_of_shoe)", "sandwich", "satchel", "saucepan", "saucer", "sausage", "sawhorse", "saxophone", "scale_(measuring_instrument)", "scarecrow", "scarf", "school_bus", "scissors", "scoreboard", "scraper", "screwdriver", "scrubbing_brush", "sculpture", "seabird", "seahorse", "seaplane", "seashell", "sewing_machine", "shaker", "shampoo", "shark", "sharpener", "sharpie", "shaver_(electric)", "shaving_cream", "shawl", "shears", "sheep", "shepherd_dog", "sherbert", "shield", "shirt", "shoe", "shopping_bag", "shopping_cart", "short_pants", "shot_glass", "shoulder_bag", "shovel", "shower_head", "shower_cap", "shower_curtain", "shredder_(for_paper)", "signboard", "silo", "sink", "skateboard", "skewer", "ski", "ski_boot", "ski_parka", "ski_pole", "skirt", "skullcap", "sled", "sleeping_bag", "sling_(bandage)", "slipper_(footwear)", "smoothie", "snake", "snowboard", "snowman", "snowmobile", "soap", "soccer_ball", "sock", "sofa", "softball", "solar_array", "sombrero", "soup", "soup_bowl", "soupspoon", "sour_cream", "soya_milk", "space_shuttle", "sparkler_(fireworks)", "spatula", "spear", "spectacles", "spice_rack", "spider", "crawfish", "sponge", "spoon", "sportswear", "spotlight", "squid_(food)", "squirrel", "stagecoach", "stapler_(stapling_machine)", "starfish", "statue_(sculpture)", "steak_(food)", "steak_knife", "steering_wheel", "stepladder", "step_stool", "stereo_(sound_system)", "stew", "stirrer", "stirrup", "stool", "stop_sign", "brake_light", "stove", "strainer", "strap", "straw_(for_drinking)", "strawberry", "street_sign", "streetlight", "string_cheese", "stylus", "subwoofer", "sugar_bowl", "sugarcane_(plant)", "suit_(clothing)", "sunflower", "sunglasses", "sunhat", "surfboard", "sushi", "mop", "sweat_pants", "sweatband", "sweater", "sweatshirt", "sweet_potato", "swimsuit", "sword", "syringe", "tabasco_sauce", "table-tennis_table", "table", "table_lamp", "tablecloth", "tachometer", "taco", "tag", "taillight", "tambourine", "army_tank", "tank_(storage_vessel)", "tank_top_(clothing)", "tape_(sticky_cloth_or_paper)", "tape_measure", "tapestry", "tarp", "tartan", "tassel", "tea_bag", "teacup", "teakettle", "teapot", "teddy_bear", "telephone", "telephone_booth", "telephone_pole", "telephoto_lens", "television_camera", "television_set", "tennis_ball", "tennis_racket", "tequila", "thermometer", "thermos_bottle", "thermostat", "thimble", "thread", "thumbtack", "tiara", "tiger", "tights_(clothing)", "timer", "tinfoil", "tinsel", "tissue_paper", "toast_(food)", "toaster", "toaster_oven", "toilet", "toilet_tissue", "tomato", "tongs", "toolbox", 
"toothbrush", "toothpaste", "toothpick", "cover", "tortilla", "tow_truck", "towel", "towel_rack", "toy", "tractor_(farm_equipment)", "traffic_light", "dirt_bike", "trailer_truck", "train_(railroad_vehicle)", "trampoline", "tray", "trench_coat", "triangle_(musical_instrument)", "tricycle", "tripod", "trousers", "truck", "truffle_(chocolate)", "trunk", "vat", "turban", "turkey_(food)", "turnip", "turtle", "turtleneck_(clothing)", "typewriter", "umbrella", "underwear", "unicycle", "urinal", "urn", "vacuum_cleaner", "vase", "vending_machine", "vent", "vest", "videotape", "vinegar", "violin", "vodka", "volleyball", "vulture", "waffle", "waffle_iron", "wagon", "wagon_wheel", "walking_stick", "wall_clock", "wall_socket", "wallet", "walrus", "wardrobe", "washbasin", "automatic_washer", "watch", "water_bottle", "water_cooler", "water_faucet", "water_heater", "water_jug", "water_gun", "water_scooter", "water_ski", "water_tower", "watering_can", "watermelon", "weathervane", "webcam", "wedding_cake", "wedding_ring", "wet_suit", "wheel", "wheelchair", "whipped_cream", "whistle", "wig", "wind_chime", "windmill", "window_box_(for_plants)", "windshield_wiper", "windsock", "wine_bottle", "wine_bucket", "wineglass", "blinder_(for_horses)", "wok", "wolf", "wooden_spoon", "wreath", "wrench", "wristband", "wristlet", "yacht", "yogurt", "yoke_(animal_equipment)", "zebra", "zucchini" ]
davanstrien/detr-resnet-50_find_tuned_beyond_words
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# detr-resnet-50_find_tuned_beyond_words

This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the beyond_words_23 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9310

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7439 | 0.56 | 100 | 2.2690 |
| 1.7644 | 1.12 | 200 | 1.5053 |
| 1.557 | 1.69 | 300 | 1.3136 |
| 1.3207 | 2.25 | 400 | 1.2063 |
| 1.3705 | 2.81 | 500 | 1.2007 |
| 1.1924 | 3.37 | 600 | 1.2704 |
| 1.2604 | 3.93 | 700 | 1.1784 |
| 1.1982 | 4.49 | 800 | 1.1167 |
| 1.1912 | 5.06 | 900 | 1.1562 |
| 1.1206 | 5.62 | 1000 | 1.2124 |
| 1.1344 | 6.18 | 1100 | 1.0622 |
| 1.1388 | 6.74 | 1200 | 1.0425 |
| 1.0124 | 7.3 | 1300 | 0.9908 |
| 1.0776 | 7.87 | 1400 | 1.1182 |
| 0.9614 | 8.43 | 1500 | 0.9967 |
| 1.0136 | 8.99 | 1600 | 0.8933 |
| 1.0206 | 9.55 | 1700 | 0.9354 |
| 0.9529 | 10.11 | 1800 | 0.9751 |
| 1.0126 | 10.67 | 1900 | 0.9310 |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
[ "photograph", "illustration", "map", "comics/cartoon", "editorial cartoon", "headline", "advertisement" ]
davanstrien/conditional-detr-resnet-50_fine_tuned_beyond_words
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# conditional-detr-resnet-50_fine_tuned_beyond_words

This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on the loc_beyond_words dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5892

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.674 | 0.28 | 100 | 1.7571 |
| 1.4721 | 0.56 | 200 | 1.2737 |
| 1.2557 | 0.84 | 300 | 1.1037 |
| 1.0781 | 1.12 | 400 | 1.0184 |
| 1.0353 | 1.4 | 500 | 0.9988 |
| 1.0324 | 1.69 | 600 | 0.9951 |
| 0.9131 | 1.97 | 700 | 0.9224 |
| 0.8724 | 2.25 | 800 | 0.9692 |
| 0.8129 | 2.53 | 900 | 0.8670 |
| 0.9 | 2.81 | 1000 | 0.8326 |
| 0.7993 | 3.09 | 1100 | 0.7875 |
| 0.7907 | 3.37 | 1200 | 0.7517 |
| 0.8424 | 3.65 | 1300 | 0.9088 |
| 0.7808 | 3.93 | 1400 | 0.8506 |
| 0.7469 | 4.21 | 1500 | 0.7928 |
| 0.7582 | 4.49 | 1600 | 0.7228 |
| 0.7546 | 4.78 | 1700 | 0.7588 |
| 0.7842 | 5.06 | 1800 | 0.7726 |
| 0.775 | 5.34 | 1900 | 0.7676 |
| 0.7263 | 5.62 | 2000 | 0.7164 |
| 0.7209 | 5.9 | 2100 | 0.7061 |
| 0.7259 | 6.18 | 2200 | 0.7579 |
| 0.7701 | 6.46 | 2300 | 0.8184 |
| 0.7391 | 6.74 | 2400 | 0.6684 |
| 0.6834 | 7.02 | 2500 | 0.7042 |
| 0.7098 | 7.3 | 2600 | 0.7166 |
| 0.7498 | 7.58 | 2700 | 0.6752 |
| 0.7056 | 7.87 | 2800 | 0.7064 |
| 0.7004 | 8.15 | 2900 | 0.7090 |
| 0.6964 | 8.43 | 3000 | 0.7318 |
| 0.682 | 8.71 | 3100 | 0.7216 |
| 0.7309 | 8.99 | 3200 | 0.6545 |
| 0.6576 | 9.27 | 3300 | 0.6478 |
| 0.7014 | 9.55 | 3400 | 0.6814 |
| 0.673 | 9.83 | 3500 | 0.6783 |
| 0.6455 | 10.11 | 3600 | 0.7248 |
| 0.7041 | 10.39 | 3700 | 0.7729 |
| 0.6664 | 10.67 | 3800 | 0.6746 |
| 0.6161 | 10.96 | 3900 | 0.6414 |
| 0.6975 | 11.24 | 4000 | 0.6637 |
| 0.6751 | 11.52 | 4100 | 0.6570 |
| 0.6092 | 11.8 | 4200 | 0.6691 |
| 0.6593 | 12.08 | 4300 | 0.6276 |
| 0.6449 | 12.36 | 4400 | 0.6388 |
| 0.6136 | 12.64 | 4500 | 0.6711 |
| 0.6521 | 12.92 | 4600 | 0.6768 |
| 0.6162 | 13.2 | 4700 | 0.6427 |
| 0.7083 | 13.48 | 4800 | 0.6492 |
| 0.6407 | 13.76 | 4900 | 0.6213 |
| 0.6371 | 14.04 | 5000 | 0.6674 |
| 0.626 | 14.33 | 5100 | 0.6185 |
| 0.6442 | 14.61 | 5200 | 0.7180 |
| 0.5981 | 14.89 | 5300 | 0.6441 |
| 0.629 | 15.17 | 5400 | 0.6262 |
| 0.625 | 15.45 | 5500 | 0.6397 |
| 0.6123 | 15.73 | 5600 | 0.6440 |
| 0.6084 | 16.01 | 5700 | 0.6493 |
| 0.6021 | 16.29 | 5800 | 0.6263 |
| 0.6502 | 16.57 | 5900 | 0.6254 |
| 0.6339 | 16.85 | 6000 | 0.7043 |
| 0.5925 | 17.13 | 6100 | 0.8014 |
| 0.6453 | 17.42 | 6200 | 0.6385 |
| 0.6143 | 17.7 | 6300 | 0.6033 |
| 0.6057 | 17.98 | 6400 | 0.6881 |
| 0.6386 | 18.26 | 6500 | 0.6366 |
| 0.5839 | 18.54 | 6600 | 0.6563 |
| 0.6013 | 18.82 | 6700 | 0.5982 |
| 0.5999 | 19.1 | 6800 | 0.6064 |
| 0.6023 | 19.38 | 6900 | 0.5795 |
| 0.5593 | 19.66 | 7000 | 0.6538 |
| 0.6375 | 19.94 | 7100 | 0.6991 |
| 0.6073 | 20.22 | 7200 | 0.7117 |
| 0.596 | 20.51 | 7300 | 0.6034 |
| 0.5987 | 20.79 | 7400 | 0.6489 |
| 0.5922 | 21.07 | 7500 | 0.6216 |
| 0.589 | 21.35 | 7600 | 0.6257 |
| 0.6047 | 21.63 | 7700 | 0.6415 |
| 0.5775 | 21.91 | 7800 | 0.6159 |
| 0.588 | 22.19 | 7900 | 0.6095 |
| 0.5844 | 22.47 | 8000 | 0.6373 |
| 0.5964 | 22.75 | 8100 | 0.6022 |
| 0.5987 | 23.03 | 8200 | 0.6050 |
| 0.5605 | 23.31 | 8300 | 0.6083 |
| 0.5835 | 23.6 | 8400 | 0.7823 |
| 0.5816 | 23.88 | 8500 | 0.6417 |
| 0.5757 | 24.16 | 8600 | 0.6324 |
| 0.5997 | 24.44 | 8700 | 0.6046 |
| 0.5674 | 24.72 | 8800 | 0.6558 |
| 0.5703 | 25.0 | 8900 | 0.5819 |
| 0.5766 | 25.28 | 9000 | 0.6116 |
| 0.5548 | 25.56 | 9100 | 0.5877 |
| 0.564 | 25.84 | 9200 | 0.5672 |
| 0.548 | 26.12 | 9300 | 0.6073 |
| 0.5436 | 26.4 | 9400 | 0.5739 |
| 0.6006 | 26.69 | 9500 | 0.6101 |
| 0.5519 | 26.97 | 9600 | 0.5869 |
| 0.5432 | 27.25 | 9700 | 0.5721 |
| 0.5597 | 27.53 | 9800 | 0.5807 |
| 0.5254 | 27.81 | 9900 | 0.5849 |
| 0.5366 | 28.09 | 10000 | 0.5831 |
| 0.5654 | 28.37 | 10100 | 0.5993 |
| 0.57 | 28.65 | 10200 | 0.5892 |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
[ "photograph", "illustration", "map", "comics/cartoon", "editorial cartoon", "headline", "advertisement" ]
AiAdam/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
Natoshir/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
KasperRH/Raiyan_Kasper_Model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Raiyan_Kasper_Model This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
lauralex/coco_DBD_finetuned
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # coco_DBD_finetuned This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the loader_script dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.10.1 - Tokenizers 0.13.2
[ "bleeding", "blessed", "blindness", "bloodlust", "broken", "cursed", "deepwound", "endurance", "exhaustion", "exposed", "gliph", "haste", "hearing", "hindered", "incapacitated", "madness", "mangled", "oblivious", "sleeppenalty", "undetectable", "vision" ]
TopKek/detr-resnet-50_plastic_in_river_1ep
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_plastic_in_river_1ep This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the plastic_in_river dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
[ "plastic_bag", "plastic_bottle", "other_plastic_waste", "not_plastic_waste" ]
Someshfengde/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
ndkhanh95/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
jjlira/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.27.3 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
ofields/violet-v1
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Data Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
[ "n/a", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "n/a", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "n/a", "backpack", "umbrella", "n/a", "n/a", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "n/a", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "n/a", "dining table", "n/a", "n/a", "toilet", "n/a", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "n/a", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]
OttoYu/Tree-ConditionHK
# 🌳 Tree Condition Classification 樹況分類 (bilingual)

### Model Description

This online application covers the 22 most typical tree conditions, trained on 290+ images. If you find any trees that have hidden injuries, you can classify them with our model and report the tree condition via this form (https://rb.gy/c1sfja).

此在線程式涵蓋22種官方部門樹況分類的標準,超過290張圖像。如果您發現任何樹木有隱傷,您可以使用我們的模型進行分類並通過此表格報告樹木狀況。

- **Developed by:** Yu Kai Him Otto
- **Shared via:** Huggingface.co
- **Model type:** Opensource

## Uses

You can use this model for tree condition image classification.

## Training Details

### Training results

- Loss: 0.355
- Accuracy: 0.852
- Macro F1: 0.787
- Micro F1: 0.852
- Weighted F1: 0.825
- Macro Precision: 0.808
- Micro Precision: 0.852
- Weighted Precision: 0.854
- Macro Recall: 0.811
- Micro Recall: 0.852
- Weighted Recall: 0.852
[ "burls 節瘤", "canker 潰瘍", "fungal fruiting bodies 真菌子實體", "galls 腫瘤 ", "girdling root 纏繞根 ", "heavy lateral limb 重側枝", "included bark 內夾樹皮", "parasitic or epiphytic plants 寄生或附生植物", "pest and disease 病蟲害", "poor taper 不良漸尖生長", "root-plate movement 根基移位 ", "sap flow 滲液", "co-dominant branches 等勢枝", "trunk girdling 纏繞樹幹 ", "wounds or mechanical injury 傷痕或機械破損", "co-dominant stems 等勢幹", "cracks or splits 裂縫或裂開", "crooks or abrupt bends 不常規彎曲", "cross branches 疊枝", "dead surface roots 表根枯萎 ", "deadwood 枯木", "decay or cavity 腐爛或樹洞" ]
hieule/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.27.3 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
whyoke/object_detection_test_1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # object_detection_test_1 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.0 - Datasets 2.10.1 - Tokenizers 0.13.2
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
murkasad/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.27.3 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
RaphaelKalandadze/tmp_trainer
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tmp_trainer This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.27.4 - Pytorch 1.13.1+cu116 - Tokenizers 0.13.2
[ "aop", "asc", "bio", "fairtrade", "frenchpoultrymeat", "frenchvealmeat", "fsc", "gluten_free", "igp", "lebelrouge", "nutriscore", "organic", "stg", "sustainable_fisheries", "vbf", "vegetarian", "vpf" ]
wieheistdu/obj_detection_5epochs
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # obj_detection_5epochs This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.27.3 - Pytorch 2.0.0+cpu - Datasets 2.10.1 - Tokenizers 0.13.2
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
ntnxx2/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.2
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
ismadoukkali/detr-resnet-50_finetuned_OCR
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_OCR This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cpu - Datasets 2.11.0 - Tokenizers 0.13.2
[ "a aseguradora agencia", "a aseguradora agencia direccion", "a aseguradora agencia nombre", "a aseguradora carta_verde", "a aseguradora carta_verde_desde", "a aseguradora carta_verde_hasta", "a aseguradora danos_propios_no", "a aseguradora danos_propios_si", "a aseguradora num_poliza", "a aseguradora pais", "a aseguradora telefono", "a aseguradora_nombre", "a conductor apellidos", "a conductor categoria", "a conductor danos_apreciados", "a conductor direccion", "a conductor fecha_nac", "a conductor nombre", "a conductor pais", "a conductor permiso", "a conductor telefono", "a conductor valido_hasta", "a remolque matricula", "a remolque pais", "a vehiculo marca_modelo", "a vehiculo matricula", "a vehiculo pais", "a 1", "a 10", "a 11", "a 12", "a 13", "a 14", "a 15", "a 16", "a 17", "a 2", "a 3", "a 4", "a 5", "a 6", "a 7", "a 8", "a 9", "a asegurado apellidos", "a asegurado codigo_postal", "a asegurado direccion", "a asegurado nombre", "a asegurado pais", "a asegurado telefono", "a n_casillas", "b 1", "b 10", "b 11", "b 12", "b 13", "b 14", "b 15", "b 16", "b 17", "b 2", "b 3", "b 4", "b 5", "b 6", "b 7", "b 8", "b 9", "b asegurado apellidos", "b asegurado codigo_postal", "b asegurado direccion", "b asegurado nombre", "b asegurado pais", "b asegurado telefono", "b aseguradora agencia", "b aseguradora agencia direccion", "b aseguradora agencia nombre", "b aseguradora agencia pais", "b aseguradora carta_verde", "b aseguradora carta_verde_desde", "b aseguradora carta_verde_hasta", "b aseguradora danos_propios no", "b aseguradora danos_propios si", "b aseguradora n_poliza", "b aseguradora telefono", "b aseguradora_nombre", "b conductor apellidos", "b conductor categoria", "b conductor direccion", "b conductor fecha_nac", "b conductor nombre", "b conductor pais", "b conductor permiso", "b conductor telefono", "b conductor valido_hasta", "b danos_apreciados", "b n_casillas", "b remolque matricula", "b remolque pais", "b vehiculo marca_modelo", "b vehiculo matricula", "b vehiculo pais", "danos_materiales objetos no", "danos_materiales objetos si", "danos_materiales vehiculos si", "danos_materiales vehículos no", "fecha", "hora", "localizacion pais", "lugar", "testigos", "victimas no", "victimas si" ]
biglam/detr-resnet-50_fine_tuned_loc-2023
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# detr-resnet-50_fine_tuned_loc-2023

This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the loc_beyond_words dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8784

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.731 | 0.16 | 50 | 2.6356 |
| 2.4875 | 0.31 | 100 | 2.2348 |
| 2.1786 | 0.47 | 150 | 2.1148 |
| 1.9845 | 0.62 | 200 | 1.8847 |
| 1.8507 | 0.78 | 250 | 1.8331 |
| 1.6813 | 0.94 | 300 | 1.5620 |
| 1.5613 | 1.09 | 350 | 1.5898 |
| 1.4966 | 1.25 | 400 | 1.4161 |
| 1.4831 | 1.41 | 450 | 1.4831 |
| 1.4587 | 1.56 | 500 | 1.3218 |
| 1.433 | 1.72 | 550 | 1.3529 |
| 1.33 | 1.88 | 600 | 1.2453 |
| 1.2842 | 2.03 | 650 | 1.2956 |
| 1.2807 | 2.19 | 700 | 1.1993 |
| 1.1767 | 2.34 | 750 | 1.1557 |
| 1.2134 | 2.5 | 800 | 1.1393 |
| 1.1897 | 2.66 | 850 | 1.2016 |
| 1.1784 | 2.81 | 900 | 1.1235 |
| 1.2016 | 2.97 | 950 | 1.1378 |
| 1.06 | 3.12 | 1000 | 1.0803 |
| 1.1124 | 3.28 | 1050 | 1.1145 |
| 1.1191 | 3.44 | 1100 | 1.0523 |
| 1.0819 | 3.59 | 1150 | 1.0165 |
| 1.1196 | 3.75 | 1200 | 1.0349 |
| 1.0534 | 3.91 | 1250 | 1.0441 |
| 1.0365 | 4.06 | 1300 | 1.1177 |
| 0.9853 | 4.22 | 1350 | 1.0721 |
| 0.9984 | 4.38 | 1400 | 0.9923 |
| 0.9802 | 4.53 | 1450 | 1.0079 |
| 1.04 | 4.69 | 1500 | 1.0198 |
| 1.098 | 4.84 | 1550 | 0.9788 |
| 1.079 | 5.0 | 1600 | 1.0291 |
| 1.0664 | 5.16 | 1650 | 0.9691 |
| 0.9715 | 5.31 | 1700 | 0.9380 |
| 0.9723 | 5.47 | 1750 | 1.0164 |
| 1.0019 | 5.62 | 1800 | 1.0064 |
| 0.9895 | 5.78 | 1850 | 1.0364 |
| 0.9835 | 5.94 | 1900 | 0.9848 |
| 0.994 | 6.09 | 1950 | 0.9353 |
| 0.9693 | 6.25 | 2000 | 0.9425 |
| 0.9413 | 6.41 | 2050 | 0.9173 |
| 0.9375 | 6.56 | 2100 | 0.9663 |
| 0.952 | 6.72 | 2150 | 0.8951 |
| 0.8927 | 6.88 | 2200 | 0.9099 |
| 0.8777 | 7.03 | 2250 | 0.9238 |
| 0.8976 | 7.19 | 2300 | 0.9715 |
| 0.9451 | 7.34 | 2350 | 0.9373 |
| 0.8972 | 7.5 | 2400 | 0.8959 |
| 0.9393 | 7.66 | 2450 | 1.0062 |
| 0.9 | 7.81 | 2500 | 0.8920 |
| 0.915 | 7.97 | 2550 | 0.8833 |
| 0.9018 | 8.12 | 2600 | 0.8671 |
| 0.8272 | 8.28 | 2650 | 0.9304 |
| 0.943 | 8.44 | 2700 | 0.8593 |
| 0.8667 | 8.59 | 2750 | 0.8875 |
| 0.871 | 8.75 | 2800 | 0.8457 |
| 0.9023 | 8.91 | 2850 | 0.8448 |
| 0.8733 | 9.06 | 2900 | 0.8261 |
| 0.8686 | 9.22 | 2950 | 0.8489 |
| 0.8412 | 9.38 | 3000 | 0.8244 |
| 0.8385 | 9.53 | 3050 | 0.8830 |
| 0.891 | 9.69 | 3100 | 0.8349 |
| 0.8692 | 9.84 | 3150 | 0.8672 |
| 0.8247 | 10.0 | 3200 | 0.8811 |
| 0.799 | 10.16 | 3250 | 0.8784 |

### Framework versions

- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
[ "photograph", "illustration", "map", "comics/cartoon", "editorial cartoon", "headline", "advertisement" ]
memogamd/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
jfecunha/detr-model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-model This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5768 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
[ "category", "chunk", "other", "subtitle", "title" ]
chanelcolgate/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
liuliu96/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
talk2raja/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
taicun/test1
test
[ "pointer", "scale" ]
marionapique/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the load_dataset_svhn dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
[ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9" ]
TopKek/plastic_detection_rn101_20ep
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # plastic_detection_rn101_20ep This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the plastic_in_river dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.29.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
[ "plastic_bag", "plastic_bottle", "other_plastic_waste", "not_plastic_waste" ]
sanali209/detr-test
# detr-test

Generated from a custom dataset.

Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).

Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
[ "angel", "daemon", "dark elf", "draenei", "dragon", "dworf", "elf", "human", "mermaid", "naga", "ogr", "ork", "snake", "spaider", "tauren", "trol", "undead", "wolf" ]
woutervd/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.29.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
helezabi/detr_flowcharts_finetuned
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr_flowcharts_finetuned This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
[ "circle", "rectangle", "diamond", "text", "arrow" ]
Bytecube/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
Eshwar14/ppe_yolos_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ppe_yolos_model This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
Satish678/UIED
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # UIED This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.13.3
[ "button", "calendar", "checkbox", "close", "down_triangle", "dropdown", "edittext", "icon", "left_triangle", "minus", "pencil", "radio", "right_triangle", "search", "up_triangle" ]
Schnitzl/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 ### Training results ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.13.0 - Tokenizers 0.13.3
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
AtomGradient/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5_test This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.13.3
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
hayatu/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.0 - Tokenizers 0.13.3
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
Yethi/UIED_DETR
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # UIED_DETR This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.13.0 - Tokenizers 0.13.3
[ "button", "calendar", "checkbox", "close", "down_triangle", "dropdown", "edittexxt", "icon", "left_triangle", "minus", "pencil", "radio", "right_triangle", "search", "up_triangle" ]
jackie68/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
[ "coverall", "face_shield", "gloves", "goggles", "mask" ]
Madhav1988/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_cppe5 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
[ "black_star", "cat", "grey_star", "insect", "moon", "owl", "unicorn_head", "unicorn_whole" ]
Madhav1988/candy-finetuned
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # candy-finetuned This model is a fine-tuned version of [Madhav1988/candy-finetuned](https://huggingface.co/Madhav1988/candy-finetuned) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
[ "black_star", "cat", "grey_star", "insect", "moon", "owl", "unicorn_head", "unicorn_whole" ]
AlexLien/trained_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# trained_model

This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 500

### Training results

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
[ "black_star", "cat", "grey_star", "insect", "moon", "owl", "unicorn_head", "unicorn_whole" ]
taohungchang/trained_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# trained_model

This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
[ "black_star", "cat", "grey_star", "insect", "moon", "owl", "unicorn_head", "unicorn_whole" ]
iammartian0/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# detr-resnet-50_finetuned_cppe5

This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the forklift-object-detection dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
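A hedged visualization sketch that overlays the predicted `forklift` / `person` boxes on an input image; the image path, output filename, and the 0.5 threshold are illustrative choices, not values from this card.

```python
import torch
from PIL import Image, ImageDraw
from transformers import AutoImageProcessor, AutoModelForObjectDetection

ckpt = 'iammartian0/detr-resnet-50_finetuned_cppe5'
image_processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModelForObjectDetection.from_pretrained(ckpt)

image = Image.open('<path/to/warehouse/image>').convert('RGB')
inputs = image_processor(images=image, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = image_processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]

# Draw each surviving detection onto the image.
draw = ImageDraw.Draw(image)
for score, label, box in zip(results['scores'], results['labels'], results['boxes']):
    x0, y0, x1, y1 = box.tolist()
    name = model.config.id2label[label.item()]
    draw.rectangle((x0, y0, x1, y1), outline='red', width=3)
    draw.text((x0, y0), f'{name}: {score:.2f}', fill='red')
image.save('detections.png')
```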
[ "forklift", "person" ]
William0609/trained_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# trained_model

This model was trained from scratch on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200

### Training results

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
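Since the card names only the generic `imagefolder` loader, here is a hedged sketch of how such a dataset is loaded; the directory path is a placeholder, and for object detection the box annotations would typically be supplied in a `metadata.jsonl` file alongside the images.

```python
from datasets import load_dataset

# Placeholder directory; expects images plus a metadata.jsonl with annotations.
dataset = load_dataset('imagefolder', data_dir='<path/to/candy/images>')
print(dataset)
```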
[ "black_star", "cat", "grey_star", "insect", "moon", "owl", "unicorn_head", "unicorn_whole" ]
taohungchang/candy_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# candy_model

This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 350

### Training results

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
[ "black_star", "cat", "grey_star", "insect", "moon", "owl", "unicorn_head", "unicorn_whole" ]
otisfeng/detr-resnet-50_finetuned_candy_data
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# detr-resnet-50_finetuned_candy_data

This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 250

### Training results

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
[ "black_star", "cat", "grey_star", "insect", "moon", "owl", "unicorn_head", "unicorn_whole" ]
AkshayShetty/detr-resnet-50_finetuned_cppe5
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# detr-resnet-50_finetuned_cppe5

This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000

### Training results

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
[ "black_star", "cat", "grey_star", "insect", "moon", "owl", "unicorn_head", "unicorn_whole" ]
aaa950739/trained_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# trained_model

This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 500

### Training results

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
[ "black_star", "cat", "grey_star", "insect", "moon", "owl", "unicorn_head", "unicorn_whole" ]
Yuqii/finetuned_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_model

This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 300

### Training results

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
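To confirm that the checkpoint carries the label map listed below, the configuration can be inspected directly; this is a generic sanity check rather than something stated on the card.

```python
from transformers import AutoConfig

# Load only the configuration to inspect the label map baked into the checkpoint.
config = AutoConfig.from_pretrained('Yuqii/finetuned_model')
print(config.id2label)  # expected: the eight candy classes listed below
```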
[ "black_star", "cat", "grey_star", "insect", "moon", "owl", "unicorn_head", "unicorn_whole" ]