---
tags:
- image-classification
- birder
- pytorch
library_name: birder
license: apache-2.0
---

# Model Card for efficientnet_v2_s_arabian-peninsula

An EfficientNet v2 (small) image classification model. This model was trained on the `arabian-peninsula` dataset, which covers all the relevant bird species found in the Arabian Peninsula, including rarities.

The species list is derived from data available at <https://avibase.bsc-eoc.org/checklist.jsp?region=ARA>.

## Model Details

- **Model Type:** Image classification and detection backbone
- **Model Stats:**
  - Params (M): 21.1
  - Input image size: 256 x 256
- **Dataset:** arabian-peninsula (735 classes)

- **Papers:**
  - EfficientNetV2: Smaller Models and Faster Training: <https://arxiv.org/abs/2104.00298>

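The figures above can be sanity-checked from a loaded checkpoint. The sketch below is illustrative and assumes `net` behaves like a standard `torch.nn.Module` (the library is PyTorch based); it uses only the loading helpers shown in the usage examples further down.

```python
import birder

# Load the pretrained classifier in inference mode
(net, model_info) = birder.load_pretrained_model("efficientnet_v2_s_arabian-peninsula", inference=True)

# Total parameter count, expected to be roughly 21.1M
num_params = sum(p.numel() for p in net.parameters())
print(f"Parameters: {num_params / 1e6:.1f}M")

# Expected input resolution, encoded in the model signature (256 x 256)
print(birder.get_size_from_signature(model_info.signature))
```
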
## Model Usage

### Image Classification

```python
import birder
from birder.inference.classification import infer_image

(net, model_info) = birder.load_pretrained_model("efficientnet_v2_s_arabian-peninsula", inference=True)

# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)

# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)

image = "path/to/image.jpeg"  # or a PIL image, must be loaded in RGB format
(out, _) = infer_image(net, image, transform)
# out is a NumPy array with shape (1, 735), representing class probabilities
```
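Because `out` is a plain NumPy probability array, ranking the predictions needs nothing beyond NumPy. A minimal sketch, continuing from the snippet above; the top-5 cutoff is arbitrary and the indices follow the class order of the `arabian-peninsula` dataset:

```python
import numpy as np

probs = out[0]  # shape (735,), one probability per class
top5 = np.argsort(probs)[::-1][:5]  # indices of the five most likely classes
for idx in top5:
    print(f"class {idx}: {probs[idx]:.4f}")
```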

### Image Embeddings

```python
import birder
from birder.inference.classification import infer_image

(net, model_info) = birder.load_pretrained_model("efficientnet_v2_s_arabian-peninsula", inference=True)

# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)

# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)

image = "path/to/image.jpeg"  # or a PIL image
(out, embedding) = infer_image(net, image, transform, return_embedding=True)
# embedding is a NumPy array with shape (1, 1280)
```
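The 1280-dimensional embeddings can be compared directly, for example with cosine similarity for retrieval or de-duplication. A minimal sketch using NumPy only; the two image paths are placeholders and `net` and `transform` are reused from the block above:

```python
import numpy as np

(_, emb_a) = infer_image(net, "path/to/image_a.jpeg", transform, return_embedding=True)
(_, emb_b) = infer_image(net, "path/to/image_b.jpeg", transform, return_embedding=True)

# Cosine similarity between the two (1, 1280) embeddings
a = emb_a[0]
b = emb_b[0]
similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"Cosine similarity: {similarity:.3f}")
```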

### Detection Feature Map

```python
from PIL import Image
import birder

(net, model_info) = birder.load_pretrained_model("efficientnet_v2_s_arabian-peninsula", inference=True)

# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)

# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)

image = Image.open("path/to/image.jpeg")
features = net.detection_features(transform(image).unsqueeze(0))
# features is a dict (stage name -> torch.Tensor)
print([(k, v.size()) for k, v in features.items()])
# Output example:
# [('stage1', torch.Size([1, 48, 64, 64])),
#  ('stage2', torch.Size([1, 64, 32, 32])),
#  ('stage3', torch.Size([1, 160, 16, 16])),
#  ('stage4', torch.Size([1, 256, 8, 8]))]
```
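Since the feature extractor is a convolutional backbone, it should also accept batched input: stacking several transformed images along the first dimension gives every stage tensor a matching batch dimension. A minimal sketch under that assumption, reusing `net` and `transform` from the block above with placeholder image paths:

```python
import torch
from PIL import Image

paths = ["path/to/image1.jpeg", "path/to/image2.jpeg"]
batch = torch.stack([transform(Image.open(p)) for p in paths])  # shape (2, 3, 256, 256)

features = net.detection_features(batch)
# Each stage tensor now has batch size 2, e.g. stage4 -> torch.Size([2, 256, 8, 8])
print([(k, v.size()) for k, v in features.items()])
```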

## Citation

```bibtex
@misc{tan2021efficientnetv2smallermodelsfaster,
      title={EfficientNetV2: Smaller Models and Faster Training},
      author={Mingxing Tan and Quoc V. Le},
      year={2021},
      eprint={2104.00298},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2104.00298},
}
```