---
license: apache-2.0
base_model: microsoft/swinv2-base-patch4-window12-192-22k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: swinv2-base-patch4-window12-192-22k-ConcreteClassifier-PVT
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6160830090791181
    - name: F1
      type: f1
      value: 0.6252886431691335
    - name: Precision
      type: precision
      value: 0.6429213069076691
    - name: Recall
      type: recall
      value: 0.6390705914040895
---

# swinv2-base-patch4-window12-192-22k-ConcreteClassifier-PVT

This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window12-192-22k](https://huggingface.co/microsoft/swinv2-base-patch4-window12-192-22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9423
- Accuracy: 0.6161
- F1: 0.6253
- Precision: 0.6429
- Recall: 0.6391

## Model description

More information needed

## Intended uses & limitations

More information needed
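
A minimal inference sketch is shown below. The repo id is an assumption inferred from the model name on this card, and the blank 192×192 placeholder image stands in for a real concrete surface photo:

```python
from PIL import Image
from transformers import pipeline

# Repo id assumed from the model name on this card.
MODEL_ID = "mmomm25/swinv2-base-patch4-window12-192-22k-ConcreteClassifier-PVT"

classifier = pipeline("image-classification", model=MODEL_ID)

# Placeholder input; substitute a real concrete surface photo.
image = Image.new("RGB", (192, 192))
predictions = classifier(image)
print(predictions)  # list of {'label': ..., 'score': ...} dicts, best first
```

The pipeline applies the checkpoint's own image processor, so no manual resizing or normalization is needed.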

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
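
The derived quantities above fit together as follows (the per-epoch step count of 1927 is taken from the training-results table; the warmup-step figure is our arithmetic, not a logged value):

```python
# Quantities reported in the hyperparameter list above
train_batch_size = 2
gradient_accumulation_steps = 4
steps_per_epoch = 1927   # optimizer steps per epoch, from the results table
num_epochs = 30
warmup_ratio = 0.1

# Effective (total) train batch size per optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 8

# Cosine schedule: warmup covers the first 10% of all optimizer steps
total_steps = steps_per_epoch * num_epochs
warmup_steps = int(total_steps * warmup_ratio)
print(total_steps, warmup_steps)  # 57810 5781
```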

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.2473        | 1.0   | 1927  | 0.9809          | 0.5909   | 0.5910 | 0.6662    | 0.6016 |
| 1.7959        | 2.0   | 3854  | 1.8580          | 0.2685   | 0.1764 | 0.1560    | 0.2748 |
| 1.8437        | 3.0   | 5781  | 1.9243          | 0.1743   | 0.1111 | 0.1517    | 0.1888 |
| 1.803         | 4.0   | 7708  | 1.9003          | 0.1847   | 0.1316 | 0.1461    | 0.1929 |
| 1.8121        | 5.0   | 9635  | 1.7386          | 0.3323   | 0.2413 | 0.2477    | 0.3378 |
| 1.7984        | 6.0   | 11562 | 1.7322          | 0.3014   | 0.2074 | 0.1733    | 0.3123 |
| 1.7385        | 7.0   | 13489 | 1.6964          | 0.3121   | 0.2220 | 0.2482    | 0.3240 |
| 1.578         | 8.0   | 15416 | 1.6265          | 0.2827   | 0.2047 | 0.2361    | 0.2983 |
| 1.7397        | 9.0   | 17343 | 1.4545          | 0.4169   | 0.3801 | 0.3861    | 0.4369 |
| 1.9688        | 10.0  | 19270 | 1.5583          | 0.3761   | 0.3427 | 0.3766    | 0.3992 |
| 1.5298        | 11.0  | 21197 | 1.3465          | 0.4672   | 0.4189 | 0.4143    | 0.4949 |
| 1.7964        | 12.0  | 23124 | 2.1154          | 0.2060   | 0.1505 | 0.2461    | 0.1965 |
| 1.6734        | 13.0  | 25051 | 1.8294          | 0.2672   | 0.2376 | 0.2619    | 0.2809 |
| 1.4376        | 14.0  | 26978 | 1.3830          | 0.4254   | 0.4172 | 0.4554    | 0.4503 |
| 1.3403        | 15.0  | 28905 | 1.2193          | 0.4926   | 0.4428 | 0.5138    | 0.5163 |
| 1.4806        | 16.0  | 30832 | 1.2119          | 0.5061   | 0.4996 | 0.4999    | 0.5224 |
| 1.2526        | 17.0  | 32759 | 1.1265          | 0.5323   | 0.5155 | 0.5564    | 0.5477 |
| 1.3673        | 18.0  | 34686 | 1.0845          | 0.5554   | 0.5594 | 0.5621    | 0.5721 |
| 1.241         | 19.0  | 36613 | 1.1499          | 0.5318   | 0.5281 | 0.5775    | 0.5464 |
| 1.2881        | 20.0  | 38540 | 1.0368          | 0.5733   | 0.5823 | 0.5902    | 0.5939 |
| 1.1817        | 21.0  | 40467 | 1.0927          | 0.5629   | 0.5595 | 0.6241    | 0.5901 |
| 1.0632        | 22.0  | 42394 | 0.9835          | 0.5917   | 0.5978 | 0.6133    | 0.6181 |
| 1.3852        | 23.0  | 44321 | 0.9644          | 0.6109   | 0.6181 | 0.6280    | 0.6298 |
| 0.9998        | 24.0  | 46248 | 1.0270          | 0.5730   | 0.5858 | 0.6038    | 0.5927 |
| 1.2504        | 25.0  | 48175 | 0.9579          | 0.6042   | 0.6089 | 0.6356    | 0.6280 |
| 0.9085        | 26.0  | 50102 | 0.9158          | 0.6257   | 0.6345 | 0.6389    | 0.6479 |
| 1.0098        | 27.0  | 52029 | 0.9567          | 0.5984   | 0.6064 | 0.6229    | 0.6214 |
| 1.0409        | 28.0  | 53956 | 0.9464          | 0.6114   | 0.6212 | 0.6387    | 0.6342 |
| 1.3289        | 29.0  | 55883 | 0.9405          | 0.6176   | 0.6272 | 0.6448    | 0.6401 |
| 0.8946        | 30.0  | 57810 | 0.9423          | 0.6161   | 0.6253 | 0.6429    | 0.6391 |
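
The logs behind the table above recorded each metric as a small Python dict, which suggests a `compute_metrics` helper of roughly the following shape. This is a sketch, not the author's actual code, and the `"macro"` averaging setting is an assumption — the card does not record which average was used:

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def compute_metrics(y_true, y_pred):
    # "macro" averaging is an assumption; the card does not record the setting.
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred, average="macro"),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall": recall_score(y_true, y_pred, average="macro"),
    }

# Toy sanity check with three classes, one misclassified sample
metrics = compute_metrics([0, 1, 2, 2], [0, 1, 2, 1])
print(metrics["accuracy"])  # 0.75
```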

### Framework versions

- Transformers 4.37.2
- PyTorch 2.1.0
- Datasets 2.17.1
- Tokenizers 0.15.2