Update README.md

Model:

4. Trainable Params: 136,839
5. Accuracy: 0.823 | Precision: 0.825 | Recall: 0.823 | F1: 0.821
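For reference, metrics in the style reported above can be reproduced from raw predictions with a small pure-Python helper. This is an illustrative sketch, not the card's evaluation code; it assumes a single-label multiclass setup and support-weighted averaging (the averaging under which recall equals accuracy, consistent with the identical 0.823 values above):

```python
from collections import Counter

def classification_metrics(y_true, y_pred):
    """Accuracy plus support-weighted precision, recall, and F1."""
    n = len(y_true)
    support = Counter(y_true)  # number of true samples per class
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / n
    precision = recall = f1 = 0.0
    for c in support:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        predicted = sum(p == c for p in y_pred)
        prec = tp / predicted if predicted else 0.0
        rec = tp / support[c]
        score = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        weight = support[c] / n  # weight each class by its support
        precision += weight * prec
        recall += weight * rec
        f1 += weight * score
    return accuracy, precision, recall, f1
```

Note that with support-weighted averaging, recall is mathematically equal to accuracy whenever every sample has exactly one true label.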

## Room for Improvement

This model was created with extremely limited hardware-acceleration (GPU) resources. It is therefore highly likely that evaluation metrics surpassing the 95% mark can be achieved through the following:
1. MobileNetV2 was used for its fast inference and low latency, but with more resources a more suitable base model could be found.
2. Data augmentation to better correct for class imbalances.
3. Using learning rate decay to train for longer (with a lower LR) after nearing a local minimum (approx. 60 epochs).
4. Error analysis.
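The class-imbalance correction in item 2 could be sketched as simple flip-based oversampling. This is a hand-rolled illustration, not the card's pipeline; the list-of-rows image format and function names are assumptions:

```python
import random

def hflip(image):
    """Horizontal flip of an image given as a list of pixel rows."""
    return [row[::-1] for row in image]

def oversample_with_flips(minority_images, target_count, seed=0):
    """Grow a minority class to target_count by appending flipped copies."""
    rng = random.Random(seed)
    augmented = list(minority_images)
    while len(augmented) < target_count:
        augmented.append(hflip(rng.choice(minority_images)))
    return augmented
```

In practice the same idea is usually applied on-the-fly with framework augmentation layers rather than by materializing copies.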
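The decay strategy in item 3 could be sketched as a per-epoch schedule: hold the base learning rate until the loss plateaus around epoch 60, then decay it exponentially so training can continue productively. This is an illustration, not the card's training code; the 0.9 decay rate and the function name are assumptions (the 60-epoch knee comes from the list above):

```python
def lr_schedule(epoch, base_lr=1e-3, decay_start=60, decay_rate=0.9):
    """Hold base_lr until decay_start, then decay exponentially per epoch."""
    if epoch < decay_start:
        return base_lr
    return base_lr * decay_rate ** (epoch - decay_start)
```

In a Keras setup such a function could be wired in with the `tf.keras.callbacks.LearningRateScheduler` callback.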
## Uses

Use the code below to get started with the model locally:

main()
#### Preprocessing [optional]