FelaKuti committed · verified · Commit 0a8ca07 · 1 Parent(s): 4f39cc1

Update README.md

Files changed (1): README.md (+4 −4)
README.md CHANGED
@@ -30,13 +30,14 @@ Model:
 4. Trainable Params: 136,839
 5. Accuracy: 0.823 | Precision: 0.825 | Recall: 0.823 | F1: 0.821

-Room for Improvement:
+## Room for Improvement:

 This model was created with extremely limited hardware acceleration (GPU) resources. Therefore, it is highly likely that evaluation metrics surpassing the 95% mark can be achieved in the following ways:

 1. MobileNetV2 was used for its fast inference and low latency, but with more resources a more suitable base model might be found.
 2. Data augmentation in order to better correct for class imbalances.
-3. Using a learning rate scheduler to train for longer (with lower LR) after nearing local minima (approx. 60 epochs).
+3. Using learning rate decay to train for longer (with lower LR) after nearing local minima (approx. 60 epochs).
+4. Error analysis


 ## Uses

@@ -119,9 +120,8 @@ Use the code below to get started with the model locally:
 main()


-### Training Data

-Dataset used: FER (available on Kaggle)
+

 #### Preprocessing [optional]
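The learning-rate decay mentioned in point 3 of the edited list could be sketched as below. This is a minimal illustration only, not code from the repository: the base rate, the epoch-60 threshold, the halving factor, and the 10-epoch interval are all assumed values chosen to match the README's "lower LR after approx. 60 epochs" suggestion.

```python
def decayed_lr(epoch, base_lr=1e-3, decay_start=60, decay_rate=0.5, every=10):
    """Piecewise-constant decay: keep base_lr until `decay_start`, then
    multiply by `decay_rate` every `every` epochs (all values assumed)."""
    if epoch < decay_start:
        return base_lr
    steps = 1 + (epoch - decay_start) // every
    return base_lr * (decay_rate ** steps)

# Before the assumed epoch-60 local minimum, the LR is unchanged;
# afterwards it halves every 10 epochs so training can continue longer.
print(decayed_lr(0))   # 0.001
print(decayed_lr(60))  # 0.0005
print(decayed_lr(75))  # 0.00025
```

In a Keras training loop such a schedule would typically be attached via a `tf.keras.callbacks.LearningRateScheduler` callback, which calls a function like this once per epoch.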