ai-team-ori committed
Commit 0a5ecf6 · verified · 1 Parent(s): 9681b81

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -116,7 +116,7 @@ library_name: transformers

 #### Finetuning:
 - **Novel Trainer Architecture**: A custom trainer was written to ensure efficient supervised finetuning, with custom callbacks to enable higher observability during the training process.
-- **Custom Dynamic Layer Freezing**: Most active layers were identified in the model by running inference on a subset of the training data using the pre-trained models. These layers were then kept frozen during the training process while all the other layers were kept frozen. This enabled faster convergence and efficient finetuning
+- **Custom Dynamic Layer Freezing**: Most active layers were identified in the model by running inference on a subset of the training data using the pre-trained models. These layers were then kept unfrozen during the training process while all the other layers were kept frozen. This enabled faster convergence and efficient finetuning
 - **Deepspeed Integration**: Deepspeed was also utilized to speed up, and optimize the training process.

 ### Performance Overview
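
The corrected bullet describes selective layer freezing: layers found most active during an inference pass are left trainable, and everything else is frozen. The repo does not publish the trainer code, so the sketch below is a hypothetical illustration of that idea in PyTorch — the `activity` scores, the `freeze_all_but_most_active` helper, and the toy model are all made up for demonstration; only the freeze/unfreeze pattern reflects what the commit describes.

```python
# Hypothetical sketch of "dynamic layer freezing": unfreeze the k most
# active layers, freeze the rest. Activity scores are assumed to come
# from an inference pass over a subset of the training data; here they
# are placeholder values.
import torch.nn as nn


def freeze_all_but_most_active(model: nn.Module, activity: dict, k: int) -> None:
    """Leave the k highest-activity child modules trainable; freeze all others."""
    top = set(sorted(activity, key=activity.get, reverse=True)[:k])
    for name, module in model.named_children():
        trainable = name in top
        for param in module.parameters():
            param.requires_grad = trainable


# Usage with a toy model and made-up activity scores (children of a
# Sequential are named "0", "1", "2", ...).
model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 4), nn.Linear(4, 2))
activity = {"0": 0.1, "1": 0.9, "2": 0.5}
freeze_all_but_most_active(model, activity, k=2)

# Only the parameters of the two most active layers remain trainable.
trainable_layers = {n.split(".")[0] for n, p in model.named_parameters() if p.requires_grad}
```

Keeping `requires_grad=False` on the frozen layers means the optimizer skips their gradients entirely, which is one plausible source of the faster convergence the commit message claims.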