Update README.md
- en
---

# phi-2-universal-NER

This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the Universal-NER/Pile-NER-type dataset.

## Model description

This model shows the power of small language models: phi-2 can be fine-tuned on the free tier of Google Colab. Full fine-tuning does not fit within the free tier's resources, so PEFT (parameter-efficient fine-tuning) was used instead.
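The low-rank-adapter idea that makes PEFT fit on a free Colab GPU can be sketched in plain PyTorch. This is an illustrative toy, not the actual training code from this repository; the `LoRALinear` class, rank `r=8`, and the 2048-wide layer are assumptions for the demo:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank (LoRA) update."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # Low-rank factors: B starts at zero, so the initial update is a no-op.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale

# Only the adapter parameters are trainable -- a tiny fraction of the total.
layer = LoRALinear(nn.Linear(2048, 2048))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.2f}%)")
# → trainable: 32768 / 4229120 (0.77%)
```

In practice the Hugging Face `peft` library applies this pattern to the model's projection layers for you via `LoraConfig` and `get_peft_model`, which is why only a small fraction of phi-2's parameters need gradients and optimizer state.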

## Intended uses & limitations

This model is fine-tuned from Phi-2, and the dataset it was trained on is released for research only, so the model should likewise be used for research purposes only.

## Training and evaluation data

Fine-tuning was run for just 5 epochs.

## Training procedure notebook

https://github.com/mit1280/fined-tuning/blob/main/phi_2_fine_tune_using_PEFT%2Binference.ipynb

### Training hyperparameters