vpakarinen committed
Commit 68b80e3 · verified · 1 Parent(s): 98b2e3c

Update README.md

Files changed (1):
  1. README.md +2 -38
README.md CHANGED
@@ -90,42 +90,6 @@ weight_decay: 0.0
 
 # outputs/mymodel
 
- This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on the ICEPVP8977/Uncensored_Small_Reasoning dataset.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.0002
- - train_batch_size: 8
- - eval_batch_size: 8
- - seed: 42
- - optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 17
- - num_epochs: 1.0
-
- ### Training results
-
-
-
- ### Framework versions
-
- - PEFT 0.14.0
- - Transformers 4.49.0
- - Pytorch 2.5.1+cu124
- - Datasets 3.2.0
- - Tokenizers 0.21.0
+ Fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on the ICEPVP8977/Uncensored_Small_Reasoning dataset.
+
+ This LoRA adapter fully uncensors Llama 3.1 8B; use the Alpaca instruction template.
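
The new README line points at the Alpaca instruction template without showing it. For reference, the sketch below uses the standard no-input Alpaca format; the commit does not say whether the with-input variant was used, so treat this as the common default rather than a confirmed choice.

```python
# Standard no-input Alpaca prompt format (assumed; the with-input
# variant inserts an extra "### Input:" section before the response).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="Summarize the commit above.")
```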
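Because the removed framework list names PEFT, using the model means attaching the LoRA adapter to the base checkpoint. A minimal loading sketch, assuming the adapter lives at the README's `outputs/mymodel` output directory (the published Hub repo id is not shown on this page):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "unsloth/Meta-Llama-3.1-8B-Instruct"

# Load the frozen base model and tokenizer first.
base_model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(BASE)

# "outputs/mymodel" is the training output dir named in the README;
# swap in the published adapter repo id when loading from the Hub.
model = PeftModel.from_pretrained(base_model, "outputs/mymodel")
```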
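The hyperparameter block this commit deletes reads like autogenerated Trainer output. As a hedged reconstruction only (the actual training script is not shown), those values map onto `transformers.TrainingArguments` roughly as follows; the Adam betas and epsilon in the deleted text are the library defaults, and `weight_decay: 0.0` comes from the hunk context.

```python
from transformers import TrainingArguments

# Reconstruction of the deleted hyperparameter list; betas=(0.9, 0.999)
# and epsilon=1e-08 match the TrainingArguments defaults.
args = TrainingArguments(
    output_dir="outputs/mymodel",      # from the README context line
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="paged_adamw_8bit",          # OptimizerNames.PAGED_ADAMW_8BIT
    lr_scheduler_type="cosine",
    warmup_steps=17,
    num_train_epochs=1.0,
    weight_decay=0.0,
)
```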