---
library_name: transformers
base_model: Jennny/llama3_8b_sft_ultrafb
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: llama3_8b_honest_rm_full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3_8b_honest_rm_full
This model is a fine-tuned version of [Jennny/llama3_8b_sft_ultrafb](https://huggingface.co/Jennny/llama3_8b_sft_ultrafb) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2657
- Accuracy: 0.908
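
The `rm` suffix and the accuracy metric suggest this checkpoint is a reward model used for preference scoring. The snippet below is a minimal sketch of how such a checkpoint could be loaded and queried with `transformers`, assuming it exposes a single-logit sequence-classification head and lives at the repo id `Jennny/llama3_8b_honest_rm_full`; both are assumptions, not confirmed by this card.

```python
# Minimal sketch: scoring a response with this checkpoint, assuming it is a
# single-logit sequence-classification reward model (not confirmed by this card).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Jennny/llama3_8b_honest_rm_full"  # assumed repo id, matching the card name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

prompt = "How many moons does Mars have?"
response = "Mars has two moons, Phobos and Deimos."
inputs = tokenizer(prompt + "\n" + response, return_tensors="pt").to(model.device)

with torch.no_grad():
    reward = model(**inputs).logits[0].item()  # higher score = preferred, by convention
print(f"reward score: {reward:.4f}")
```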
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reconstructed configuration sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
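
For reference, the hyperparameters above map roughly onto the following `transformers.TrainingArguments` configuration. This is a reconstruction from the list, not the original training script; the 8-device multi-GPU launch is assumed to be handled externally (e.g. `torchrun` or `accelerate launch`), and the precision flag is an assumption since the card does not state it.

```python
# Reconstructed from the hyperparameter list above; not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama3_8b_honest_rm_full",
    learning_rate=1e-5,
    per_device_train_batch_size=4,   # x 8 GPUs x 16 accumulation steps = 512 effective
    per_device_eval_batch_size=4,    # x 8 GPUs = 32 effective
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,  # assumption; training precision is not stated in the card
)
```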
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.3012 | 0.4632 | 50 | 0.2989 | 0.886 |
| 0.2851 | 0.9265 | 100 | 0.2657 | 0.908 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3