<details><summary>See axolotl config</summary>

axolotl version: `0.10.0.dev0`

```yaml
base_model: RedHatAI/Sparse-Llama-3.1-8B-2of4

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: trl-lib/tldr
    type:
      system_prompt: "Give a TL;DR of the following Reddit post."
      field_system: system
      field_instruction: prompt
      field_output: completion
      format: "<|user|>\n{instruction}\n<|assistant|>\n"
      no_input_format: "<|user|>\n{instruction}\n<|assistant|>\n"
    split: train
dataset_prepared_path: last_run_prepared
output_dir: Sparse-Llama-3.1-8B-2of4-tldr

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: true

torch_compile: true

gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 1e-5
max_grad_norm: 1.0

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false

train_on_inputs: false

bf16: auto
fp16:
tf32: false

early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
flash_attention: true

warmup_ratio: 0.05
eval_steps: 0.05
val_set_size: 0.05
save_strategy: "best"
metric_for_best_model: "loss"
debug:
deepspeed:
weight_decay: 0.0
special_tokens:
  pad_token: "<|end_of_text|>"
seed: 0

plugins:
  - axolotl.integrations.liger.LigerPlugin
  - axolotl.integrations.llm_compressor.LLMCompressorPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true

llmcompressor:
  recipe:
    finetuning_stage:
      finetuning_modifiers:
        ConstantPruningModifier:
          targets: [
            're:.*q_proj.weight',
            're:.*k_proj.weight',
            're:.*v_proj.weight',
            're:.*o_proj.weight',
            're:.*gate_proj.weight',
            're:.*up_proj.weight',
            're:.*down_proj.weight',
          ]
          start: 0
  save_compressed: true
```

</details>
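The `ConstantPruningModifier` in the `llmcompressor` recipe keeps the base model's existing 2:4 sparsity mask fixed on the listed projection weights throughout fine-tuning, so gradient updates never revive pruned weights. Below is a minimal sketch to verify that the pattern survived in the saved checkpoint, assuming it can be loaded as dense weights through `transformers` (the `save_compressed: true` serialization may additionally require the `compressed-tensors` package):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "nm-testing/Sparse-Llama-3.1-8B-2of4-tldr", torch_dtype=torch.bfloat16
)

# The same projection weights targeted by the ConstantPruningModifier recipe.
targets = ("q_proj.weight", "k_proj.weight", "v_proj.weight", "o_proj.weight",
           "gate_proj.weight", "up_proj.weight", "down_proj.weight")

for name, param in model.named_parameters():
    if not name.endswith(targets):
        continue
    # 2:4 semi-structured sparsity: at most 2 nonzeros in every group of
    # 4 consecutive weights along the input dimension.
    groups = param.detach().reshape(-1, 4)
    assert ((groups != 0).sum(dim=1) <= 2).all(), f"2:4 pattern violated in {name}"
print("All targeted projections retain the 2:4 sparsity pattern.")
```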
# Sparse-Llama-3.1-8B-2of4-tldr

This model is a fine-tuned version of [RedHatAI/Sparse-Llama-3.1-8B-2of4](https://huggingface.co/RedHatAI/Sparse-Llama-3.1-8B-2of4) on the [trl-lib/tldr](https://huggingface.co/datasets/trl-lib/tldr) dataset. It achieves the following results on the evaluation set:
- Loss: 1.8295
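A minimal generation sketch that mirrors the training-time template from the config above; the `max_new_tokens` value is illustrative, and prepending the system prompt verbatim is an assumption based on axolotl's default system handling rather than something stated in this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nm-testing/Sparse-Llama-3.1-8B-2of4-tldr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

post = "Long Reddit post to summarize goes here..."  # placeholder input
system = "Give a TL;DR of the following Reddit post."
# Mirror the training-time format: "<|user|>\n{instruction}\n<|assistant|>\n"
prompt = f"{system}\n<|user|>\n{post}\n<|assistant|>\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```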
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
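The training data can nevertheless be inspected directly. The column names in this sketch come from the axolotl config (`field_instruction: prompt`, `field_output: completion`); `val_set_size: 0.05` means 5% of the train split was held out for evaluation:

```python
from datasets import load_dataset

# trl-lib/tldr provides `prompt` / `completion` columns, which the axolotl
# config maps via field_instruction / field_output.
ds = load_dataset("trl-lib/tldr", split="train")
print(ds)                         # row count and column names
print(ds[0]["prompt"][:200])      # the Reddit post
print(ds[0]["completion"][:200])  # the reference TL;DR
```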
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 0
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 66
- num_epochs: 1.0
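The derived values above follow directly from the config; a quick sanity check, with the per-epoch step count inferred from the results table below (so approximate):

```python
# Effective batch size: micro_batch_size x gradient_accumulation_steps x num_devices.
micro_batch_size = 1
gradient_accumulation_steps = 1
num_devices = 8
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 8

# warmup_ratio 0.05 translates to ~66 warmup steps: the last eval logged
# below is step 1273 at epoch 0.9593, implying ~1327 steps per epoch.
total_steps = round(1273 / 0.9593)      # ~1327
warmup_steps = int(0.05 * total_steps)  # 66
print(total_train_batch_size, total_steps, warmup_steps)
```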
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4149        | 0.0008 | 1    | 2.2321          |
| 1.9603        | 0.0505 | 67   | 1.8758          |
| 1.8909        | 0.1010 | 134  | 1.8560          |
| 1.8109        | 0.1515 | 201  | 1.8491          |
| 1.7688        | 0.2020 | 268  | 1.8441          |
| 1.8535        | 0.2524 | 335  | 1.8411          |
| 1.773         | 0.3029 | 402  | 1.8381          |
| 1.8349        | 0.3534 | 469  | 1.8360          |
| 1.8382        | 0.4039 | 536  | 1.8342          |
| 1.7975        | 0.4544 | 603  | 1.8328          |
| 1.8171        | 0.5049 | 670  | 1.8317          |
| 1.8309        | 0.5554 | 737  | 1.8309          |
| 1.8158        | 0.6059 | 804  | 1.8303          |
| 1.8684        | 0.6564 | 871  | 1.8298          |
| 1.7743        | 0.7069 | 938  | 1.8296          |
| 1.7132        | 0.7573 | 1005 | 1.8295          |
| 1.7912        | 0.8078 | 1072 | 1.8294          |
| 1.9432        | 0.8583 | 1139 | 1.8295          |
| 1.7789        | 0.9088 | 1206 | 1.8294          |
| 1.8084        | 0.9593 | 1273 | 1.8295          |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1