See axolotl config
axolotl version: 0.4.0
base_model: croissantllm/CroissantLLMBase
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizerFast
is_llama_derived_model: true
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
tokens:
  - "<|im_start|>"
  - "<|im_end|>"
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: manu/dataset_1
    split: train
    type: sharegpt
chat_template: "chatml"
default_system_message: ""
dataset_prepared_path: new_pii
val_set_size: 0.05
output_dir: /gpfs/workdir/fayssema/models/out_translation
sequence_len: 2048
sample_packing: false
pad_to_sequence_len: false
adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 2
micro_batch_size: 16
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00003
train_on_inputs: false
group_by_length: false
bf16: auto
fp16: false
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
flash_attn_cross_entropy: false
flash_attn_rms_norm: true
flash_attn_fuse_qkv: false
flash_attn_fuse_mlp: true
warmup_steps: 100
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed: #deepspeed_configs/zero2.json # multi-gpu only
weight_decay: 0.05
fsdp:
fsdp_config:
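The config above trains with the ChatML conversation format (note the added `<|im_start|>` / `<|im_end|>` tokens and `chat_template: "chatml"`). Below is a minimal inference sketch assuming the fine-tuned weights are published as a standard Transformers checkpoint; the repository id and the example prompt are placeholders, not part of the config.

```python
# Minimal inference sketch (assumptions: placeholder repo id, ChatML prompting
# as configured above; not an official usage snippet for this model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "manu/dataset_1_model"  # placeholder repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Build a ChatML prompt by hand, matching the <|im_start|>/<|im_end|> tokens
# added in the config.
prompt = (
    "<|im_start|>user\n"
    "Translate to French: The cat sleeps on the sofa.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=128,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>"),
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

If the tokenizer ships with a chat template, `tokenizer.apply_chat_template` can build the same prompt from a list of messages instead of manual string assembly.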
/gpfs/workdir/fayssema/models/out_translation
This model is a fine-tuned version of croissantllm/CroissantLLMBase on the manu/dataset_1 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0098
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (an equivalent Trainer configuration is sketched after this list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
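For readers more familiar with the Hugging Face Trainer, the same settings are sketched below as TrainingArguments. This is only an illustration of the hyperparameters (the run itself was launched through axolotl), and the output_dir is a placeholder; the total_train_batch_size of 32 is the per-device batch size 16 multiplied by 2 gradient-accumulation steps.

```python
# Illustrative TrainingArguments mirroring the hyperparameters above; the
# actual training used axolotl, so treat this as a sketch, not the entry point.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out_translation",      # placeholder path
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,     # effective batch size: 16 * 2 = 32
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    weight_decay=0.05,
    optim="adamw_bnb_8bit",
    bf16=True,                         # the config uses bf16: auto
    gradient_checkpointing=True,
    logging_steps=1,
    seed=42,
)
```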
Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6652        | 0.0   | 1    | 2.0261          |
| 0.2986        | 0.25  | 73   | 0.0199          |
| 0.19          | 0.5   | 146  | 0.0136          |
| 0.3032        | 0.76  | 219  | 0.0158          |
| 0.1343        | 1.01  | 292  | 0.0125          |
| 0.12          | 1.26  | 365  | 0.0117          |
| 0.2266        | 1.51  | 438  | 0.0113          |
| 0.1924        | 1.77  | 511  | 0.0097          |
| 0.1448        | 2.02  | 584  | 0.0095          |
| 0.0718        | 2.27  | 657  | 0.0098          |
| 0.1184        | 2.52  | 730  | 0.0097          |
| 0.1124        | 2.77  | 803  | 0.0098          |
Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0