
Built with Axolotl

See axolotl config

axolotl version: 0.7.0

```yaml
base_model: mistralai/Mistral-Small-24B-Instruct-2501
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

model_config:
  trust_remote_code: true
  tokenizer:
    pad_token: "</s>"
    padding_side: "right"

datasets:
  - path: data/data.jsonl
    type: chat_template
    field_messages: conversations
    message_field_role: role
    message_field_content: content
    roles:
      user: ["user"]
      assistant: ["assistant"]

load_in_4bit: true
adapter: qlora
lora_r: 64
lora_alpha: 32
lora_dropout: 0.1
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - gate_proj
  - up_proj
  - down_proj

wandb_project: mawzilla
wandb_name: Purring-stats

val_set_size: 0.01
evals_per_epoch: 2
eval_sample_packing: false
eval_max_new_tokens: 128

bf16: true
flash_attention: true
flash_attn_implementation: "flash_attention_2"
gradient_checkpointing: true
deepspeed: deepspeed_configs/zero3.json

gradient_accumulation_steps: 4
micro_batch_size: 8
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 1e-5
warmup_ratio: 0.03

max_seq_length: 8192
pad_to_sequence_len: false
sample_packing: false

output_dir: ./output
save_steps: 100
logging_steps: 10
save_safetensors: true

special_tokens:
  pad_token: "</s>"
```
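
Under the datasets mapping above, each line of data/data.jsonl should be a JSON object with a conversations list whose items carry role and content fields. A minimal sketch of writing one such record (the message text here is invented for illustration):

```python
import json

# One hypothetical record in the shape the `datasets` mapping expects:
# a "conversations" list of {"role": ..., "content": ...} messages.
record = {
    "conversations": [
        {"role": "user", "content": "What does lora_r: 64 control?"},
        {"role": "assistant", "content": "It sets the rank of the low-rank adapter matrices."},
    ]
}

# Append the record as one JSON object per line (the JSONL convention).
with open("data/data.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```

With the config saved as, say, config.yaml, training would typically be launched through Axolotl's CLI (e.g. `accelerate launch -m axolotl.cli.train config.yaml`); the exact invocation used for this run is not stated in the card.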

output

This model is a fine-tuned version of mistralai/Mistral-Small-24B-Instruct-2501 on the data/data.jsonl dataset. It achieves the following results on the evaluation set:

  • Loss: 1.9389
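
Since this repository publishes a LoRA adapter rather than merged weights (adapter: qlora above), inference requires loading the base model and attaching the adapter. A minimal sketch with transformers and peft; the 4-bit settings mirror the training config, the nf4 quantization type is an assumption (Axolotl's usual QLoRA default), and the prompt is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE = "mistralai/Mistral-Small-24B-Instruct-2501"
ADAPTER = "Mawdistical/Mawdistic-NightLife-LoRa"  # this repository

# Mirror the training setup: 4-bit base weights, bf16 compute.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",  # assumption: Axolotl's usual QLoRA setting
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForCausalLM.from_pretrained(BASE, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER)  # attach the LoRA weights

messages = [{"role": "user", "content": "Hello!"}]  # placeholder prompt
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```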

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: paged_adamw_32bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 28
  • num_epochs: 3.0
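
The derived values in this list follow from the raw config. A quick sanity check (the single-process assumption is implied by the reported total of 32; per-epoch step counts are read off the results table below):

```python
micro_batch_size = 8             # per-device train batch size
gradient_accumulation_steps = 4  # optimizer step every 4 micro-batches
num_processes = 1                # assumption: implied by total_train_batch_size = 32

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_processes
print(total_train_batch_size)    # 32, as listed above

# warmup_ratio 0.03 over ~960 optimizer steps (3 epochs x ~320 steps/epoch)
# gives the listed 28 warmup steps; exact rounding varies by trainer version.
print(int(0.03 * 3 * 320))       # 28
```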

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0031 | 1    | 2.7696          |
| 2.187         | 0.5012 | 160  | 2.1111          |
| 2.1016        | 1.0    | 320  | 2.0063          |
| 1.9829        | 1.5012 | 480  | 1.9645          |
| 1.9534        | 2.0    | 640  | 1.9441          |
| 1.8833        | 2.5012 | 800  | 1.9389          |

Framework versions

  • PEFT 0.14.0
  • Transformers 4.48.3
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.1
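
To check that a local environment matches these pins before loading the adapter, a small convenience sketch (not part of the original card):

```python
import datasets, peft, tokenizers, torch, transformers

# Versions this adapter was trained with, per the list above.
expected = {
    "peft": "0.14.0",
    "transformers": "4.48.3",
    "torch": "2.5.1+cu121",
    "datasets": "3.2.0",
    "tokenizers": "0.21.1",
}
installed = {m.__name__: m.__version__ for m in (peft, transformers, torch, datasets, tokenizers)}
for name, want in expected.items():
    print(f"{name}: want {want}, have {installed[name]}")
```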