---
library_name: transformers
license: apache-2.0
base_model: Heralax/philosophy-llm-mistral-pretrain
tags:
  - generated_from_trainer
model-index:
  - name: philosophy-hardcore-pretraining
    results: []
---

Built with Axolotl

See axolotl config

axolotl version: `0.4.1`

```yaml
# This is an axolotl config that allowed the creation of a model knowledgeable about Hawaii.
# Replace the dataset paths under `datasets:` with your own.
# If you want a reference point for what kind of data was fed into this model, check out hawaiitoolkit: https://github.com/e-p-armstrong/hawaiitoolkit.git

# Rent a GPU with a compute provider like Vast.ai or Runpod
# (Make sure it is using the axolotl docker image --- winglian/axolotl:main-latest)
# Copy this file over to the rented instance, in the /workspace/axolotl directory
# If running on a single-GPU setup, you must run:
# conda install -c conda-forge mpi4py mpich
# Then run this command from the /workspace/axolotl directory:
# accelerate launch --use_deepspeed -m axolotl.cli.train axolotl_config_hawaii_llama3_Jun_9_2024.yaml

# If using GaLore, do not use deepspeed

# (To copy files over to a rented GPU instance, you'll have to use SSH to Secure CoPy (scp) files from your machine to the rented one. This is what such a command might look like; adapt it to your needs.)
# scp -P 40001 -r ./ [email protected]:/workspace/axolotl/


# TODO: to properly make this great, MAKE VARIED SYSTEM PROMPTS FOR ALL THINGS IN THE Hawaii DATASET.
# Also write automated code to produce them, so that they are built for this project and not the other one.
# OK, now I am truly back to working on the efficiency problem.

base_model: Heralax/philosophy-llm-mistral-pretrain
tokenizer_type: AutoTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: json
    data_files: philosophy_qa_normal.jsonl
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: json
    data_files: philosophy_qa_open-ended.jsonl
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: json
    data_files: philosophy_qa_negative.jsonl
    ds_type: json
    type: sharegpt
    conversation: chatml

dataset_prepared_path: last_run_prepared
output_dir: ./philosophy-hardcore-pretraining

sequence_len: 4096
sample_packing: false
pad_to_sequence_len: true
shuffle_merged_datasets: true

wandb_project: mistral-philosophy
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:

gradient_accumulation_steps: 6
micro_batch_size: 2
eval_batch_size: 1
num_epochs: 6
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.000020
weight_decay: 0
# Gradient clipping max norm
max_grad_norm: 1.0
noisy_embedding_alpha: 0
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint: 
logging_steps: 1
xformers_attention:
flash_attention: true

chat_template: chatml

warmup_ratio: 0.5
auto_resume_from_checkpoints: false
#warmup_ratio: 0.5
eval_steps: 10
saves_per_epoch: 1
eval_sample_packing: false
save_total_limit: 3
debug:
deepspeed: deepspeed_configs/zero2.json
special_tokens:
  pad_token: "<|end_of_text|>"
```

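The three `datasets:` entries in the config use axolotl's `sharegpt` loader with ChatML conversation formatting. The actual `philosophy_qa_*.jsonl` files are not reproduced in this card, so the snippet below is only a minimal sketch of the record shape that loader expects; the field values are hypothetical.

```python
import json

# Minimal sketch of one ShareGPT-style record (the shape axolotl's
# `type: sharegpt` loader expects). The question/answer text here is
# hypothetical, not taken from the actual philosophy_qa_*.jsonl files.
record = {
    "conversations": [
        {"from": "system", "value": "You are an assistant knowledgeable about philosophy."},
        {"from": "human", "value": "What does Kant mean by the categorical imperative?"},
        {"from": "gpt", "value": "Kant's categorical imperative is an unconditional moral principle..."},
    ]
}

# Each line of a .jsonl dataset file is one such record serialized as JSON.
with open("philosophy_qa_example.jsonl", "w") as f:
    f.write(json.dumps(record) + "\n")
```
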
# philosophy-hardcore-pretraining

This model is a fine-tuned version of [Heralax/philosophy-llm-mistral-pretrain](https://huggingface.co/Heralax/philosophy-llm-mistral-pretrain) on the philosophy QA datasets listed in the axolotl config above (philosophy_qa_normal.jsonl, philosophy_qa_open-ended.jsonl, and philosophy_qa_negative.jsonl).

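Since training used ChatML-formatted conversations (`chat_template: chatml` in the config above), a minimal inference sketch with Transformers might look like the following. The repository id, prompt, and generation settings here are illustrative assumptions, and it is assumed that the saved tokenizer carries the ChatML chat template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Heralax/philosophy-mistral"  # assumed repository id for this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Assumes the tokenizer was saved with the ChatML chat template used in training.
messages = [{"role": "user", "content": "Summarize Hume's problem of induction."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
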
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- gradient_accumulation_steps: 6
- total_train_batch_size: 72 (see the sanity check after this list)
- total_eval_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 136
- num_epochs: 6

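The total batch sizes above follow directly from the per-device batch sizes, the gradient accumulation steps, and the number of GPUs; a quick sanity check:

```python
# Effective train batch size = per-GPU micro batch size
#   * gradient accumulation steps * number of GPUs.
micro_batch_size = 2
gradient_accumulation_steps = 6
num_devices = 6

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 72  # matches the value listed above

# Evaluation does not accumulate gradients, so it is simply per-GPU size * GPUs.
total_eval_batch_size = 1 * num_devices
assert total_eval_batch_size == 6
```
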
### Training results

### Framework versions

- Transformers 4.45.0.dev0
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1