---
library_name: transformers
license: llama3.1
base_model: Heralax/demo-nursing-model-pretrain
tags:
- axolotl
- generated_from_trainer
datasets:
- axolotl_correction_conversations_hidden-openstax-nursing.json
- axolotl_rag_conversations_hidden-openstax-nursing.jsonl
- pretraining_subset_1421673.jsonl
- factual_sft_completion/combined_all_0.jsonl
- factual_sft_completion/combined_all_2.jsonl
- factual_sft_completion/combined_all_3.jsonl
- factual_sft_completion/combined_all_1.jsonl
- >-
  generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_2012946.jsonl
- >-
  generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_503236.jsonl
- >-
  generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_251557.jsonl
- >-
  generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_1006231.jsonl
- >-
  generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_503236.jsonl
- >-
  generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_251557.jsonl
- >-
  generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_503115.jsonl
- >-
  generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_503236.jsonl
- >-
  generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_251557.jsonl
- >-
  generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_1006473.jsonl
- >-
  generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_503236.jsonl
- >-
  generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_251557.jsonl
model-index:
- name: demo-nursing-model-sft-2
  results: []
---

The following axolotl configuration was used to produce this model:

```yaml
base_model: Heralax/demo-nursing-model-pretrain
tokenizer_type: AutoTokenizer
model_type: AutoModelForCausalLM
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: axolotl_correction_conversations_hidden-openstax-nursing.json
  type: input_output
- path: axolotl_rag_conversations_hidden-openstax-nursing.jsonl
  type: input_output
- path: pretraining_subset_1421673.jsonl
  type: completion
- path: factual_sft_completion/combined_all_0.jsonl
  type: completion
- path: factual_sft_completion/combined_all_2.jsonl
  type: completion
- path: factual_sft_completion/combined_all_3.jsonl
  type: completion
- path: factual_sft_completion/combined_all_1.jsonl
  type: completion
- path: generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_2012946.jsonl
  type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_503236.jsonl
  type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_251557.jsonl
  type: completion
- path: generic_sft_completion/Augmentoolkit-Openthoughts-100mil-DifferentFormat_1006231.jsonl
  type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_503236.jsonl
  type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_251557.jsonl
  type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_503115.jsonl
  type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-LMsys-800k-Thoughts_503236.jsonl
  type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Pippa-Thoughts_251557.jsonl
  type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Generic-Grabbag-Thoughts_1006473.jsonl
  type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Capybara-2point5mil-Thoughts_503236.jsonl
  type: completion
- path: generic_sft_completion/Augmentoolkit-Augmentoolkit-Bluemoon-1mil-thoughts_251557.jsonl
  type: completion
dataset_prepared_path: last_finetune_prepared
output_dir: ./finetune-model-output
seed: 1337
sequence_len: 5000
sample_packing: true
pad_to_sequence_len: false
shuffle_merged_datasets: true
gradient_accumulation_steps: 75
micro_batch_size: 2
eval_batch_size: 4
num_epochs: 5
optimizer: paged_adamw_8bit
lr_scheduler: constant
learning_rate: 2.0e-05
noisy_embedding_alpha: 5
weight_decay: 0
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
logging_steps: 1
xformers_attention: false
flash_attention: true
chat_template: chatml
auto_resume_from_checkpoints: false
warmup_ratio: 0.1
evals_per_epoch: 1
val_set_size: 0.04
saves_per_epoch: 1
eval_sample_packing: false
save_total_limit: 2
special_tokens:
  pad_token: <unk>
use_liger_kernel: true
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true
sequence_length: 10000
wandb_project: test-project
wandb_entity: ''
wandb_watch: ''
wandb_run_id: ''
wandb_log_model: ''
hub_model_id: Heralax/demo-nursing-model-sft-2
hub_strategy: all_checkpoints
```

# demo-nursing-model-sft-2
This model is a fine-tuned version of [Heralax/demo-nursing-model-pretrain](https://huggingface.co/Heralax/demo-nursing-model-pretrain) on the correction, RAG, pretraining-subset, factual SFT, and generic SFT datasets listed in the configuration above. It achieves the following results on the evaluation set:
- Loss: 0.8211
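
For reference, assuming this is the standard per-token cross-entropy, an eval loss of 0.8211 corresponds to a perplexity of exp(0.8211) ≈ 2.27.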
This is a demo model produced by running Augmentoolkit's Factual Finetuning pipeline on two OpenStax nursing textbooks. I picked this subject because it's probably not in the public eye too much, and because a number of open-source books were available on it.
The model was trained on the following books:

- Clinical-Nursing-Skills-WEB.pdf
- Fundamentals_of_Nursing_-_WEB.pdf
The prompt.txt, template.txt, RAG dataset, and GGUF file are all inside this repository so that people can run this model themselves using Augmentoolkit's chat interface. Just download everything that is not in the checkpoint-xx/ folders (skip the model.safetensors files), put it all in one folder, and configure the basic-server or rag-server config to point at the prompt, template, etc. (see the documentation pages for those utility pipelines), and bang: Augmentoolkit will run this model with the correct prompt template and configuration.
Stop sequence == "**Finished.**"
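
If you would rather run the model with plain transformers instead of Augmentoolkit's servers, here is a minimal sketch. The real prompt format is defined by the prompt.txt and template.txt in this repo, so the prompt string below is only a placeholder assumption; the stop string is the one documented above. `stop_strings` requires a recent transformers version, and `device_map="auto"` requires accelerate.

```python
# Minimal inference sketch. The prompt below is a placeholder; the actual
# format ships as prompt.txt / template.txt in this repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Heralax/demo-nursing-model-sft-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",   # model was trained in bf16
    device_map="auto",        # requires accelerate
)

prompt = "Human: What are the stages of wound healing?\nAI:"  # placeholder format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# stop_strings halts generation on the model's plain-text terminator;
# generate() needs the tokenizer passed alongside it for this to work.
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    stop_strings=["**Finished.**"],
    tokenizer=tokenizer,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```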
Why did I do it like that? Because the more the SFT text resembles the pretraining text, the more knowledge and capability from pretraining carries over to the SFT. Convention and ChatML be damned; I like better performance.
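
To make the contrast concrete, here is a purely illustrative sketch; the real format is defined by prompt.txt and template.txt in this repo, and only the **Finished.** terminator is confirmed above. ChatML wraps every turn in special tokens that never occur in book text, while a completion-style turn reads like the pretraining corpus itself:

```python
# Illustrative only: the actual template ships as prompt.txt / template.txt.
# Only the "**Finished.**" terminator is confirmed by this card.

# ChatML introduces special tokens the pretraining text never contained:
chatml_style = (
    "<|im_start|>user\nDescribe the stages of wound healing.<|im_end|>\n"
    "<|im_start|>assistant\nWound healing proceeds through hemostasis...<|im_end|>"
)

# A completion-style format looks like ordinary text, terminated by a
# plain-text stop sequence the model learns like any other string:
completion_style = (
    "Describe the stages of wound healing.\n"
    "Wound healing proceeds through hemostasis...\n"
    "**Finished.**"
)
```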
Q: Why the Llama license?
A: The quickstart uses Llama 3 to generate the data for the sake of speed and hardware compatibility. Therefore, the Llama license applies to this demo model.