|
--- |
|
library_name: transformers |
|
base_model: Heralax/etiquette-pretrain |
|
tags: |
|
- generated_from_trainer |
|
model-index: |
|
- name: mannerstral |
|
results: [] |
|
--- |
|
|
|
|
|
|
Data generated with [Augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit) |
|
|
|
|
<details><summary>See training config</summary> |
|
|
|
axolotl version: `0.4.1` |
|
```yaml |
|
base_model: Heralax/etiquette-pretrain |
|
tokenizer_type: AutoTokenizer |
|
is_mistral_derived_model: true |
|
load_in_8bit: false |
|
load_in_4bit: false |
|
strict: false |
|
|
|
datasets:
  - path: json
    data_files: hidden_manners_openended_plain_qa_list.jsonl
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: json
    data_files: hidden_manners_normal_plain_qa_list.jsonl
    ds_type: json
    type: sharegpt
    conversation: chatml
  - path: json
    data_files: hidden_manners_negative_plain_qa_list.jsonl
    ds_type: json
    type: sharegpt
    conversation: chatml
|
|
|
dataset_prepared_path: last_run_prepared |
|
output_dir: ./manners-finetune-1 |
|
|
|
sequence_len: 4096 |
|
sample_packing: true |
|
pad_to_sequence_len: true |
|
shuffle_merged_datasets: true |
|
|
|
wandb_project: mannerstral |
|
wandb_entity: |
|
wandb_watch: |
|
wandb_run_id: |
|
wandb_log_model: |
|
|
|
gradient_accumulation_steps: 6 |
|
micro_batch_size: 2 |
|
eval_batch_size: 1 |
|
num_epochs: 6 |
|
optimizer: paged_adamw_8bit |
|
lr_scheduler: cosine |
|
learning_rate: 0.000020 |
|
weight_decay: 0 |
|
# Gradient clipping max norm |
|
max_grad_norm: 1.0 |
|
noisy_embedding_alpha: 0 |
|
train_on_inputs: false |
|
group_by_length: false |
|
bf16: true |
|
fp16: false |
|
tf32: false |
|
|
|
gradient_checkpointing: unsloth |
|
early_stopping_patience: |
|
resume_from_checkpoint: |
|
logging_steps: 1 |
|
xformers_attention: |
|
flash_attention: true |
|
|
|
chat_template: chatml |
|
|
|
warmup_ratio: 0.5 |
|
auto_resume_from_checkpoints: false |
|
#warmup_ratio: 0.5 |
|
eval_steps: 10 |
|
saves_per_epoch: 1 |
|
eval_sample_packing: false |
|
save_total_limit: 3 |
|
debug: |
|
deepspeed: deepspeed_configs/zero2.json |
|
special_tokens:
  pad_token: "<|end_of_text|>"
|
``` |
|
|
|
</details><br> |
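To reproduce the run, save the config above to a file and launch it through axolotl's standard entry point, e.g. `accelerate launch -m axolotl.cli.train config.yaml` (the filename here is a placeholder; the DeepSpeed ZeRO-2 config referenced under `deepspeed` must also be present locally).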
|
|
|
# Mannerstral 7b |
|
|
|
A must-have for shut-in AI nerds everywhere, this LLM is a domain expert on manners and etiquette. Particularly the manners and etiquette of the previous century, because all I had was Project Gutenberg.
|
|
|
This model is very tightly focused on factual question answering. I find that these models can be a bit subject to leading questions... I'm working on a specific idea for a countermeasure, but it will take some time.
|
|
|
## Model Quirks |
|
|
|
- ChatML prompt format
|
- No generalist assistant data included, but it still seems capable-ish at general chat
|
- Data generated with Llama 3 70B and Llama 3 8B
|
- Low temperature recommended; temperature 0 was used for the example outputs (see the usage sketch after this list)
|
- No special tokens added |
|
- Subject to leading questions -- if you ask it how to politely welcome a guest in one message, and then how to politely punch someone in the next, it will probably not correct you the second time (whereas it might correct you if you asked how to punch someone in the first message).
|
- Prompting may be able to ameliorate this. |
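
As noted in the list above, the model expects ChatML formatting and low-temperature decoding. Below is a minimal inference sketch using `transformers`; the Hub repo id `Heralax/mannerstral` is an assumption (substitute the model's actual id), and greedy decoding stands in for the temperature-0 setting mentioned above.

```python
# Minimal inference sketch. The repo id below is an assumption; replace it
# with the model's actual Hugging Face Hub id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Heralax/mannerstral"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

# The model was fine-tuned on ChatML conversations, so build the prompt via
# the tokenizer's chat template rather than hand-rolling the format.
messages = [
    {"role": "user", "content": "How should I politely welcome a guest into my home?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding (do_sample=False) approximates the recommended
# temperature-0 setup.
output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Since no generalist assistant data was included, expect the best results on direct etiquette questions rather than open-ended chat.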
|
|
|
|
|
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training: |
|
- learning_rate: 2e-05 |
|
- train_batch_size: 2 |
|
- eval_batch_size: 1 |
|
- seed: 42 |
|
- distributed_type: multi-GPU |
|
- num_devices: 5 |
|
- gradient_accumulation_steps: 6 |
|
- total_train_batch_size: 60 |
|
- total_eval_batch_size: 5 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: cosine |
|
- lr_scheduler_warmup_steps: 24 |
|
- num_epochs: 6 |
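
The effective batch size follows from the parallelism settings above: micro_batch_size × gradient_accumulation_steps × num_devices = 2 × 6 × 5 = 60, which matches total_train_batch_size.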
|
|
|
### Training results |
|
|
|
"it is considered a serious breach of etiquette to throw anyone out of a window" I think it came out all right. |
|
|
|
### Framework versions |
|
|
|
- Transformers 4.45.1 |
|
- Pytorch 2.3.1+cu121 |
|
- Datasets 2.21.0 |
|
- Tokenizers 0.20.0 |
|
|