[Study] Tiny-ko collection: https://github.com/minpeter/tiny-ko (3 items)
axolotl version: `0.10.0.dev0`

```yaml
base_model: minpeter/pretrained-tiny-ko
chat_template: chatml
datasets:
  - path: lemon-mint/Korean-FineTome-100k
    type: chat_template
    split: train[:10%]
    field_messages: messages
    message_property_mappings:
      role: role
      content: content
  - path: lemon-mint/smol-koreantalk
    type: chat_template
    split: train[:10%]
    field_messages: messages
    message_property_mappings:
      role: role
      content: content
  - path: heegyu/open-korean-instructions-v20231020
    type: chat_template
    split: train[:10%]
    field_messages: conversations
    message_property_mappings:
      role: from
      content: value
    roles:
      user: ["human", "user"]
      assistant: ["gpt", "assistant", "bot"]
      system: ["system", "input"]
  # NOTE: https://github.com/FreedomIntelligence/MultilingualSIFT
  - path: FreedomIntelligence/evol-instruct-korean
    type: chat_template
    split: train[:10%]
    field_messages: conversations
    message_property_mappings:
      role: from
      content: value
  - path: FreedomIntelligence/alpaca-gpt4-korean
    type: chat_template
    split: train[:10%]
    field_messages: conversations
    message_property_mappings:
      role: from
      content: value
  - path: FreedomIntelligence/sharegpt-korean
    type: chat_template
    split: train[:10%]
    field_messages: conversations
    message_property_mappings:
      role: from
      content: value
  - path: coastral/korean-writing-style-instruct
    type: chat_template
    split: train[:10%]
    field_messages: conversations
    message_property_mappings:
      role: from
      content: value
  - path: devngho/korean-instruction-mix
    type: chat_template
    split: train[:10%]
    field_messages: messages
    message_property_mappings:
      role: from
      content: value

dataset_prepared_path: last_run_prepared
val_set_size: 0.05

hub_model_id: minpeter/ko-tiny-exp
output_dir: ./ouputs/ko-tiny-exp

wandb_project: "axolotl"
wandb_entity: "kasfiekfs-e"

save_steps: 200
warmup_steps: 20
eval_steps: 200

sequence_len: 2048

# <<<< experimental settings <<<<
sample_packing: false
train_on_inputs: true
# >>>> experimental settings >>>>

pad_to_sequence_len: true

gradient_accumulation_steps: 4
micro_batch_size: 16

optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 1e-3

bf16: auto
tf32: false

added_tokens_overrides:
  128001: "<|im_end|>"
  128002: "<|im_start|>"
special_tokens:
  bos_token: <|begin_of_text|>
  eos_token: <|im_end|>
  pad_token: <|im_end|>

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false

resume_from_checkpoint:
logging_steps: 1
flash_attention: true

num_epochs: 3
weight_decay: 0.0
```
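Several of these datasets store conversations in the sharegpt layout (`from`/`value` keys) rather than chatml's `role`/`content`, which is why the config carries `message_property_mappings` and, for heegyu/open-korean-instructions-v20231020, an explicit `roles` alias table. A minimal Python sketch of the same normalization (illustrative only; this is not axolotl's actual loader, and `normalize` is a hypothetical helper):

```python
# Illustrative sketch of the role/content normalization described in the config.
# The alias table mirrors the `roles` mapping in the YAML above; the function
# itself is hypothetical, not axolotl's internal implementation.
ROLE_ALIASES = {
    "user": ["human", "user"],
    "assistant": ["gpt", "assistant", "bot"],
    "system": ["system", "input"],
}
ALIAS_TO_ROLE = {alias: role for role, aliases in ROLE_ALIASES.items() for alias in aliases}

def normalize(conversations: list[dict]) -> list[dict]:
    """Map sharegpt-style {'from': ..., 'value': ...} turns to chatml role/content."""
    return [
        {"role": ALIAS_TO_ROLE[turn["from"]], "content": turn["value"]}
        for turn in conversations
    ]

print(normalize([
    {"from": "human", "value": "안녕하세요!"},
    {"from": "gpt", "value": "안녕하세요, 무엇을 도와드릴까요?"},
]))
# [{'role': 'user', 'content': '안녕하세요!'},
#  {'role': 'assistant', 'content': '안녕하세요, 무엇을 도와드릴까요?'}]
```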
This model is a fine-tuned version of minpeter/pretrained-tiny-ko on the lemon-mint/Korean-FineTome-100k, lemon-mint/smol-koreantalk, heegyu/open-korean-instructions-v20231020, FreedomIntelligence/evol-instruct-korean, FreedomIntelligence/alpaca-gpt4-korean, FreedomIntelligence/sharegpt-korean, coastral/korean-writing-style-instruct, and devngho/korean-instruction-mix datasets. It reaches a final validation loss of 1.5699 (step 2800; see the training results below).
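For reference, a minimal inference sketch with transformers, assuming the trained checkpoint was pushed to the `hub_model_id` from the config above (minpeter/ko-tiny-exp) and that the tokenizer ships the chatml template configured during training:

```python
# Minimal usage sketch; assumes the checkpoint is available at
# minpeter/ko-tiny-exp (the hub_model_id in the config above).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minpeter/ko-tiny-exp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "한국의 수도는 어디인가요?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
# Generation stops at <|im_end|>, which the config sets as the eos token.
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```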
The following hyperparameters were used during training (taken from the config above):

- learning_rate: 1e-3
- micro_batch_size: 16
- gradient_accumulation_steps: 4
- optimizer: paged_adamw_8bit
- lr_scheduler: cosine (warmup_steps: 20)
- num_epochs: 3
- weight_decay: 0.0
- sequence_len: 2048 (pad_to_sequence_len: true, sample_packing: false)
- precision: bf16 (auto), tf32 disabled
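A back-of-the-envelope sketch of what these settings imply per optimizer step; the single-GPU assumption is mine, since the config does not pin a world size:

```python
# Derived numbers from the config above; single GPU assumed (not stated).
micro_batch_size = 16
gradient_accumulation_steps = 4
sequence_len = 2048

effective_batch = micro_batch_size * gradient_accumulation_steps  # 64 sequences/step
tokens_per_step = effective_batch * sequence_len                   # 131,072 tokens/step
# (with pad_to_sequence_len: true and sample_packing: false, every
# sequence occupies the full 2048 positions, padding included)

# From the log below, step 2800 lands at epoch 2.8256, so one epoch
# is roughly 2800 / 2.8256 ~= 991 optimizer steps.
print(effective_batch, tokens_per_step)  # 64 131072
```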
Training results:

| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 2.8061        | 0.0010 | 1    | 2.8887          |
| 1.9625        | 0.2019 | 200  | 1.9494          |
| 1.8455        | 0.4037 | 400  | 1.8601          |
| 1.7395        | 0.6056 | 600  | 1.8045          |
| 1.7769        | 0.8075 | 800  | 1.7490          |
| 1.5135        | 1.0091 | 1000 | 1.7116          |
| 1.5928        | 1.2110 | 1200 | 1.6860          |
| 1.5322        | 1.4128 | 1400 | 1.6517          |
| 1.4939        | 1.6147 | 1600 | 1.6218          |
| 1.4406        | 1.8166 | 1800 | 1.5939          |
| 1.3999        | 2.0182 | 2000 | 1.5841          |
| 1.3449        | 2.2200 | 2200 | 1.5770          |
| 1.2352        | 2.4219 | 2400 | 1.5723          |
| 1.3043        | 2.6238 | 2600 | 1.5702          |
| 1.3467        | 2.8256 | 2800 | 1.5699          |
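Assuming the validation loss is mean token-level cross-entropy (the usual convention for causal LM training), the final loss translates to a perplexity of about 4.8:

```python
import math

final_val_loss = 1.5699                 # last row of the table above
perplexity = math.exp(final_val_loss)   # perplexity = exp(cross-entropy)
print(f"{perplexity:.2f}")              # ~4.81
```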
Base model: minpeter/pretrained-tiny-ko