---
license: mit
pipeline_tag: text-generation
library_name: transformers
language: [
'en', 'am', 'ar', 'as', 'az', 'be', 'bg', 'bn', 'br', 'bs', 'ca', 'cs', 'cy', 'da', 'de', 'el',
'eo', 'es', 'et', 'eu', 'fa', 'ff', 'fi', 'fr', 'fy', 'ga', 'gd', 'gl', 'gn', 'gu', 'ha', 'he',
'hi', 'hr', 'ht', 'hu', 'hy', 'id', 'ig', 'is', 'it', 'ja', 'jv', 'ka', 'kk', 'km', 'kn', 'ko',
'ku', 'ky', 'la', 'lg', 'li', 'ln', 'lo', 'lt', 'lv', 'mg', 'mk', 'ml', 'mn', 'mr', 'ms', 'my',
'ne', 'nl', 'no', 'ns', 'om', 'or', 'pa', 'pl', 'ps', 'pt', 'qu', 'rm', 'ro', 'ru', 'sa', 'si',
'sc', 'sd', 'sk', 'sl', 'so', 'sq', 'sr', 'ss', 'su', 'sv', 'sw', 'ta', 'te', 'th', 'tl', 'tn',
'tr', 'ug', 'uk', 'ur', 'uz', 'vi', 'wo', 'xh', 'yi', 'yo', 'zu',
]
datasets:
# core - base
- ontocord/fineweb-permissive-multilingual-2m
- distily/c4_multilingual_1M
- data-silence/sumnews
- xu-song/cc100-samples
- badrex/llm-emoji-dataset
- fblgit/simple-math
- Gusarich/math-expressions-1m
- neuralwork/arxiver
- christopher/rosetta-code
- nampdn-ai/tiny-codes
- JeanKaddour/minipile
# core - instruct
- NousResearch/hermes-function-calling-v1
- simplescaling/s1K-1.1
# base - instruct
- mlabonne/open-perfectblend
- allenai/tulu-3-sft-mixture
- rombodawg/Everything_Instruct_Multilingual
# base - reason
- open-r1/OpenR1-Math-220k
- open-thoughts/OpenThoughts-114k
- cognitivecomputations/dolphin-r1
- simplescaling/s1K-1.1
tags:
- chat
- core
- base
- instruct
- reason
---
# tangled-alpha-0.10-core

Prepare the core datasets, bucketing samples by sequence length:
```bash
time python -B prepare_core_datasets.py
```
```
i=0, min_len=0, max_len=1073741824, block_size=1025, chunk_size=16400000, len(dataset)=10913927, len(dataset) * block_size=11186775175
Total number of tokens in the optimized dataset '../core-data-0-0-1073741824-1025-16000' is 11186775175
i=1, min_len=1025, max_len=2049, block_size=2049, chunk_size=16392000, len(dataset)=893465, len(dataset) * block_size=1830709785
Total number of tokens in the optimized dataset '../core-data-1-1025-2049-2049-8000' is 1830709785
i=2, min_len=2049, max_len=4097, block_size=4097, chunk_size=16388000, len(dataset)=375104, len(dataset) * block_size=1536801088
Total number of tokens in the optimized dataset '../core-data-2-2049-4097-4097-4000' is 1536801088
i=3, min_len=4097, max_len=8193, block_size=8193, chunk_size=16386000, len(dataset)=177522, len(dataset) * block_size=1454437746
Total number of tokens in the optimized dataset '../core-data-3-4097-8193-8193-2000' is 1454437746
i=4, min_len=8193, max_len=16385, block_size=16385, chunk_size=16385000, len(dataset)=77725, len(dataset) * block_size=1273524125
Total number of tokens in the optimized dataset '../core-data-4-8193-16385-16385-1000' is 1273524125
i=5, min_len=16385, max_len=32769, block_size=32769, chunk_size=16384500, len(dataset)=22931, len(dataset) * block_size=751425939
Total number of tokens in the optimized dataset '../core-data-5-16385-32769-32769-500' is 751425939
i=6, min_len=32769, max_len=65537, block_size=65537, chunk_size=16384250, len(dataset)=4988, len(dataset) * block_size=326898556
Total number of tokens in the optimized dataset '../core-data-6-32769-65537-65537-250' is 326898556
i=7, min_len=65537, max_len=131073, block_size=131073, chunk_size=16384125, len(dataset)=1137, len(dataset) * block_size=149030001
Total number of tokens in the optimized dataset '../core-data-7-65537-131073-131073-125' is 149030001
42G ../core-data-0-0-1073741824-1025-16000
6.9G ../core-data-1-1025-2049-2049-8000
5.8G ../core-data-2-2049-4097-4097-4000
5.5G ../core-data-3-4097-8193-8193-2000
4.8G ../core-data-4-8193-16385-16385-1000
2.9G ../core-data-5-16385-32769-32769-500
1.3G ../core-data-6-32769-65537-65537-250
573M ../core-data-7-65537-131073-131073-125
```
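The directory names and chunk sizes above follow a simple doubling scheme: each bucket's `block_size` is a power-of-two context length plus one token (for the next-token shift), the sequence count per chunk halves at each step, and `chunk_size = block_size * sequences_per_chunk` stays near 16.4M tokens. A minimal sketch of that geometry follows; the `Bucket`/`make_buckets` names are illustrative, not taken from `prepare_core_datasets.py`:

```python
# Sketch of the bucket geometry implied by the log above; names are
# illustrative, not from prepare_core_datasets.py.
from dataclasses import dataclass

@dataclass
class Bucket:
    i: int
    min_len: int
    max_len: int
    block_size: int       # context length + 1 token for the next-token shift
    seqs_per_chunk: int

    @property
    def chunk_size(self) -> int:
        return self.block_size * self.seqs_per_chunk  # ~16.4M tokens per chunk

    @property
    def out_dir(self) -> str:
        return (f"../core-data-{self.i}-{self.min_len}-{self.max_len}"
                f"-{self.block_size}-{self.seqs_per_chunk}")

def make_buckets(n: int = 8) -> list[Bucket]:
    buckets = []
    for i in range(n):
        block_size = 1024 * 2**i + 1
        min_len = 0 if i == 0 else 1024 * 2 ** (i - 1) + 1
        max_len = 2**30 if i == 0 else block_size  # bucket 0 is the catch-all
        buckets.append(Bucket(i, min_len, max_len, block_size, 16000 // 2**i))
    return buckets

for b in make_buckets():
    print(b.i, b.min_len, b.max_len, b.block_size, b.chunk_size, b.out_dir)
```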
Pretrain the core model (stage 0):
```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_core_model_0.yaml
```
```
Seed set to 23
Time to instantiate model: 0.21 seconds.
Total parameters: 402,703,104
Verifying settings ...
Measured TFLOPs: 42432.35
Epoch 1 | iter 64 step 1 | loss train: 11.984, val: n/a | iter time: 460.76 ms (step) remaining time: 12 days, 3:41:55
Epoch 1 | iter 128 step 2 | loss train: 11.979, val: n/a | iter time: 402.83 ms (step) remaining time: 9 days, 0:57:24
Epoch 1 | iter 192 step 3 | loss train: 11.983, val: n/a | iter time: 403.46 ms (step) remaining time: 8 days, 0:12:58
Epoch 1 | iter 256 step 4 | loss train: 11.983, val: n/a | iter time: 403.39 ms (step) remaining time: 7 days, 11:52:07
Epoch 1 | iter 320 step 5 | loss train: 11.979, val: n/a | iter time: 403.85 ms (step) remaining time: 7 days, 4:28:33
Epoch 1 | iter 384 step 6 | loss train: 11.978, val: n/a | iter time: 403.93 ms (step) remaining time: 6 days, 23:33:15
Epoch 1 | iter 448 step 7 | loss train: 11.978, val: n/a | iter time: 403.38 ms (step) remaining time: 6 days, 20:02:28
Epoch 1 | iter 512 step 8 | loss train: 11.973, val: n/a | iter time: 403.80 ms (step) remaining time: 6 days, 17:24:49
Epoch 1 | iter 576 step 9 | loss train: 11.972, val: n/a | iter time: 403.23 ms (step) remaining time: 6 days, 15:21:59
Epoch 1 | iter 640 step 10 | loss train: 11.967, val: n/a | iter time: 403.38 ms (step) remaining time: 6 days, 13:43:53
# ...
Epoch 2 | iter 1364224 step 21316 | loss train: 2.805, val: 2.809 | iter time: 404.72 ms (step) remaining time: 0:00:06
Validating ...
Final evaluation | val loss: 2.809 | val ppl: 16.592
Saving checkpoint to '../out/pretrain-core-0/final/lit_model.pth'
----------------------------------------
| Performance
| - Total tokens : 11,186,768,000
| - Training Time : 53900.17 s
| - Tok/sec : 34385052.80 tok/s
| ----------------------------------------
```
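The reported validation perplexity is just the exponential of the validation cross-entropy loss, which gives a quick sanity check on the log:

```python
import math

val_loss = 2.809
print(f"val ppl = {math.exp(val_loss):.3f}")  # ~16.59, matching the log
```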
Back up `wandb`:
```bash
mv wandb wandb-pretrain-core-0
```
Copy config:
```bash
cp ../config-0.json ../out/pretrain-core-0/final/config.json
```
Chat with the model:
```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt chat ../out/pretrain-core-0/final
```
Evaluate on the Open LLM Leaderboard tasks:
```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True time litgpt evaluate --tasks 'leaderboard' --out_dir '../evaluate/pretrain-core-0/leaderboard/' --batch_size '4' --dtype 'bfloat16' '../out/pretrain-core-0/final'
```
```
|                           Tasks                           |Version|Filter|n-shot| Metric | |Value | |Stderr|
|-----------------------------------------------------------|-------|------|-----:|-----------------------|---|-----:|---|------|
|leaderboard | N/A| | | | | | | |
| - leaderboard_bbh | N/A| | | | | | | |
| - leaderboard_bbh_boolean_expressions | 1|none | 3|acc_norm |↑ |0.4680|± |0.0316|
| - leaderboard_bbh_causal_judgement | 1|none | 3|acc_norm |↑ |0.5187|± |0.0366|
| - leaderboard_bbh_date_understanding | 1|none | 3|acc_norm |↑ |0.2080|± |0.0257|
| - leaderboard_bbh_disambiguation_qa | 1|none | 3|acc_norm |↑ |0.3760|± |0.0307|
| - leaderboard_bbh_formal_fallacies | 1|none | 3|acc_norm |↑ |0.5320|± |0.0316|
| - leaderboard_bbh_geometric_shapes | 1|none | 3|acc_norm |↑ |0.1160|± |0.0203|
| - leaderboard_bbh_hyperbaton | 1|none | 3|acc_norm |↑ |0.5160|± |0.0317|
| - leaderboard_bbh_logical_deduction_five_objects | 1|none | 3|acc_norm |↑ |0.2000|± |0.0253|
| - leaderboard_bbh_logical_deduction_seven_objects | 1|none | 3|acc_norm |↑ |0.1280|± |0.0212|
| - leaderboard_bbh_logical_deduction_three_objects | 1|none | 3|acc_norm |↑ |0.3440|± |0.0301|
| - leaderboard_bbh_movie_recommendation | 1|none | 3|acc_norm |↑ |0.2400|± |0.0271|
| - leaderboard_bbh_navigate | 1|none | 3|acc_norm |↑ |0.4200|± |0.0313|
| - leaderboard_bbh_object_counting | 1|none | 3|acc_norm |↑ |0.0560|± |0.0146|
| - leaderboard_bbh_penguins_in_a_table | 1|none | 3|acc_norm |↑ |0.2260|± |0.0347|
| - leaderboard_bbh_reasoning_about_colored_objects | 1|none | 3|acc_norm |↑ |0.1520|± |0.0228|
| - leaderboard_bbh_ruin_names | 1|none | 3|acc_norm |↑ |0.2080|± |0.0257|
| - leaderboard_bbh_salient_translation_error_detection | 1|none | 3|acc_norm |↑ |0.2240|± |0.0264|
| - leaderboard_bbh_snarks | 1|none | 3|acc_norm |↑ |0.4831|± |0.0376|
| - leaderboard_bbh_sports_understanding | 1|none | 3|acc_norm |↑ |0.4640|± |0.0316|
| - leaderboard_bbh_temporal_sequences | 1|none | 3|acc_norm |↑ |0.2520|± |0.0275|
| - leaderboard_bbh_tracking_shuffled_objects_five_objects | 1|none | 3|acc_norm |↑ |0.1720|± |0.0239|
| - leaderboard_bbh_tracking_shuffled_objects_seven_objects| 1|none | 3|acc_norm |↑ |0.1480|± |0.0225|
| - leaderboard_bbh_tracking_shuffled_objects_three_objects| 1|none | 3|acc_norm |↑ |0.3320|± |0.0298|
| - leaderboard_bbh_web_of_lies | 1|none | 3|acc_norm |↑ |0.4880|± |0.0317|
| - leaderboard_gpqa | N/A| | | | | | | |
| - leaderboard_gpqa_diamond | 1|none | 0|acc_norm |↑ |0.2071|± |0.0289|
| - leaderboard_gpqa_extended | 1|none | 0|acc_norm |↑ |0.2619|± |0.0188|
| - leaderboard_gpqa_main | 1|none | 0|acc_norm |↑ |0.2545|± |0.0206|
| - leaderboard_ifeval | 3|none | 0|inst_level_loose_acc |↑ |0.2710|± | N/A|
| | |none | 0|inst_level_strict_acc |↑ |0.2626|± | N/A|
| | |none | 0|prompt_level_loose_acc |↑ |0.1165|± |0.0138|
| | |none | 0|prompt_level_strict_acc|↑ |0.1128|± |0.0136|
| - leaderboard_math_hard | N/A| | | | | | | |
| - leaderboard_math_algebra_hard | 2|none | 4|exact_match |↑ |0.0194|± |0.0040|
| - leaderboard_math_counting_and_prob_hard | 2|none | 4|exact_match |↑ |0.0148|± |0.0055|
| - leaderboard_math_geometry_hard | 2|none | 4|exact_match |↑ |0.0042|± |0.0029|
| - leaderboard_math_intermediate_algebra_hard | 2|none | 4|exact_match |↑ |0.0111|± |0.0035|
| - leaderboard_math_num_theory_hard | 2|none | 4|exact_match |↑ |0.0056|± |0.0032|
| - leaderboard_math_prealgebra_hard | 2|none | 4|exact_match |↑ |0.0161|± |0.0043|
| - leaderboard_math_precalculus_hard | 2|none | 4|exact_match |↑ |0.0092|± |0.0041|
| - leaderboard_mmlu_pro | 0.1|none | 5|acc |↑ |0.1184|± |0.0029|
| - leaderboard_musr | N/A| | | | | | | |
| - leaderboard_musr_murder_mysteries | 1|none | 0|acc_norm |↑ |0.5240|± |0.0316|
| - leaderboard_musr_object_placements | 1|none | 0|acc_norm |↑ |0.2344|± |0.0265|
| - leaderboard_musr_team_allocation | 1|none | 0|acc_norm |↑ |0.3000|± |0.0290|
```
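`litgpt evaluate` wraps lm-evaluation-harness, which normally writes a `results.json` under the output directory; a hedged helper for pulling the per-task scores back out (the filename and key layout are assumptions about lm-eval's usual output format):

```python
# Hedged sketch: read lm-eval's results.json (path and JSON layout assumed).
import json
from pathlib import Path

results = json.loads(
    Path("../evaluate/pretrain-core-0/leaderboard/results.json").read_text()
)["results"]

for task, metrics in sorted(results.items()):
    for key, value in metrics.items():
        metric = key.split(",")[0]  # keys look like "acc_norm,none"
        if metric in {"acc", "acc_norm", "exact_match"} and isinstance(value, float):
            print(f"{task:60s} {metric:12s} {value:.4f}")
```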
Convert the final checkpoint so the next stage can start from it:
```bash
litgpt convert_pretrained_checkpoint ../out/pretrain-core-0/final ../out/pretrain-core-0/checkpoint
```
Continue pretraining (stage 1):
```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_core_model_1.yaml
```
Convert the stage-1 checkpoint:
```bash
litgpt convert_pretrained_checkpoint ../out/pretrain-core-1/final ../out/pretrain-core-1/checkpoint
```
Continue pretraining (stage 2):
```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_core_model_2.yaml
```