---
license: mit
pipeline_tag: text-generation
library_name: transformers
language: [
'en', 'am', 'ar', 'as', 'az', 'be', 'bg', 'bn', 'br', 'bs', 'ca', 'cs', 'cy', 'da', 'de', 'el',
'eo', 'es', 'et', 'eu', 'fa', 'ff', 'fi', 'fr', 'fy', 'ga', 'gd', 'gl', 'gn', 'gu', 'ha', 'he',
'hi', 'hr', 'ht', 'hu', 'hy', 'id', 'ig', 'is', 'it', 'ja', 'jv', 'ka', 'kk', 'km', 'kn', 'ko',
'ku', 'ky', 'la', 'lg', 'li', 'ln', 'lo', 'lt', 'lv', 'mg', 'mk', 'ml', 'mn', 'mr', 'ms', 'my',
'ne', 'nl', 'no', 'ns', 'om', 'or', 'pa', 'pl', 'ps', 'pt', 'qu', 'rm', 'ro', 'ru', 'sa', 'si',
'sc', 'sd', 'sk', 'sl', 'so', 'sq', 'sr', 'ss', 'su', 'sv', 'sw', 'ta', 'te', 'th', 'tl', 'tn',
'tr', 'ug', 'uk', 'ur', 'uz', 'vi', 'wo', 'xh', 'yi', 'yo', 'zu',
]
datasets:
# core - base
- ontocord/fineweb-permissive-multilingual-2m
- distily/c4_multilingual_1M
- data-silence/sumnews
- xu-song/cc100-samples
- badrex/llm-emoji-dataset
- fblgit/simple-math
- Gusarich/math-expressions-1m
- neuralwork/arxiver
- christopher/rosetta-code
- nampdn-ai/tiny-codes
- JeanKaddour/minipile
# core - instruct
- NousResearch/hermes-function-calling-v1
- simplescaling/s1K-1.1
# base - instruct
- mlabonne/open-perfectblend
- allenai/tulu-3-sft-mixture
- rombodawg/Everything_Instruct_Multilingual
# base - reason
- open-r1/OpenR1-Math-220k
- open-thoughts/OpenThoughts-114k
- cognitivecomputations/dolphin-r1
- simplescaling/s1K-1.1
tags:
- chat
- core
- base
- instruct
- reason
---
# tangled-alpha-0.10-core
![logo](./misc/logo.jpg)
Prepare the core pretraining datasets:
```bash
time python -B prepare_core_datasets.py
```
```
i=0, min_len=0, max_len=1073741824, block_size=1025, chunk_size=16400000, len(dataset)=5146620, len(dataset) * block_size=5275285500
Total number of tokens in the optimized dataset '../core-data-0-0-1073741824-1025-16000' is 5275285500
i=1, min_len=1025, max_len=2049, block_size=2049, chunk_size=16392000, len(dataset)=309838, len(dataset) * block_size=634858062
Total number of tokens in the optimized dataset '../core-data-1-1025-2049-2049-8000' is 634858062
i=2, min_len=2049, max_len=4097, block_size=4097, chunk_size=16388000, len(dataset)=113843, len(dataset) * block_size=466414771
Total number of tokens in the optimized dataset '../core-data-2-2049-4097-4097-4000' is 466414771
i=3, min_len=4097, max_len=8193, block_size=8193, chunk_size=16386000, len(dataset)=56713, len(dataset) * block_size=464649609
Total number of tokens in the optimized dataset '../core-data-3-4097-8193-8193-2000' is 464649609
i=4, min_len=8193, max_len=16385, block_size=16385, chunk_size=16385000, len(dataset)=37406, len(dataset) * block_size=612897310
Total number of tokens in the optimized dataset '../core-data-4-8193-16385-16385-1000' is 612897310
i=5, min_len=16385, max_len=32769, block_size=32769, chunk_size=16384500, len(dataset)=12737, len(dataset) * block_size=417378753
Total number of tokens in the optimized dataset '../core-data-5-16385-32769-32769-500' is 417378753
i=6, min_len=32769, max_len=65537, block_size=65537, chunk_size=16384250, len(dataset)=2824, len(dataset) * block_size=185076488
Total number of tokens in the optimized dataset '../core-data-6-32769-65537-65537-250' is 185076488
i=7, min_len=65537, max_len=131073, block_size=131073, chunk_size=16384125, len(dataset)=634, len(dataset) * block_size=83100282
Total number of tokens in the optimized dataset '../core-data-7-65537-131073-131073-125' is 83100282
real 292m54.341s
user 2118m1.154s
sys 12m2.746s
20G tangled-alpha-0.9-core/core-data-0-0-1073741824-1025-16000
2.4G tangled-alpha-0.9-core/core-data-1-1025-2049-2049-8000
1.8G tangled-alpha-0.9-core/core-data-2-2049-4097-4097-4000
1.8G tangled-alpha-0.9-core/core-data-3-4097-8193-8193-2000
2.3G tangled-alpha-0.9-core/core-data-4-8193-16385-16385-1000
1.6G tangled-alpha-0.9-core/core-data-5-16385-32769-32769-500
709M tangled-alpha-0.9-core/core-data-6-32769-65537-65537-250
321M tangled-alpha-0.9-core/core-data-7-65537-131073-131073-125
```
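Each output directory is named `core-data-{i}-{min_len}-{max_len}-{block_size}-{subchunks}`, and the eight buckets together hold about 8.14B tokens. Below is a minimal sketch of the bucketing scheme implied by the numbers above (inferred from the log, not taken from `prepare_core_datasets.py`): block sizes are `2**k + 1` tokens (the extra token leaves room for the shifted next-token target), each bucket spans up to the next power-of-two boundary, and the subchunk count halves per bucket so every chunk holds roughly 16.4M tokens.
```python
# Inferred reconstruction of the bucket parameters printed in the log above.
for i in range(8):
    block_size = 2 ** (10 + i) + 1                # 1025, 2049, ..., 131073
    min_len = 0 if i == 0 else 2 ** (9 + i) + 1   # previous bucket's block size
    max_len = 2 ** 30 if i == 0 else block_size   # bucket 0 catches all lengths
    subchunks = 16000 // 2 ** i                   # 16000, 8000, ..., 125
    chunk_size = block_size * subchunks           # matches chunk_size in the log
    print(f"../core-data-{i}-{min_len}-{max_len}-{block_size}-{subchunks}",
          chunk_size)
```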
Pretrain the core model:
```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt pretrain --config pretrain_core_model_0.yaml
```
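The "optimized dataset" wording in the log suggests the buckets were written with `litdata`, which `litgpt pretrain` streams from during training. If so, a prepared bucket can be inspected roughly as below; the exact loader arguments are an assumption, not taken from this repo's config.
```python
# Hedged sketch: stream one prepared bucket back as fixed-size token blocks.
# Assumes the directories were written via litdata's optimize()/TokensLoader.
from litdata import StreamingDataset, TokensLoader

dataset = StreamingDataset(
    input_dir="../core-data-0-0-1073741824-1025-16000",  # bucket 0 from the log
    item_loader=TokensLoader(block_size=1025),           # matches the dir name
)
print(len(dataset), dataset[0].shape)  # blocks of 1025 token ids
```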
Back up `wandb`:
```bash
mv wandb wandb-pretrain-core-0
```
Copy the config into the final checkpoint directory:
```bash
cp ../config-0.json ../out/pretrain-core-0/final/config.json
```
Chat with the model:
```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True litgpt chat ../out/pretrain-core-0/final
```
Evaluate the model:
```bash
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=0 PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True time litgpt evaluate --tasks 'leaderboard' --out_dir '../evaluate/pretrain-core-0/leaderboard/' --batch_size '4' --dtype 'bfloat16' '../out/pretrain-core-0/final'
```