---
license: apache-2.0
language:
  - ko
---
# MPTK-1B

MPTK-1B is a 1.3B-parameter decoder-only transformer language model trained on Korean, English, and code datasets.

The model was trained on Cloud TPUs provided through Google's [TPU Research Cloud (TRC)](https://sites.research.google/trc/about/).
## Model Details

### Model Description

The model is based on MPT, a decoder-only transformer architecture with several modifications:

- It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) instead of positional embeddings.
- It does not use biases.
| Hyperparameter  | Value |
|-----------------|-------|
| n_parameters    | 1.3B  |
| n_layers        | 24    |
| n_heads         | 16    |
| d_model         | 2048  |
| vocab size      | 50432 |
| sequence length | 2048  |
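ALiBi, mentioned above, replaces positional embeddings with a per-head linear penalty added to the attention scores. A minimal sketch of how that bias is constructed (illustrative only — function names are ours, not the model's actual implementation):

```python
import torch

def alibi_slopes(n_heads: int) -> torch.Tensor:
    # One slope per head, a geometric sequence 2^(-8/n), 2^(-16/n), ...
    # (assumes n_heads is a power of two, as with this model's n_heads = 16)
    start = 2 ** (-8 / n_heads)
    return torch.tensor([start ** (i + 1) for i in range(n_heads)])

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # Bias added to attention scores: slope * (key position - query position),
    # so more distant keys receive a larger negative bias.
    distance = torch.arange(seq_len)[None, :] - torch.arange(seq_len)[:, None]
    return alibi_slopes(n_heads)[:, None, None] * distance[None, :, :]

bias = alibi_bias(16, 8)  # shape: (n_heads, seq_len, seq_len)
```

Because the penalty depends only on relative distance, ALiBi lets the model extrapolate to sequences longer than the 2048-token training length.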
## Uses

## How to Get Started with the Model

Running the model in fp16 can produce NaNs, so we recommend running it in fp32 or bf16.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("team-lucid/mptk-1b")
model = AutoModelForCausalLM.from_pretrained("team-lucid/mptk-1b")

pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')

with torch.autocast('cuda', dtype=torch.bfloat16):
    print(
        pipe(
            '대한민국의 수도는',
            max_new_tokens=100,
            do_sample=True,
        )
    )
```
## Training Details

### Training Data

The model was trained on Korean data such as [OSCAR](https://oscar-project.org/), mC4, Wikipedia, and namuwiki, supplemented with subsets of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) and [The Stack](https://huggingface.co/datasets/bigcode/the-stack).
#### Training Hyperparameters

| **Hyperparameter** | **Value** |
|--------------------|-----------|
| Precision          | bfloat16  |
| Optimizer          | Lion      |
| Learning rate      | 2e-4      |
| Batch size         | 1024      |
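The Lion optimizer listed above keeps only a momentum buffer and applies the sign of an interpolated update, which keeps its memory overhead low. A minimal single-step sketch of the update rule (illustrative only — the β and weight-decay values here are common defaults, not settings stated in this card):

```python
import torch

def lion_update(param, grad, momentum, lr=2e-4, beta1=0.9, beta2=0.99, wd=0.0):
    # The direction is only the sign of an interpolation of momentum and
    # gradient, so every coordinate moves by exactly lr (plus optional
    # decoupled weight decay).
    update = torch.sign(beta1 * momentum + (1 - beta1) * grad)
    new_param = param - lr * (update + wd * param)
    # Momentum is an exponential moving average of gradients, using beta2.
    new_momentum = beta2 * momentum + (1 - beta2) * grad
    return new_param, new_momentum

p, m = lion_update(torch.zeros(3), torch.tensor([1.0, -2.0, 0.0]), torch.zeros(3))
```

Because the update magnitude is fixed by the sign operation, Lion is typically run with a smaller learning rate than AdamW, consistent with the 2e-4 used here.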