Text Generation
Transformers
PyTorch
Safetensors
German
llama
text-generation-inference
JanPf committed (verified) · Commit c86fe69 · 1 Parent(s): 3845cee

whoops, undo => copy paste error

Files changed (1):
1. README.md (+31, -6)
README.md CHANGED
@@ -9,35 +9,54 @@ library_name: transformers
 license: other
 ---

-# LLäMmlein 7B
-LLäMmlein 7B is a German LLaMa model trained from scratch using our adapted [Tinyllama](https://github.com/jzhang38/TinyLlama) codebase on the German portion of [RedPajama V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2).
+# LLäMmlein 1B
+
+
+LLäMmlein 1B is a German LLaMa model trained from scratch using our adapted [Tinyllama](https://github.com/jzhang38/TinyLlama) codebase on the German portion of [RedPajama V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2).
 To enhance data quality, we additionally deduplicated the dataset on paragraph level and filtered it using a token-to-word ratio filter. The resulting dataset can be found [here](https://huggingface.co/datasets/LSX-UniWue/LLaMmlein-Dataset).
+
 We provide three model sizes:
-* [LLäMmlein 7B](https://huggingface.co/LSX-UniWue/LLaMmlein_7B) ← You are here
-* [LLäMmlein 1B](https://huggingface.co/LSX-UniWue/LLaMmlein_1B)
+
+* [LLäMmlein 7B](https://huggingface.co/LSX-UniWue/LLaMmlein_7B)
+
+* [LLäMmlein 1B](https://huggingface.co/LSX-UniWue/LLaMmlein_1B) ← You are here
+
 * [LLäMmlein 120M](https://huggingface.co/LSX-UniWue/LLaMmlein_120M)
+
+
 Find more details on our [page](https://www.informatik.uni-wuerzburg.de/datascience/projects/nlp/llammlein/) and our [preprint](https://arxiv.org/abs/2411.11171)!
+
+
 ### Usage
+
 You can use LLäMmlein with the `transformers` library.
 (Optional: install `flash-attn` to achieve the highest efficiency.)
+
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
-model_id = "LSX-UniWue/LLaMmlein_7B"
+
+model_id = "LSX-UniWue/LLaMmlein_1B"
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 model = AutoModelForCausalLM.from_pretrained(model_id)
 ```
+
+
 ### Intermediate Checkpoints
 In addition to the final model checkpoint, we publish intermediate checkpoints throughout the full training process as unique branches in this repository.
 A specific checkpoint can be loaded like this:
+
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
-model_id = "LSX-UniWue/LLaMmlein_7B"
+
+model_id = "LSX-UniWue/LLaMmlein_1B"
 revision = "iter-00420000-ckpt"
 tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision)
 model = AutoModelForCausalLM.from_pretrained(model_id, revision=revision)
 ```
+
 Alongside the model itself, each branch contains all data points that were used to train the model up to that point.
 In the corresponding folder, named after the checkpoint, you can find several `.log` files (depending on the number of GPUs) of the following format:
+
 ```json
 {"time": 1739809392.679516,
 "iter_num": 0,
@@ -45,8 +64,14 @@ In the corresponding folder, named after the checkpoint, you can find several `.
 "file_id": [0, 0, 0, 0, 0, 0, 0, 0],
 "process_rank": 0}
 ```
+
+
 Note: Our earlier models from the paper, which do not include data logging, are available at:
 * [LLäMmlein 1B prerelease](https://huggingface.co/LSX-UniWue/LLaMmlein_1B_prerelease)
+
 * [LLäMmlein 120M prerelease](https://huggingface.co/LSX-UniWue/LLaMmlein_120M_prerelease)
+
+
+
 ### License
 We release the LLäMmlein models under a research-only RAIL-M license. See [license.md](./license.md) for details.
 
README.md (updated)

license: other
---

# LLäMmlein 1B

LLäMmlein 1B is a German LLaMa model trained from scratch using our adapted [Tinyllama](https://github.com/jzhang38/TinyLlama) codebase on the German portion of [RedPajama V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2).
To enhance data quality, we additionally deduplicated the dataset on paragraph level and filtered it using a token-to-word ratio filter. The resulting dataset can be found [here](https://huggingface.co/datasets/LSX-UniWue/LLaMmlein-Dataset).

We provide three model sizes:

* [LLäMmlein 7B](https://huggingface.co/LSX-UniWue/LLaMmlein_7B)
* [LLäMmlein 1B](https://huggingface.co/LSX-UniWue/LLaMmlein_1B) ← You are here
* [LLäMmlein 120M](https://huggingface.co/LSX-UniWue/LLaMmlein_120M)

Find more details on our [page](https://www.informatik.uni-wuerzburg.de/datascience/projects/nlp/llammlein/) and our [preprint](https://arxiv.org/abs/2411.11171)!

### Usage

You can use LLäMmlein with the `transformers` library.
(Optional: install `flash-attn` to achieve the highest efficiency.)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "LSX-UniWue/LLaMmlein_1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```
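
The snippet above only loads the model; for a quick end-to-end check it can be combined with `generate`. This is a minimal sketch: the German prompt and the sampling settings are illustrative, and the commented flash-attention variant is an optional assumption rather than part of the model card.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "LSX-UniWue/LLaMmlein_1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# With a CUDA GPU and flash-attn installed, this variant should be faster (assumption, not from the card):
# model = AutoModelForCausalLM.from_pretrained(
#     model_id, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", device_map="auto"
# )

prompt = "Die Würzburger Residenz ist"  # illustrative German prompt, not taken from the model card
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```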

### Intermediate Checkpoints

In addition to the final model checkpoint, we publish intermediate checkpoints throughout the full training process as unique branches in this repository.
A specific checkpoint can be loaded like this:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "LSX-UniWue/LLaMmlein_1B"
revision = "iter-00420000-ckpt"
tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision)
model = AutoModelForCausalLM.from_pretrained(model_id, revision=revision)
```
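
Because each intermediate checkpoint is a branch, the available revisions can also be discovered programmatically. Below is a small sketch using `huggingface_hub.list_repo_refs`; the `iter-` prefix filter is an assumption based on the `iter-00420000-ckpt` naming shown above.

```python
from huggingface_hub import list_repo_refs

# List all branches of the repository and keep the checkpoint-style ones.
refs = list_repo_refs("LSX-UniWue/LLaMmlein_1B")
checkpoints = sorted(ref.name for ref in refs.branches if ref.name.startswith("iter-"))
print(f"{len(checkpoints)} intermediate checkpoints found, e.g. {checkpoints[:3]}")
```

Any of the returned branch names can then be passed as `revision` in the snippet above.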

Alongside the model itself, each branch contains all data points that were used to train the model up to that point.
In the corresponding folder, named after the checkpoint, you can find several `.log` files (depending on the number of GPUs) of the following format:

```json
{"time": 1739809392.679516,
 "iter_num": 0,

 "file_id": [0, 0, 0, 0, 0, 0, 0, 0],
 "process_rank": 0}
```
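
To get an overview of which data went into a given checkpoint, the `.log` records can be read with the standard `json` module. The sketch below assumes a local folder named after the checkpoint with one `.log` file per GPU rank, each holding one or more records of the format shown above; adjust the parsing if the layout differs.

```python
import json
from pathlib import Path

log_dir = Path("iter-00420000-ckpt")  # hypothetical local copy of a checkpoint's log folder
decoder = json.JSONDecoder()

for log_file in sorted(log_dir.glob("*.log")):
    text = log_file.read_text()
    pos = 0
    while pos < len(text):
        # Skip whitespace between records, then decode the next JSON object.
        while pos < len(text) and text[pos].isspace():
            pos += 1
        if pos >= len(text):
            break
        # raw_decode handles both a single record and several concatenated records per file.
        record, pos = decoder.raw_decode(text, pos)
        print(log_file.name, "rank:", record.get("process_rank"), "file_id:", record.get("file_id"))
```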

Note: Our earlier models from the paper, which do not include data logging, are available at:

* [LLäMmlein 1B prerelease](https://huggingface.co/LSX-UniWue/LLaMmlein_1B_prerelease)
* [LLäMmlein 120M prerelease](https://huggingface.co/LSX-UniWue/LLaMmlein_120M_prerelease)

### License

We release the LLäMmlein models under a research-only RAIL-M license. See [license.md](./license.md) for details.