Ellio98 committed on
Commit 755dde7 · verified · 1 Parent(s): 8f81986

End of training

Files changed (5)
  1. README.md +51 -51
  2. config.json +3 -3
  3. model.safetensors +2 -2
  4. tokenizer.json +2 -11
  5. training_args.bin +2 -2
README.md CHANGED
@@ -1,69 +1,69 @@
 ---
-library_name: transformers
 base_model: Ellio98/mistral-0.5B-base
+library_name: transformers
+model_name: mistral-0.5B-Instruct-v0.1
 tags:
 - generated_from_trainer
-model-index:
-- name: mistral-0.5B-Instruct-v0.1
-  results: []
+- trl
+- gkd
+licence: license
 ---

-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
-# mistral-0.5B-Instruct-v0.1
-
-This model is a fine-tuned version of [Ellio98/mistral-0.5B-base](https://huggingface.co/Ellio98/mistral-0.5B-base) on an unknown dataset.
-It achieves the following results on the evaluation set:
-- Loss: nan
-
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
+# Model Card for mistral-0.5B-Instruct-v0.1
+
+This model is a fine-tuned version of [Ellio98/mistral-0.5B-base](https://huggingface.co/Ellio98/mistral-0.5B-base).
+It has been trained using [TRL](https://github.com/huggingface/trl).
+
+## Quick start
+
+```python
+from transformers import pipeline
+
+question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+generator = pipeline("text-generation", model="Ellio98/mistral-0.5B-Instruct-v0.1", device="cuda")
+output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+print(output["generated_text"])
+```

 ## Training procedure

-### Training hyperparameters
-
-The following hyperparameters were used during training:
-- learning_rate: 0.0005
-- train_batch_size: 4
-- eval_batch_size: 4
-- seed: 42
-- gradient_accumulation_steps: 4
-- total_train_batch_size: 16
-- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
-- lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps: 100
-- num_epochs: 1
-
-### Training results
-
-| Training Loss | Epoch  | Step | Validation Loss |
-|:-------------:|:------:|:----:|:---------------:|
-| 0.0           | 0.1000 | 125  | nan             |
-| 0.0           | 0.2000 | 250  | nan             |
-| 0.0           | 0.2999 | 375  | nan             |
-| 0.0           | 0.3999 | 500  | nan             |
-| 0.0           | 0.4999 | 625  | nan             |
-| 0.0           | 0.5999 | 750  | nan             |
-| 0.0           | 0.6999 | 875  | nan             |
-| 0.0           | 0.7998 | 1000 | nan             |
-| 0.0           | 0.8998 | 1125 | nan             |
-| 0.0           | 0.9998 | 1250 | nan             |
+
+This model was trained with GKD, a method introduced in [On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes](https://huggingface.co/papers/2306.13649).

 ### Framework versions

-- Transformers 4.47.0
-- Pytorch 2.5.1+cu121
-- Datasets 3.3.1
-- Tokenizers 0.21.0
+- TRL: 0.15.2
+- Transformers: 4.47.0
+- Pytorch: 2.5.1+cu121
+- Datasets: 3.3.1
+- Tokenizers: 0.21.0
+
+## Citations
+
+Cite GKD as:
+
+```bibtex
+@inproceedings{agarwal2024on-policy,
+    title        = {{On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes}},
+    author       = {Rishabh Agarwal and Nino Vieillard and Yongchao Zhou and Piotr Stanczyk and Sabela Ramos Garea and Matthieu Geist and Olivier Bachem},
+    year         = 2024,
+    booktitle    = {The Twelfth International Conference on Learning Representations, {ICLR} 2024, Vienna, Austria, May 7-11, 2024},
+    publisher    = {OpenReview.net},
+    url          = {https://openreview.net/forum?id=3zKtaqxLhW},
+}
+```
+
+Cite TRL as:
+
+```bibtex
+@misc{vonwerra2022trl,
+    title        = {{TRL: Transformer Reinforcement Learning}},
+    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+    year         = 2020,
+    journal      = {GitHub repository},
+    publisher    = {GitHub},
+    howpublished = {\url{https://github.com/huggingface/trl}}
+}
+```
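The new card says the model was trained with GKD via TRL, but the commit does not include the training script. For orientation, a minimal sketch of what such a run looks like with TRL's `GKDTrainer`: the learning rate, batch size, gradient accumulation, and epoch count are copied from the hyperparameter list in the old card above, while the teacher checkpoint, the dummy dataset, and the `lmbda`/`beta` values are illustrative assumptions.

```python
# Hedged sketch of a GKD run with TRL (0.15.x). The teacher model, the dummy
# dataset, and lmbda/beta are assumptions; only the student checkpoint and the
# lr/batch/epoch settings come from this repo's old model card.
from datasets import Dataset
from transformers import AutoTokenizer
from trl import GKDConfig, GKDTrainer

student = "Ellio98/mistral-0.5B-base"
teacher = "mistralai/Mistral-7B-Instruct-v0.2"  # hypothetical teacher

tokenizer = AutoTokenizer.from_pretrained(student)

# GKDTrainer expects chat-formatted examples under a "messages" key.
train_dataset = Dataset.from_dict({
    "messages": [[
        {"role": "user", "content": "Hi, how are you?"},
        {"role": "assistant", "content": "I'm doing well, thanks!"},
    ]] * 100
})

training_args = GKDConfig(
    output_dir="mistral-0.5B-Instruct-v0.1",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=5e-4,
    num_train_epochs=1,
    lmbda=0.5,  # fraction of on-policy (student-generated) batches; assumed
    beta=0.5,   # interpolation coefficient of the generalized JSD loss; assumed
)

trainer = GKDTrainer(
    model=student,
    teacher_model=teacher,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```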
config.json CHANGED
@@ -16,7 +16,7 @@
   "hidden_size": 1536,
   "initializer_range": 0.02,
   "intermediate_size": 4096,
-  "max_position_embeddings": 2048,
+  "max_position_embeddings": 4096,
   "model_type": "mistral",
   "num_attention_heads": 8,
   "num_hidden_layers": 16,
@@ -28,6 +28,6 @@
   "tie_word_embeddings": false,
   "torch_dtype": "float32",
   "transformers_version": "4.47.0",
-  "use_cache": true,
-  "vocab_size": 32000
+  "use_cache": false,
+  "vocab_size": 32768
 }
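This commit doubles the advertised context window (`max_position_embeddings` 2048 → 4096), grows the vocabulary from 32000 to 32768 entries, and switches off `use_cache`, a setting usually disabled during training (it conflicts with gradient checkpointing) and re-enabled for inference; the commit itself does not state the reasons. A quick way to inspect the live values:

```python
# Read the post-commit config straight from the Hub and confirm the fields
# this diff touches.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Ellio98/mistral-0.5B-Instruct-v0.1")
print(config.max_position_embeddings)  # 4096 (was 2048)
print(config.vocab_size)               # 32768 (was 32000)
print(config.use_cache)                # False after this commit

# use_cache can be re-enabled for faster autoregressive generation:
config.use_cache = True
```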
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:77612e60162ff8aaaa3dac64dbe7c603d1e3dba02a733b4017b6a0da2bec9917
-size 2054379856
+oid sha256:30a5957a9d53c48936412c565e53ca82037d7b2ae88cd5b0b9237e33dd45f800
+size 2063817040
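The weights grew by 2,063,817,040 − 2,054,379,856 = 9,437,184 bytes, which matches the vocabulary change exactly: 768 new tokens × 1536 hidden dims × 2 matrices (`embed_tokens` and `lm_head`, untied since `tie_word_embeddings` is false) × 4 bytes for float32. To check a downloaded copy against the LFS pointer's hash:

```python
# Verify a downloaded weights file against the sha256 oid in its LFS pointer.
# The expected hash below is the post-commit oid from this diff.
import hashlib

EXPECTED = "30a5957a9d53c48936412c565e53ca82037d7b2ae88cd5b0b9237e33dd45f800"

h = hashlib.sha256()
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)
assert h.hexdigest() == EXPECTED, "checksum mismatch"
```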
tokenizer.json CHANGED
@@ -2,20 +2,11 @@
   "version": "1.0",
   "truncation": {
     "direction": "Right",
-    "max_length": 512,
+    "max_length": 199,
     "strategy": "LongestFirst",
     "stride": 0
   },
-  "padding": {
-    "strategy": {
-      "Fixed": 512
-    },
-    "direction": "Left",
-    "pad_to_multiple_of": null,
-    "pad_id": 2,
-    "pad_type_id": 0,
-    "pad_token": "</s>"
-  },
+  "padding": null,
   "added_tokens": [
     {
       "id": 0,
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:afd73705e4ffeffaeae1bb1eeceadc6314146a3eed261671e37a854cfed3fbc3
-size 5304
+oid sha256:97c4112d49a5f5502436f07c430a3c5b2116dbc352be2ae2975ebc3a342d79b8
+size 5752
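`training_args.bin` is the pickled arguments object the Trainer saves alongside checkpoints; the size change is consistent with a switch to a different args class (plausibly TRL's `GKDConfig`, given the new card), though the diff itself only shows hashes. Loading it for inspection:

```python
# training_args.bin is arbitrary pickle data saved by the Trainer, so only
# load files you trust; on PyTorch >= 2.6, weights_only=False is required.
# Unpickling needs the defining library importable (e.g. trl for GKDConfig).
import torch

args = torch.load("training_args.bin", weights_only=False)
print(type(args).__name__)               # plausibly GKDConfig for a TRL GKD run
print(args.learning_rate)                # 0.0005 per the old card
print(args.per_device_train_batch_size)  # 4 per the old card
```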