RyanYr committed on
Commit 5d9d52c · verified · 1 Parent(s): b038a0e

Model save
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
-base_model: RyanYr/reflect_mini8Bit_om2-460k_sft-t1
+base_model: RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6
 library_name: transformers
-model_name: reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b0.5
+model_name: reflect_mini8B_Om2SftT2_Om2G8kOm2AgIpsdpIter1T02_b1.0
 tags:
 - generated_from_trainer
 - trl
@@ -9,9 +9,9 @@ tags:
 licence: license
 ---
 
-# Model Card for reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b0.5
+# Model Card for reflect_mini8B_Om2SftT2_Om2G8kOm2AgIpsdpIter1T02_b1.0
 
-This model is a fine-tuned version of [RyanYr/reflect_mini8Bit_om2-460k_sft-t1](https://huggingface.co/RyanYr/reflect_mini8Bit_om2-460k_sft-t1).
+This model is a fine-tuned version of [RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6](https://huggingface.co/RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6).
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
@@ -20,14 +20,14 @@ It has been trained using [TRL](https://github.com/huggingface/trl).
 from transformers import pipeline
 
 question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
-generator = pipeline("text-generation", model="RyanYr/reflect_mini8B_Om2SftT1-Om2G8kOm2AgIpsdpIter1T1_b0.5", device="cuda")
+generator = pipeline("text-generation", model="RyanYr/reflect_mini8B_Om2SftT2_Om2G8kOm2AgIpsdpIter1T02_b1.0", device="cuda")
 output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
 print(output["generated_text"])
 ```
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/0jnfhtyj)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/8zzk5a6s)
 
 This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
 
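The updated card states only that the model was trained with DPO using TRL. As rough orientation, the sketch below shows how a comparable DPO run could be set up with TRL's `DPOTrainer`; it is a minimal sketch under assumptions, not this commit's actual training script. The dataset name is a placeholder, the API shown is that of recent TRL releases, and `beta=1.0` is only a guess read off the `_b1.0` suffix in the new model name.

```python
# Minimal DPO fine-tuning sketch with TRL (assumed recent TRL API; not the author's actual script).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6"  # base model named in this commit
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder preference dataset with "prompt", "chosen", and "rejected" columns.
train_dataset = load_dataset("my-org/my-preference-pairs", split="train")

args = DPOConfig(
    output_dir="dpo-output",
    beta=1.0,  # guessed from the "_b1.0" model-name suffix; unverified
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
)

trainer = DPOTrainer(
    model=model,                 # the policy model; TRL creates the frozen reference copy itself
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older TRL versions take `tokenizer=` instead
)
trainer.train()
```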
last_checkpoint/config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "RyanYr/reflect_mini8Bit_om2-460k_sft-t1",
+  "_name_or_path": "RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6",
   "architectures": [
     "MistralForCausalLM"
   ],
last_checkpoint/model-00001-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:525ff610191554c89a15b5ef18a2fc3196519f5878fb5978b1499328c7fafaa0
+oid sha256:c6bdd8545c9ad01574273dd0b3c8deeef5ee81d1b6ac3edd60be5072effa3632
 size 4983016096
last_checkpoint/model-00002-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a05f6450585d459c438497c9a94318a70c470d81c786163c5c9f9e78f8a214a3
+oid sha256:cc85fb01321d7b15b030ffef518f86f6736d11b9248e55220d863a0c51227686
 size 4999836776
last_checkpoint/model-00003-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:651fa244689ad64bf4a6463df0c9cebc3b5aa4a6c43a495e8f8d82c4285198e4
+oid sha256:39e68854d7294897189b9fc00c8e53ab23ac18af7778ec9a64c6a13832d95d04
 size 4983067960
last_checkpoint/model-00004-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a8b2d574e2f43b04f4f7276e31c1967110c642020c8774dbb117ae7341087e12
+oid sha256:cd398d5a65a825334f17678e08d8313e3c7975c5523e3d081787973a7a049db9
 size 1073750144
last_checkpoint/training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f8e59085d8d6d8a265bdac9bbd1110865c36847f1d93dc1e4b5be688a06556a8
+oid sha256:1d4fa864da1ac1fe7f4bff3f2d1c0068a8366f7c27cefb89717f8fefeb9fa2f5
 size 8056