CultriX committed on
Commit 612f386 · verified · 1 Parent(s): 8d8cddb

Upload folder using huggingface_hub

Files changed (3)
  1. README.md +25 -0
  2. adapter_config.json +26 -0
  3. adapter_model.safetensors +3 -0
README.md ADDED
@@ -0,0 +1,25 @@
+ ---
+ base_model:
+ - Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4
+ - Triangle104/DS-R1-Distill-Q2.5-14B-Harmony_V0.1
+ library_name: peft
+ tags:
+ - mergekit
+ - peft
+
+ ---
+ # Qwen2.5-14B-MUSR-LoRA-R32
+
+ This is a LoRA adapter extracted from a language model using [mergekit](https://github.com/arcee-ai/mergekit).
+
+ ## LoRA Details
+
+ This LoRA adapter was extracted from [Triangle104/DS-R1-Distill-Q2.5-14B-Harmony_V0.1](https://huggingface.co/Triangle104/DS-R1-Distill-Q2.5-14B-Harmony_V0.1) and uses [Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4) as a base.
+
+ ### Parameters
+
+ The following command was used to extract this LoRA adapter:
+
+ ```sh
+ mergekit/scripts/extract_lora.py --model Triangle104/DS-R1-Distill-Q2.5-14B-Harmony_V0.1 --base-model Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4 --out-path Qwen2.5-14B-MUSR-LoRA-R32 --cuda --no-lazy-unpickle --safe-serialization --trust-remote-code --read-to-gpu --copy-tokenizer --allow-crimes
+ ```
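Conceptually, LoRA extraction approximates the weight delta between the fine-tuned model and the base model with a truncated SVD, one linear layer at a time. A minimal NumPy sketch of that idea (toy shapes and the function name `extract_lora` are illustrative, not mergekit's actual implementation):

```python
import numpy as np

def extract_lora(w_ft: np.ndarray, w_base: np.ndarray, r: int):
    """Approximate (w_ft - w_base) with rank-r factors so that B @ A ~ delta."""
    delta = w_ft - w_base
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    b = u[:, :r] * s[:r]   # (out_features, r), singular values folded into B
    a = vt[:r, :]          # (r, in_features)
    return a, b

# Toy example: a delta that is exactly rank 4 is recovered exactly with r=4.
rng = np.random.default_rng(0)
w_base = rng.standard_normal((64, 32))
w_ft = w_base + rng.standard_normal((64, 4)) @ rng.standard_normal((4, 32))
a, b = extract_lora(w_ft, w_base, r=4)
err = np.abs(w_base + b @ a - w_ft).max()
print(err)  # ~0, up to floating-point error
```

For real model deltas the cut is lossy, which is why the choice of rank matters.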
adapter_config.json ADDED
@@ -0,0 +1,26 @@
+ {
+ "base_model_name_or_path": "Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4",
+ "peft_type": "LORA",
+ "use_rslora": false,
+ "target_modules": [
+ "gate_proj",
+ "o_proj",
+ "q_proj",
+ "up_proj",
+ "k_proj",
+ "lm_head",
+ "v_proj",
+ "down_proj"
+ ],
+ "modules_to_save": [
+ "embed_tokens"
+ ],
+ "task_type": "CAUSAL_LM",
+ "r": 128,
+ "lora_alpha": 128,
+ "rank_pattern": {},
+ "alpha_pattern": {},
+ "lora_dropout": 0.0,
+ "fan_in_fan_out": false,
+ "inference_mode": true
+ }
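At inference time each targeted layer applies the adapter as `y = x W^T + (lora_alpha / r) * x A^T B^T`; with `r = 128` and `lora_alpha = 128` the scale is 1.0. A NumPy sketch of this, using a toy rank instead of 128 (the function `lora_forward` is illustrative, not peft's API):

```python
import numpy as np

def lora_forward(x, w, a, b, r, alpha):
    """Apply a LoRA adapter without merging: x W^T + (alpha/r) * x A^T B^T."""
    scale = alpha / r  # 1.0 for this adapter (alpha == r)
    return x @ w.T + scale * (x @ a.T) @ b.T

rng = np.random.default_rng(1)
x = rng.standard_normal((2, 32))
w = rng.standard_normal((64, 32))   # frozen base weight
a = rng.standard_normal((8, 32))    # toy rank 8
b = rng.standard_normal((64, 8))
y = lora_forward(x, w, a, b, r=8, alpha=8)

# Merging B @ A into the base weight gives identical outputs.
merged = x @ (w + b @ a).T
print(np.allclose(y, merged))
```

Note that `lm_head` is in `target_modules` and `embed_tokens` is in `modules_to_save`, so the adapter also carries full (non-low-rank) embedding weights.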
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6b9dcf7472bf6f430a4af79df94f1919fa9eb6a5d584e3f52af7ad0e1f4aaaa0
+ size 2699180904