aashish1904 committed
Commit 1880798
1 Parent(s): c76ad6d

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +56 -0
README.md ADDED
---
base_model:
- nbeerbower/mistral-nemo-gutenberg-12B-v4
- NeverSleep/Lumimaid-v0.2-12B
library_name: transformers
tags:
- mergekit
- merge
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/NarraThinker12B-GGUF

This is a quantized version of [ClaudioItaly/NarraThinker12B](https://huggingface.co/ClaudioItaly/NarraThinker12B), created using llama.cpp.
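A minimal loading sketch with the `llama-cpp-python` bindings is shown below. The quant filename glob is an assumption; pick an actual `.gguf` file from this repository's file list.

```python
# Minimal usage sketch, assuming llama-cpp-python and huggingface_hub are
# installed (pip install llama-cpp-python huggingface_hub). The filename
# glob below is hypothetical -- match it to a real quant in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/NarraThinker12B-GGUF",
    filename="*Q4_K_M.gguf",  # glob for a mid-size quant; adjust as needed
    n_ctx=4096,               # context window to allocate
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a one-sentence story opening."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```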
# Original Model Card

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method.
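For intuition: SLERP interpolates between the two models' weight tensors along the arc of the hypersphere connecting them rather than along a straight line, so the blended weights keep a comparable scale. Below is a minimal, self-contained sketch of the interpolation step, assuming NumPy; it is illustrative only, not mergekit's implementation, which additionally applies the per-layer `t` schedule from the config and handles more edge cases.

```python
# Sketch of spherical linear interpolation (SLERP) between two weight
# tensors. Factor t=0 returns a, t=1 returns b.
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    # Direction of each tensor on the unit hypersphere.
    a_dir = a.ravel() / (np.linalg.norm(a) + eps)
    b_dir = b.ravel() / (np.linalg.norm(b) + eps)
    # Angle between the two weight vectors.
    omega = np.arccos(np.clip(np.dot(a_dir, b_dir), -1.0, 1.0))
    if omega < eps:
        # Nearly parallel: fall back to plain linear interpolation.
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
```

For two orthogonal unit vectors, `slerp(0.5, ...)` returns the point halfway along the arc (about `[0.707, 0.707]`), whereas plain linear interpolation would return `[0.5, 0.5]` and shrink the norm.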
### Models Merged

The following models were included in the merge:

* [nbeerbower/mistral-nemo-gutenberg-12B-v4](https://huggingface.co/nbeerbower/mistral-nemo-gutenberg-12B-v4)
* [NeverSleep/Lumimaid-v0.2-12B](https://huggingface.co/NeverSleep/Lumimaid-v0.2-12B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NeverSleep/Lumimaid-v0.2-12B
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v4
merge_method: slerp
base_model: nbeerbower/mistral-nemo-gutenberg-12B-v4
dtype: bfloat16
parameters:
  # t is the interpolation factor at each sampled layer: values near 0 stay
  # close to the base model, values near 1 move toward the other model.
  t: [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.2, 0.2, 0.2, 0.3, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9]
  layers: [0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, 100, 105, 110]
tokenizer_merge_method: slerp
tokenizer_parameters:
  t: 0.2
```
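To reproduce the merge itself rather than download the quantized result, a config like the one above can be fed to mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yaml ./merged-out`; exact flags depend on the installed mergekit version, so consult `mergekit-yaml --help`.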