---
base_model: Qwen/Qwen1.5-0.5B
language:
- en
license: other
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-0.5B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- pretrained
- llama-cpp
- gguf-my-repo
---

*Produced by [Antigma Labs](https://antigma.ai)*
## llama.cpp quantization
Quantized using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5165">b5165</a>.
Original model: https://huggingface.co/Qwen/Qwen1.5-0.5B
Run the files directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or with any other llama.cpp-based project.
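For example, a quick local completion with llama.cpp's CLI binary might look like the following (a minimal sketch; the binary name and flags follow recent llama.cpp builds, and the model path assumes the GGUF file from the table below has been downloaded to the current directory):

```shell
# Run a short completion with the quantized model using llama.cpp's CLI.
# -m: path to the GGUF file, -p: prompt, -n: max tokens to generate.
./llama-cli -m ./qwen1.5-0.5b-q4_k_m.gguf \
    -p "The capital of France is" \
    -n 64 \
    --temp 0.7
```

Adjust the paths and sampling flags to taste; `llama-server` from the same release can serve the model over HTTP instead.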
## Prompt format
Qwen1.5-0.5B is a base (pretrained) model, so it expects plain-text completion prompts rather than a chat template:
```
{prompt}
```
The chat-tuned Qwen1.5 variants use the ChatML format (`<|im_start|>` / `<|im_end|>` markers).
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split |
| -------- | ---------- | --------- | ----- |
| [qwen1.5-0.5b-q4_k_m.gguf](https://huggingface.co/Brianpu/Qwen1.5-0.5B-GGUF/blob/main/qwen1.5-0.5b-q4_k_m.gguf) | Q4_K_M | 0.38 GB | False |

## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download Brianpu/Qwen1.5-0.5B-GGUF --include "qwen1.5-0.5b-q4_k_m.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download Brianpu/Qwen1.5-0.5B-GGUF --include "qwen1.5-0.5b-q4_k_m.gguf/*" --local-dir ./
```
You can either specify a new local-dir (e.g. Qwen1.5-0.5B-GGUF) or download everything in place (./).
</details>
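
As an alternative to the CLI, the file can also be fetched programmatically. A minimal stdlib-only sketch that builds the Hub's direct-download (`resolve`) URL for the file above and fetches it with `urllib` (the `hub_resolve_url` helper and the URL layout are illustrative assumptions, not part of the huggingface_hub API):

```python
import urllib.request


def hub_resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download (resolve) URL for a file in a Hub repo.

    Hypothetical helper: mirrors the https://huggingface.co/{repo}/resolve/...
    URL scheme used by the links in the table above.
    """
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"


url = hub_resolve_url("Brianpu/Qwen1.5-0.5B-GGUF", "qwen1.5-0.5b-q4_k_m.gguf")
print(url)

# Uncomment to actually download the file (~0.38 GB):
# urllib.request.urlretrieve(url, "qwen1.5-0.5b-q4_k_m.gguf")
```

`huggingface_hub`'s own `hf_hub_download` does the same job with caching and resume support, and is the better choice in real projects.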