TheBloke committed
Commit 45f96a8
1 Parent(s): 3fd03ba

Initial GGUF model commit

Files changed (1):
  1. README.md +280 -0

README.md ADDED
---
inference: false
license: llama2
model-index:
- name: Phind-CodeLlama-34B-v1
  results:
  - dataset:
      name: HumanEval
      type: openai_humaneval
    metrics:
    - name: pass@1
      type: pass@1
      value: 69.5%
      verified: false
    task:
      type: text-generation
model_creator: Phind
model_link: https://huggingface.co/Phind/Phind-CodeLlama-34B-Python-v1
model_name: Phind CodeLlama 34B Python v1
model_type: llama
quantized_by: TheBloke
tags:
- code llama
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Phind CodeLlama 34B Python v1 - GGUF
- Model creator: [Phind](https://huggingface.co/Phind)
- Original model: [Phind CodeLlama 34B Python v1](https://huggingface.co/Phind/Phind-CodeLlama-34B-Python-v1)

## Description

This repo contains GGUF format model files for [Phind's Phind CodeLlama 34B Python v1](https://huggingface.co/Phind/Phind-CodeLlama-34B-Python-v1).

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including, for the first time, full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.

As of August 25th, here is a list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend - the llama-cpp-python backend should work soon too.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), which supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU acceleration. Especially good for storytelling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), which should now work; choose the `c_transformers` backend. A great web UI with many interesting features. Supports CUDA GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), which supports GGUF as of version 0.2.24! A Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), which supports GGUF as of version 0.1.79. A Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), which added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.

The clients and libraries below are expected to add GGUF support shortly:
* [LM Studio](https://lmstudio.ai/), which should be updated by the end of August 25th.
<!-- README_GGUF.md-about-gguf end -->

<!-- repositories-available start -->
## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGML)
* [Phind's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Phind/Phind-CodeLlama-34B-Python-v1)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Plain-with-newline

```
{prompt} \n
```

<!-- prompt-template end -->
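
For example, filling in the template with the sample task used in the original model card below:

```
Write me a linked list implementation: \n
```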
<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9).

As of August 24th 2023 they are also compatible with KoboldCpp, release 1.41 and later.

They are not yet compatible with any other third-party UIs, libraries or utilities, but this is expected to change very soon.

## Explanation of quantisation methods
<details>
  <summary>Click to see details</summary>

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw (a worked example follows this list).
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.

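As a sanity check, the 4.5 bpw figure for GGML_TYPE_Q4_K can be derived from the structure described above, assuming the super-block also carries two fp16 scale constants (an implementation detail not stated in this card): each super-block holds 8 × 32 = 256 weights at 4 bits, plus 8 six-bit scales and 8 six-bit mins:

$$\frac{256 \times 4 + 8 \times (6+6) + 2 \times 16}{256} = \frac{1024 + 96 + 32}{256} = 4.5\ \text{bpw}$$
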
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->

<!-- README_GGUF.md-provided-files start -->
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [phind-codellama-34b-python-v1.Q2_K.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q2_K.gguf) | Q2_K | 2 | 14.21 GB | 16.71 GB | smallest, significant quality loss - not recommended for most purposes |
| [phind-codellama-34b-python-v1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q3_K_S.gguf) | Q3_K_S | 3 | 14.61 GB | 17.11 GB | very small, high quality loss |
| [phind-codellama-34b-python-v1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q3_K_M.gguf) | Q3_K_M | 3 | 16.28 GB | 18.78 GB | very small, high quality loss |
| [phind-codellama-34b-python-v1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q3_K_L.gguf) | Q3_K_L | 3 | 17.77 GB | 20.27 GB | small, substantial quality loss |
| [phind-codellama-34b-python-v1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q4_K_S.gguf) | Q4_K_S | 4 | 19.15 GB | 21.65 GB | small, greater quality loss |
| [phind-codellama-34b-python-v1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q4_K_M.gguf) | Q4_K_M | 4 | 20.22 GB | 22.72 GB | medium, balanced quality - recommended |
| [phind-codellama-34b-python-v1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q5_K_S.gguf) | Q5_K_S | 5 | 23.24 GB | 25.74 GB | large, low quality loss - recommended |
| [phind-codellama-34b-python-v1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q5_K_M.gguf) | Q5_K_M | 5 | 23.84 GB | 26.34 GB | large, very low quality loss - recommended |
| [phind-codellama-34b-python-v1.Q6_K.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q6_K.gguf) | Q6_K | 6 | 27.68 GB | 30.18 GB | very large, extremely low quality loss |
| [phind-codellama-34b-python-v1.Q8_0.gguf](https://huggingface.co/TheBloke/Phind-CodeLlama-34B-Python-v1-GGUF/blob/main/phind-codellama-34b-python-v1.Q8_0.gguf) | Q8_0 | 8 | 35.79 GB | 38.29 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->

<!-- README_GGUF.md-how-to-run start -->
## How to run in `llama.cpp`

Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9) or later.

For compatibility with older versions of llama.cpp, or for use with third-party clients and libraries, please use GGML files instead.

```
./main -t 10 -ngl 32 -m phind-codellama-34b-python-v1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Write a story about llamas: \n"
```

Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
<!-- README_GGUF.md-how-to-run end -->
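
## How to run from Python code

You can also load these GGUF files from Python using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) version 0.1.79 or later (listed above as supporting GGUF). The snippet below is a minimal sketch, not a tested recipe: it assumes the Q4_K_M file has been downloaded to the current directory, and that the library was built with GPU support if `n_gpu_layers` is used.

```python
from llama_cpp import Llama

# Load the quantised model; n_ctx matches this model's 4096-token sequence length.
llm = Llama(
    model_path="./phind-codellama-34b-python-v1.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,  # number of layers to offload to GPU; remove or set to 0 for CPU-only
)

# Use the plain-with-newline prompt format described above.
output = llm(
    "Write me a linked list implementation: \n",
    max_tokens=512,
    temperature=0.7,
    repeat_penalty=1.1,
)

print(output["choices"][0]["text"])
```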

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Kacper Wikieł, knownsqashed, Leonard Tan, Asp the Wyvern, Daniel P. Andersen, Luke Pendergrass, Stanislav Ovsiannikov, RoA, Dave, Ai Maven, Kalila, Will Dee, Imad Khwaja, Nitin Borwankar, Joseph William Delisle, Tony Hughes, Cory Kujawski, Rishabh Srivastava, Russ Johnson, Stephen Murray, Lone Striker, Johann-Peter Hartmann, Elle, J, Deep Realms, SuperWojo, Raven Klaugh, Sebastain Graf, ReadyPlayerEmma, Alps Aficionado, Mano Prime, Derek Yates, Gabriel Puliatti, Mesiah Bishop, Magnesian, Sean Connelly, biorpg, Iucharbius, Olakabola, Fen Risland, Space Cruiser, theTransient, Illia Dulskyi, Thomas Belote, Spencer Kim, Pieter, John Detwiler, Fred von Graf, Michael Davis, Swaroop Kallakuri, subjectnull, Clay Pascal, Subspace Studios, Chris Smitley, Enrico Ros, usrbinkat, Steven Wood, alfie_i, David Ziegler, Willem Michiel, Matthew Berman, Andrey, Pyrater, Jeffrey Morgan, vamX, LangChain4j, Luke @flexchar, Trenton Dambrowitz, Pierre Kircher, Alex, Sam, James Bentley, Edmond Seymore, Eugene Pentland, Pedro Madruga, Rainer Wilmers, Dan Guido, Nathan LeClaire, Spiking Neurons AB, Talal Aujan, zynix, Artur Olbinski, Michael Levine, 阿明, K, John Villwock, Nikolai Manek, Femi Adebogun, senxiiz, Deo Leter, NimbleBox.ai, Viktor Bowallius, Geoffrey Montalvo, Mandus, Ajan Kanaga, ya boyyy, Jonathan Leane, webtim, Brandon Frisco, danny, Alexandros Triantafyllidis, Gabriel Tamborski, Randy H, terasurfer, Vadim, Junyu Yang, Vitor Caleffi, Chadd, transmissions 11

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Phind's Phind CodeLlama 34B Python v1


# **Phind-CodeLlama-34B-Python-v1**
We've fine-tuned CodeLlama-34B and CodeLlama-34B-Python on an internal Phind dataset, achieving 67.6% and 69.5% pass@1 on HumanEval, respectively. GPT-4 achieves 67%. We've applied OpenAI's decontamination methodology to our dataset to ensure result validity.

More details can be found on our [blog post](https://www.phind.com/blog/code-llama-beats-gpt4).

## Model Details
This model is fine-tuned from CodeLlama-34B-Python and achieves 69.5% pass@1 on HumanEval.

## Dataset Details
We fine-tuned on a proprietary dataset of ~80k high-quality programming problems and solutions. This dataset consists of instruction-answer pairs instead of code completion examples, making it structurally different from HumanEval. The Phind models were trained for 2 epochs, for a total of ~160k examples shown. LoRA was not used -- both models are native fine-tunes. We used DeepSpeed ZeRO 3 and Flash Attention 2 to train these models in three hours on 32 A100-80GB GPUs. We used a sequence length of 4096 tokens.

## How to Get Started with the Model

Make sure to install Transformers from the main git branch:

```bash
pip install git+https://github.com/huggingface/transformers.git
```

## How to Prompt the Model
**Please note that this model is somewhat instruction-tuned, but not chat-tuned.**

Do not try to use the Llama chat markup with this model. Instead, simply tell it what you want and add ": \n" at the end of your task.

For example:

```
Write me a linked list implementation: \n
```

## How to reproduce HumanEval Results

To reproduce our results:

```python
from transformers import AutoTokenizer, LlamaForCausalLM
from human_eval.data import write_jsonl, read_problems
from tqdm import tqdm

# initialize the model
model_path = "Phind/Phind-CodeLlama-34B-Python-v1"  # the model this card describes
model = LlamaForCausalLM.from_pretrained(model_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_path)

# HumanEval helper
def generate_one_completion(prompt: str):
    tokenizer.pad_token = tokenizer.eos_token
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=4096)

    # Generate
    generate_ids = model.generate(inputs.input_ids.to("cuda"), max_new_tokens=256, do_sample=True, top_p=0.75, top_k=40, temperature=0.1)
    completion = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
    completion = completion.replace(prompt, "").split("\n\n\n")[0]

    return completion

# perform HumanEval
problems = read_problems()

num_samples_per_task = 1
samples = [
    dict(task_id=task_id, completion=generate_one_completion(problems[task_id]["prompt"]))
    for task_id in tqdm(problems)
    for _ in range(num_samples_per_task)
]
write_jsonl("samples.jsonl", samples)

# run `evaluate_functional_correctness samples.jsonl` in your HumanEval code sandbox
```
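
For the final scoring step referenced in the comment above, a sketch assuming OpenAI's [human-eval](https://github.com/openai/human-eval) harness (which deliberately ships with its code-execution call disabled; read the repo's safety warnings and enable it only inside a sandbox):

```bash
pip install git+https://github.com/openai/human-eval.git
evaluate_functional_correctness samples.jsonl
```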

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model has undergone very limited testing. Additional safety testing should be performed before any real-world deployments.


## Training details

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

- **Hardware Type:** 32x A100-80GB
- **Hours used:** 90 GPU-hours
- **Cloud Provider:** AWS
- **Compute Region:** us-east-1

<!-- original-model-card end -->