Quantizations of BigCodeLLama LFG 🚀
An experimental CodeLlama frankenmerge of the 70B Instruct, Python, and base models, built to see how it benchmarks.
Models Merged
The following models were included in the merge:
- ../CodeLlama-70b-hf
- ../CodeLlama-70b-Instruct-hf
- ../CodeLlama-70b-Python-hf
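To reproduce the merge, the source models need to be available locally at the relative paths used in the config below. A minimal sketch, assuming the upstream Hugging Face repo IDs codellama/CodeLlama-70b-hf, codellama/CodeLlama-70b-Instruct-hf, and codellama/CodeLlama-70b-Python-hf:

# Assumed repo IDs; fetch each model into the path the merge config references
huggingface-cli download codellama/CodeLlama-70b-hf --local-dir ../CodeLlama-70b-hf
huggingface-cli download codellama/CodeLlama-70b-Instruct-hf --local-dir ../CodeLlama-70b-Instruct-hf
huggingface-cli download codellama/CodeLlama-70b-Python-hf --local-dir ../CodeLlama-70b-Python-hf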
Configuration
The following YAML configuration was used to produce this model:
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 69]
    model:
      model:
        path: ../CodeLlama-70b-hf
- sources:
  - layer_range: [66, 76]
    model:
      model:
        path: ../CodeLlama-70b-Instruct-hf
- sources:
  - layer_range: [42, 66]
    model:
      model:
        path: ../CodeLlama-70b-hf
- sources:
  - layer_range: [13, 37]
    model:
      model:
        path: ../CodeLlama-70b-Python-hf
- sources:
  - layer_range: [10, 80]
    model:
      model:
        path: ../CodeLlama-70b-Instruct-hf
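The passthrough method simply stacks the listed slices back to back; under mergekit's half-open layer_range convention this gives 69 + 10 + 24 + 24 + 70 = 197 transformer layers drawn from the three 80-layer source models. A minimal sketch of running the merge with mergekit, assuming the YAML above is saved as bigcode.yaml:

# Run the passthrough merge; --cuda uses the GPU for tensor operations if available
mergekit-yaml bigcode.yaml ./BigCodeLlama-169b --cuda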
The larger quantizations are split into parts for upload. To reassemble each file after downloading, run:
cat BigCodeLlama-169b-q2k.gguf.part0 BigCodeLlama-169b-q2k.gguf.part1 > BigCodeLlama-169b-q2k.gguf
cat BigCodeLlama-169b-q3km.gguf.part0 BigCodeLlama-169b-q3km.gguf.part1 > BigCodeLlama-169b-q3km.gguf
cat BigCodeLlama-169b-q4ks.gguf.part0 BigCodeLlama-169b-q4ks.gguf.part1 > BigCodeLlama-169b-q4ks.gguf
cat BigCodeLlama-169b-q5km.gguf.part0 BigCodeLlama-169b-q5km.gguf.part1 BigCodeLlama-169b-q5km.gguf.part2 > BigCodeLlama-169b-q5km.gguf
cat BigCodeLlama-169b-q8.gguf.part0 BigCodeLlama-169b-q8.gguf.part1 BigCodeLlama-169b-q8.gguf.part2 BigCodeLlama-169b-q8.gguf.part3 > BigCodeLlama-169b-q8.gguf
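Once reassembled, the GGUF files can be loaded directly by llama.cpp. A quick smoke test, assuming a built llama.cpp checkout (the binary name varies by version; newer builds call it llama-cli):

# Load the reassembled q4ks quant and generate a short completion
./main -m BigCodeLlama-169b-q4ks.gguf -p "def fibonacci(n):" -n 128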