LLAMA2 7B Guanaco Pico Adapter
This is an 8-bit quantized adapter over the llama2-7b-chat-hf checkpoint. To use the merged version of this model, refer to manojkumarvohra/llama2-7B-Chat-hf-8bit-guanaco-pico-finetuned => https://huggingface.co/manojkumarvohra/llama2-7B-Chat-hf-8bit-guanaco-pico-finetuned. This adapter is meant only for learning purposes and is not recommended for any business use.
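A minimal sketch of how this adapter could be loaded on top of the base checkpoint with `peft` and `transformers` (the repo IDs are the ones listed on this card; the prompt is just a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "manojkumarvohra/llama2-7B-Chat-8bit-guanaco-pico-adapter-hf"

tokenizer = AutoTokenizer.from_pretrained(base_id)

# Load the base checkpoint in 8-bit, matching the quantization config below.
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    load_in_8bit=True,
    device_map="auto",
)

# Attach the adapter weights over the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```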
Training procedure
The following bitsandbytes quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
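For reference, a sketch of how the settings above map onto a `transformers` BitsAndBytesConfig (the 4-bit fields are inactive defaults here, since loading is in 8-bit):

```python
import torch
from transformers import BitsAndBytesConfig

# The training-time quantization settings from the list above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```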
Framework versions
PEFT 0.4.0
Adapter: manojkumarvohra/llama2-7B-Chat-8bit-guanaco-pico-adapter-hf
Base model: meta-llama/Llama-2-7b-chat-hf