Available GGUF versions for the PatronusAI/glider model:

- BF16
- Q8_0
- Q5_K_M
- Q4_K_M
How to load your desired quantized model:

- Select the appropriate GGUF quantization from the available list above
- Run the following code:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("PatronusAI/glider-gguf", gguf_file="glider_{version_from_list}.gguf")
```
For example, to load the Q8_0 version, the script becomes:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("PatronusAI/glider-gguf", gguf_file="glider_Q8_0.gguf")
```
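The two steps above can be wrapped in a small helper that validates the quantization label before downloading. This is an illustrative sketch, not part of the model card: the function names `gguf_filename` and `load_glider` are assumptions, and the actual load requires network access.

```python
# Quantizations listed on this model card.
AVAILABLE_QUANTS = ("BF16", "Q8_0", "Q5_K_M", "Q4_K_M")

def gguf_filename(version: str) -> str:
    """Map a quantization label from the list above to its GGUF filename."""
    if version not in AVAILABLE_QUANTS:
        raise ValueError(f"unknown quantization {version!r}; expected one of {AVAILABLE_QUANTS}")
    return f"glider_{version}.gguf"

def load_glider(version: str = "Q8_0"):
    """Download and load the chosen quantization (requires network access)."""
    # Imported lazily so the filename logic above has no heavy dependencies.
    from transformers import AutoModelForCausalLM
    return AutoModelForCausalLM.from_pretrained(
        "PatronusAI/glider-gguf",
        gguf_file=gguf_filename(version),
    )
```

Validating the label up front gives a clear error message instead of a failed repository lookup after the download starts.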
For any issues or questions, reach out to Darshan Deshpande or Rebecca Qian.