---
quantized_by: k4yt3x
license: llama3.3
base_model:
- k4yt3x/Originia-Llama3.3-70B
library_name: transformers
tags:
- mergekit
- merge
---
F16 and Q4_K_M imatrix-quantized GGUF files for [k4yt3x/Originia-Llama3.3-70B](https://huggingface.co/k4yt3x/Originia-Llama3.3-70B).
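
A minimal sketch of loading the Q4_K_M quant with `llama-cpp-python`, assuming the quantized repo id and GGUF file name shown below (both are placeholders; substitute the actual file names listed in this repository):

```python
# Sketch only: repo_id and filename are assumptions, not the confirmed
# names of the files published in this repository.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M GGUF from the Hub (hypothetical repo id / file name).
model_path = hf_hub_download(
    repo_id="k4yt3x/Originia-Llama3.3-70B-GGUF",
    filename="Originia-Llama3.3-70B-Q4_K_M.gguf",
)

# Load the model; n_gpu_layers=-1 offloads all layers to GPU if available.
llm = Llama(model_path=model_path, n_ctx=8192, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

The same file also runs directly with the `llama.cpp` CLI or any other GGUF-compatible runtime; the Python binding is used here only for illustration.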