MaziyarPanahi/Josiefied-abliteratedV4-Qwen2.5-14B-Inst-BaseMerge-TIES-GGUF contains GGUF format model files for CombinHorizon/Josiefied-abliteratedV4-Qwen2.5-14B-Inst-BaseMerge-TIES.
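Individual quantized files can be fetched with the `huggingface_hub` Python package. The snippet below is a minimal sketch: the filename (assumed here to be the Q4_K_M variant) is an assumption and should be checked against the repository's file listing before use.

```python
# Sketch: download a single quantized GGUF file from this repository.
# The filename below is an assumption (Q4_K_M variant); verify it against
# the repository's "Files and versions" tab before running.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="MaziyarPanahi/Josiefied-abliteratedV4-Qwen2.5-14B-Inst-BaseMerge-TIES-GGUF",
    filename="Josiefied-abliteratedV4-Qwen2.5-14B-Inst-BaseMerge-TIES.Q4_K_M.gguf",
)
print(model_path)  # local path to the downloaded GGUF file
```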
GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF (a short usage sketch with llama-cpp-python follows the list):

- llama.cpp, the source project for GGUF, offering a CLI and a server option
- llama-cpp-python, a Python library with GPU acceleration and an OpenAI-compatible API server
- text-generation-webui
- KoboldCpp
- LM Studio
- GPT4All
- LoLLMS Web UI
- candle
- ctransformers
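As one example of the libraries above, a downloaded GGUF file can be loaded with llama-cpp-python. This is a sketch under stated assumptions: the model path points at the file fetched in the download sketch above, and the context size, GPU offload, and sampling settings are illustrative defaults rather than values prescribed by this repository.

```python
# Sketch: run a downloaded GGUF file with llama-cpp-python.
# n_gpu_layers=-1 offloads all layers to the GPU if one is available;
# use 0 for CPU-only inference.
from llama_cpp import Llama

# Path to a locally downloaded GGUF file (see the download sketch above);
# the filename is an assumption and may differ in the actual repository.
model_path = "Josiefied-abliteratedV4-Qwen2.5-14B-Inst-BaseMerge-TIES.Q4_K_M.gguf"

llm = Llama(
    model_path=model_path,
    n_ctx=4096,
    n_gpu_layers=-1,
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain what the GGUF format is."}],
    max_tokens=256,
    temperature=0.7,
)
print(result["choices"][0]["message"]["content"])
```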
Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.