Is it possible to make this exact model in GGUF? (#4, opened about 1 year ago by Goldenblood56)
Works with 16 GB RAM, 8 GB VRAM, BUT... (#3, opened over 1 year ago by MrDevolver)
Run guanaco model with llama.cpp, get gibberish output. (3 replies; #2, opened over 1 year ago by xiaojinchuan)
Using CPU only (3 replies; #1, opened over 1 year ago by BBLL3456)