Please release an AWQ model: Baichuan-M1-14B-Instruct-AWQ
#5 opened about 2 months ago by classdemo
Support for older GPUs (pre-Ampere)
#4 opened 3 months ago by qwq38b
Request support for macOS MPS / Ollama
#3 opened 3 months ago by robbie-wx
[Finetuning Code] Align-Anything supports Baichuan-M1
#2 opened 3 months ago by XuyaoWang

Requesting Support for GGUF Quantization of Baichuan-M1-14B-Instruct through llama.cpp
#1 opened 3 months ago by Doctor-Chad-PhD
