#13 Using Prompt Template (opened 8 months ago by fredrohn)
#10 No tokenizer available? (opened about 1 year ago by dspyrhsu; 1 comment)
#8 How is it possible that Q4_K_M performs better than any Q5, Q6 and even Q8? (opened about 1 year ago by alexcardo; 1 comment)
#7 [AUTOMATED] Model Memory Requirements (opened over 1 year ago by model-sizer-bot)
#6 Failed to create LLM 'zephyr' from '/models/zephyr-7b-alpha.Q5_K_M.gguf'. (opened over 1 year ago by whoknowsmeinhf)
#5 Addressing Inconsistencies in Model Outputs: Understanding and Solutions (opened over 1 year ago by shivammehta)
#4 zhapyer (opened over 1 year ago by bharathi1604)
#3 Will there be a re-upload of this model? (opened over 1 year ago by SolidSnacke; 2 comments)
#2 Free and ready to use zephyr-7B-beta-GGUF model as OpenAI API compatible endpoint (opened over 1 year ago by limcheekin; 12 comments)
#1 Possible Loading Error with GPT4All (opened over 1 year ago by deleted; 11 comments)