MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1-GGUF

Text Generation · GGUF · quantized · 2-bit · 3-bit · 4-bit precision · 5-bit · 6-bit · 8-bit precision · 16-bit · mixtral · Mixture of Experts · conversational
Community discussions (38)

Can't get any coherent result
#38 opened about 1 month ago by ramarivera

Strange errors with Mixtral-8x22B-Instruct-v0.1.Q5_K_M
#37 opened about 1 year ago by cyrilAub

'output_norm.weight' not found
2 · #36 opened about 1 year ago by harryballantyne

Mixtral 8x22B mixing up syllables
1 · #35 opened about 1 year ago by Stefanvarunix

How to merge .gguf into one file?
2 · #34 opened about 1 year ago by Lunitaris

MistralTokenizer
3 · #33 opened about 1 year ago by Esj-DL

Llama-cpp-python raises ValueError: Failed to create llama_context
#32 opened about 1 year ago by zhouzr

Does the tokenizer match, especially for function / tool calls?
1 · #31 opened about 1 year ago by zhouzr

Less is more
5 · #25 opened about 1 year ago by Henk717

llama_model_load: error loading model: vocab size mismatch
👍 1 · 4 · #8 opened about 1 year ago by luccazen

A request
❤️ 1 · 4 · #6 opened about 1 year ago by Hoioi