Not compatible with transformers library
I get an error when using the transformers library to run this model:
OSError: Can't load tokenizer for 'unsloth/Qwen3-30B-A3B-GGUF'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'unsloth/Qwen3-30B-A3B-GGUF' is the correct path to a directory containing all relevant files for a Qwen2TokenizerFast tokenizer.
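The GGUF repo itself doesn't seem to ship the usual tokenizer files, which is why the tokenizer lookup fails. If you just need the tokenizer, one workaround is to load it from the base model repository instead; a minimal sketch, assuming Qwen/Qwen3-30B-A3B is the model this GGUF was converted from:

```python
from transformers import AutoTokenizer

# Workaround sketch: pull the tokenizer from the original (non-GGUF) repo,
# since the GGUF repo does not appear to contain tokenizer files.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-30B-A3B")
print(tokenizer("hello world").input_ids)
```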
Are you using the latest version?
I am getting ValueError: GGUF model with architecture qwen3moe is not supported yet.
on both of these calls:
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id, filename = "unsloth/Qwen3-30B-A3B-GGUF", "Qwen3-30B-A3B-Q4_K_M.gguf"  # filename is just an example quant from the repo
tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=filename)  # same with Qwen2TokenizerFast
model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=filename)
I am running transformers-4.52.0.dev0 - am I missing something, or is it just a matter of waiting?
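As far as I know, GGUF loading in transformers is wired up per architecture, so qwen3moe will only work once that mapping lands in a release. In the meantime one fallback is to load the original (non-GGUF) weights; a minimal sketch, assuming Qwen/Qwen3-30B-A3B is the base repo this GGUF was converted from, that accelerate is installed for device_map, and that you have the memory for unquantized weights:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Fallback sketch: load the original safetensors weights instead of the GGUF.
# Assumes Qwen/Qwen3-30B-A3B is the base repo (needs far more memory than a quant).
model_id = "Qwen/Qwen3-30B-A3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
```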
I tried to run the GGUF Qwen3 model with vLLM v0.8.5, but also got:
"ValueError: GGUF model with architecture qwen3 is not supported yet."