OSError: Can't load tokenizer for 'google/gemma-3-1b-it'.
I'm trying to run a custom app on Spaces that uses the gemma-3-1b-it model, but I'm running into a problem loading the tokenizer. It works fine on my local machine, but on Spaces I get the following runtime error:
Traceback (most recent call last):
  File "/home/user/app/app.py", line 12, in <module>
    gemma = GemmaLLM()
  File "/home/user/app/Gemma_Model.py", line 28, in __init__
    self.tokenizer = AutoTokenizer.from_pretrained(model_id)
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2046, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'google/gemma-3-1b-it'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'google/gemma-3-1b-it' is the correct path to a directory containing all relevant files for a GemmaTokenizerFast tokenizer.
The model_id is google/gemma-3-1b-it, which seems to load the main model just fine, but I can't find any answers online as to why this is happening with the tokenizer.
Hi @JeffMII ,
The issue happens because the tokenizer for google/gemma-3-1b-it needs some extra files that aren't always downloaded automatically in Hugging Face Spaces. To fix this, could you please try the approaches below:
Approach-1: Manually download the required tokenizer files, upload them to your Space, and load the tokenizer from that local path.
Approach-2: Try passing trust_remote_code=True to AutoTokenizer.from_pretrained() and check again.
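Approach-2 is a one-line change in your Gemma_Model.py. A sketch (the actual call still needs Hub access from the Space, so it is shown commented out here):

```python
# Forward trust_remote_code=True when loading the tokenizer.
# transformers must be listed in the Space's requirements.txt.
model_id = "google/gemma-3-1b-it"
load_kwargs = {"trust_remote_code": True}

# from transformers import AutoTokenizer
# self.tokenizer = AutoTokenizer.from_pretrained(model_id, **load_kwargs)
```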
Thank you.