Not all tensors are loaded

#2
by AnanthMekaCisco - opened

error loading model: done_getting_tensors: wrong number of tensors; expected 292, got 291
llama_load_model_from_file: failed to load model
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
Traceback (most recent call last):
  File "/Users/anmeka/NI-ML-POC/HuggingFace-FoundationSecModel/Foundation-Sec-8B-Q4_K_M-GGUF/model_load_test.py", line 17, in <module>
    llm = Llama(
          ^^^^^^
  File "/opt/anaconda3/lib/python3.12/site-packages/llama_cpp/llama.py", line 957, in __init__
    self._n_vocab = self.n_vocab()
                    ^^^^^^^^^^^^^^
  File "/opt/anaconda3/lib/python3.12/site-packages/llama_cpp/llama.py", line 2264, in n_vocab
    return self._model.n_vocab()
           ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/lib/python3.12/site-packages/llama_cpp/llama.py", line 252, in n_vocab
    assert self.model is not None
           ^^^^^^^^^^^^^^^^^^^^^^
AssertionError
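The "expected 292, got 291" mismatch is between the tensor count declared in the GGUF header and the number of tensors llama.cpp actually reads before hitting end-of-file, which usually points at a truncated download or a file written by an incompatible converter. As a quick diagnostic, the declared count can be read straight from the header with the standard library. This is a sketch based on the GGUF v2/v3 header layout (4-byte magic, uint32 version, uint64 tensor count, uint64 metadata KV count); the path is hypothetical:

```python
import struct

def gguf_tensor_count(path: str) -> int:
    """Read the GGUF v2/v3 header and return the declared tensor count.

    Layout (little-endian): 4-byte magic b"GGUF", uint32 version,
    uint64 tensor_count, uint64 metadata_kv_count.
    """
    with open(path, "rb") as f:
        magic, version, n_tensors, n_kv = struct.unpack("<4sIQQ", f.read(24))
    if magic != b"GGUF":
        raise ValueError(f"{path} is not a GGUF file (magic={magic!r})")
    return n_tensors

# e.g. gguf_tensor_count("foundation-sec-8b-q4_k_m.gguf")
# A healthy copy of this model should declare 292.
```

If the header already says 292 but the load still fails at 291, the file body is short, so re-downloading the GGUF is the usual fix.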

Checked the file size and for any corruption:

(base) anmeka@ANMEKA-M-L8C1 Foundation-Sec-8B-Q4_K_M-GGUF % shasum -a 256 Foundation-Sec-8B-Q4_K_M-GGUF/foundation-sec-8b-q4_k_m.gguf
6883ec6480d218094cd88494fb006443c99f430d09ba26ed12ac0859c95cf7ba Foundation-Sec-8B-Q4_K_M-GGUF/foundation-sec-8b-q4_k_m.gguf
-rw-r--r-- 1 anmeka staff 4921462368 11 Jun 21:00 foundation-sec-8b-q4_k_m.gguf
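The same integrity check can be scripted so the ~4.9 GB file is hashed in chunks instead of being read into memory at once. A minimal stdlib sketch; the expected digest and size below are simply the values printed above, to be compared against whatever checksum the model page publishes (the local path is hypothetical):

```python
import hashlib
import os

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so a multi-GB GGUF never sits in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Values copied from the shell output above; compare them against the
# checksum/size listed for the file on the Hub.
EXPECTED_SHA256 = "6883ec6480d218094cd88494fb006443c99f430d09ba26ed12ac0859c95cf7ba"
EXPECTED_SIZE = 4921462368

path = "foundation-sec-8b-q4_k_m.gguf"  # hypothetical local path
if os.path.exists(path):
    print("size ok:", os.path.getsize(path) == EXPECTED_SIZE)
    print("hash ok:", sha256_of(path) == EXPECTED_SHA256)
```

If both checks pass, the file on disk matches what was hashed above, and the failure is more likely a version mismatch between the GGUF and the installed llama-cpp-python than a corrupt download.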
