Did imatrix fail?
The i1 quants are still not released.
Also, they released PPO and RM versions:
- https://huggingface.co/ByteDance-Seed/Seed-X-PPO-7B
- https://huggingface.co/ByteDance-Seed/Seed-X-RM-7B
I don't know which is better.
Edit: I tried this model. There is either no translated content, or just repetition of the first phrase.
imatrix indeed failed. I'll try to queue the two other models.
Seed-X-RM-7B: either llama.cpp didn't properly convert the model, or the model itself is seriously hosed (e.g. the tokenizer does not match the model weights):
llama_model_load: error loading model: check_tensor_dims: tensor 'token_embd.weight' has wrong shape; expected 4096, 65269, got 4096, 65272, 1, 1
And the PPO model failed with NaNs during imatrix generation. Tough world.
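For reference, the load failure above is llama.cpp's tensor-dimension check rejecting an embedding matrix whose row count disagrees with the declared vocab size. A minimal Python sketch of that check (the function name mirrors `check_tensor_dims` from the error message, and the numbers are taken from it; this is an illustration, not llama.cpp's actual C++ code):

```python
def check_embedding_shape(expected_vocab: int, hidden: int, actual_shape: tuple) -> None:
    """Mimic llama.cpp's check_tensor_dims for 'token_embd.weight'.

    The loader expects the embedding tensor to be (hidden, vocab_size) as
    declared in the GGUF metadata; any mismatch aborts model loading.
    """
    if actual_shape[:2] != (hidden, expected_vocab):
        raise ValueError(
            f"tensor 'token_embd.weight' has wrong shape; "
            f"expected {hidden}, {expected_vocab}, "
            f"got {actual_shape[0]}, {actual_shape[1]}"
        )

# Matches the declared vocab: loads fine.
check_embedding_shape(65269, 4096, (4096, 65269))

# The Seed-X-RM-7B case: weights carry 65272 rows, metadata says 65269,
# so the loader raises exactly the kind of error quoted above.
try:
    check_embedding_shape(65269, 4096, (4096, 65272))
except ValueError as e:
    print(e)
```

A three-token discrepancy like this usually points at special tokens present in the weights but missing from the tokenizer config the converter read.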
The RM version is for reward modeling in the reinforcement-learning phase and is NOT designed for typical generative use, so llama.cpp failing to load it seems a fair consequence (IMHO llama.cpp didn't take these training-only auxiliary models into consideration, as it's a simple AI-on-edge deployment framework rather than a full-fledged monstrosity like vLLM).
Any chance to redo the GGUFs for this model and fix the issue? Someone mentioned that adding a special token map helps: https://huggingface.co/ByteDance-Seed/Seed-X-Instruct-7B/discussions/1#687f2fcc49f1113567075e9a
Would really like to try out this model in llama.cpp...
If somebody clones it under a slightly different name (e.g. Seed-X-Instruct-7B-fix) and adds the file, I'll be happy to quant it - I strongly prefer having an upstream repo for nontrivial changes.
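For anyone attempting that clone: per the linked discussion, the fix amounts to dropping a `special_tokens_map.json` into the model directory before re-running the conversion. A minimal sketch - the repo name comes from the comment above, and the token strings are common Llama-style defaults I'm assuming here, so check the linked discussion for the exact map this model actually needs:

```python
import json
from pathlib import Path

# Local clone of the model repo (hypothetical name suggested above).
repo_dir = Path("Seed-X-Instruct-7B-fix")
repo_dir.mkdir(exist_ok=True)

# ASSUMPTION: Llama-style special tokens; verify against the linked
# discussion before uploading, the real map may differ.
special_tokens = {
    "bos_token": "<s>",
    "eos_token": "</s>",
    "unk_token": "<unk>",
}

(repo_dir / "special_tokens_map.json").write_text(
    json.dumps(special_tokens, indent=2), encoding="utf-8"
)
# Then re-run llama.cpp's convert_hf_to_gguf.py on repo_dir as usual.
```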