https://huggingface.co/Puchify/PuchifyT1-Ultra-4B
It's queued! :D
Awesome that you are continuing to release models aligned using the S.A.F.E framework. They really do set new standards for safe and responsible AI models. It is also highly appreciated that, based on the README.md metadata, you are now releasing them under the openrail license.
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#PuchifyT1-Ultra-4B-GGUF for quants to appear.
Static quants completed successfully and are available at https://huggingface.co/mradermacher/PuchifyT1-Ultra-4B-GGUF/tree/main
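In case it is useful, here is a minimal sketch of pulling one of the static quants and loading it locally. The Q4_K_M filename is an assumption based on the usual naming scheme in these repos, and llama-cpp-python is just one of several ways to load a GGUF:

```python
# Minimal sketch (assumptions: the Q4_K_M filename follows the usual naming
# scheme in the repo; llama-cpp-python is only one possible GGUF loader).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant file from the static-quants repo.
model_path = hf_hub_download(
    repo_id="mradermacher/PuchifyT1-Ultra-4B-GGUF",
    filename="PuchifyT1-Ultra-4B.Q4_K_M.gguf",  # assumed filename
)

# Load it and run a quick prompt to confirm the quant works.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Hello, who are you?", max_tokens=64)
print(out["choices"][0]["text"])
```

Any of the other quant sizes in the tree should work the same way; only the filename changes.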
Weighted/imatrix quants failed due to NaNs. This is probably caused by a llama.cpp bug and unrelated to your model or the way we quantize. I will likely retry once https://github.com/ggml-org/llama.cpp/pull/9400 is merged (which should happen within the next hours/days), although it seems unlikely to fix this.
load_imatrix: imatrix dataset='imatrix-training-full-3'
load_imatrix: loaded 252 importance matrix entries from PuchifyT1-Ultra-4B-i1-GGUF/imatrix.dat computed on 318 chunks
prepare_imatrix: have 252 importance matrix entries
llama_model_loader: - type f32: 145 tensors
llama_model_loader: - type f16: 253 tensors
================================ Have weights data with 252 entries
[ 1/ 398] output_norm.weight - [ 2560, 1, 1, 1], type = f32, size = 0.010 MB
[ 2/ 398] token_embd.weight - [ 2560, 151936, 1, 1], type = f16,
====== llama_model_quantize_impl: did not find weights for token_embd.weight
converting to q6_K .. size = 741.88 MiB -> 304.28 MiB
[ 3/ 398] blk.0.attn_k.weight - [ 2560, 1024, 1, 1], type = f16, converting to q2_K .. ggml_validate_row_data: found nan value at block 0
ggml_validate_row_data: found nan value at block 0
ggml_validate_row_data: found nan value at block 0
[... the same message repeats for the remaining rows ...]
llama_model_quantize: failed to quantize: quantized data validation failed
main: failed to quantize model from './PuchifyT1-Ultra-4B.gguf'
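For what it is worth, a quick way to double-check that the NaNs are introduced on the quantization side rather than already being present in the source weights is to scan the float tensors of the f16 GGUF directly. A minimal sketch, assuming the gguf-py GGUFReader API and a local copy of the file from the log above:

```python
# Sketch only: scan a GGUF's float tensors for NaN/Inf values.
# Assumes the gguf-py package (pip install gguf) and the f16 source file
# referenced in the log above.
import numpy as np
from gguf import GGUFReader

reader = GGUFReader("PuchifyT1-Ultra-4B.gguf")

for tensor in reader.tensors:
    data = tensor.data
    # Only float tensors can hold NaNs; already-quantized blocks are skipped.
    if data.dtype in (np.float16, np.float32):
        bad = ~np.isfinite(data)
        if bad.any():
            print(f"{tensor.name}: {int(bad.sum())} non-finite values")
```

If this prints nothing, the f16 weights are clean and the NaNs indeed only appear during quantization, which would support the llama.cpp-bug theory.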