---
license: apache-2.0
base_model: Jinx-org/Jinx-gpt-oss-20b
base_model_relation: quantized
---

In these quantized builds, most tensors are stored in MXFP4 to save space. The two versions differ only in how the expert gate projections (ffn_gate_exps.weight, the gating half of each expert's feed-forward block) are quantized:

  • Q4_1 version (≈12 GB): these tensors are stored at 4 bits per weight plus a per-block scale and offset (about 5 bits per weight overall), so the file is smaller and faster to read, but the experts' outputs can be slightly less precise.
  • Q8_0 version (≈15 GB): these tensors keep 8 bits per weight plus a per-block scale (about 8.5 bits per weight overall), so more detail is preserved, at the cost of a larger file and slightly slower inference; the ≈3 GB gap is worked through below.

All other tensors are quantized identically in both versions.
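
To see where the ≈3 GB gap comes from, here is a back-of-the-envelope estimate in Python. The layer count, expert count, and dimensions are assumed approximate gpt-oss-20b figures, not values read from this repo:

```python
# Rough size estimate for the ffn_gate_exps tensors under each quantization.
# All architecture figures below are assumptions (approximate gpt-oss-20b shapes).
n_layers = 24    # transformer blocks (assumed)
n_experts = 32   # experts per MoE layer (assumed)
d_model = 2880   # hidden size (assumed)
d_ff = 2880      # expert feed-forward size (assumed)

gate_params = n_layers * n_experts * d_model * d_ff  # weights across all gate projections

# GGUF block-quantization storage costs:
# Q4_1: 32 weights share an fp16 scale and fp16 min -> 20 bytes per block = 5.0 bits/weight
# Q8_0: 32 weights share an fp16 scale              -> 34 bytes per block = 8.5 bits/weight
bits_q4_1 = 20 * 8 / 32
bits_q8_0 = 34 * 8 / 32

gib = 1024 ** 3
print(f"gate params: {gate_params / 1e9:.1f} B")
print(f"Q4_1 gates:  {gate_params * bits_q4_1 / 8 / gib:.1f} GiB")
print(f"Q8_0 gates:  {gate_params * bits_q8_0 / 8 / gib:.1f} GiB")
print(f"difference:  {gate_params * (bits_q8_0 - bits_q4_1) / 8 / gib:.1f} GiB")
```

With these assumed figures the gate tensors hold roughly 6.4 B weights, and the Q8_0-over-Q4_1 overhead comes out near 2.6 GiB, consistent with the ≈12 GB versus ≈15 GB file sizes.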
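
To check which quantization each tensor actually uses in a downloaded file, you can read it with the gguf Python package that ships with llama.cpp (pip install gguf). A minimal sketch; the filename is a placeholder for whichever .gguf you fetched:

```python
from gguf import GGUFReader  # pip install gguf

# Placeholder path: substitute the file you actually downloaded.
reader = GGUFReader("jinx-gpt-oss-20b.gguf")

# Print the quantization type and shape of every expert gate tensor.
for tensor in reader.tensors:
    if "ffn_gate_exps" in tensor.name:
        print(tensor.name, tensor.tensor_type.name, tensor.shape)
```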