---
license: llama3.1
language:
  - en
  - ja
base_model:
  - tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5
base_model_relation: quantized
---

## Swallow-8B-it-v05-gguf-q6_k-mixed-v1

- **Quantization Type:** Mixed Precision (q5_K, q6_K, q8_0)
- **Bits Per Weight (BPW):** 7.13

## Swallow-8B-it-v05-gguf-q6_k-mixed-v2

- **Quantization Type:** Mixed Precision (q6_K, q8_0)
- **Bits Per Weight (BPW):** 7.50

## Swallow-8B-it-v05-gguf-q8_0-mixed-v1

- **Quantization Type:** Mixed Precision (bf16, q4_K, q5_K, q6_K, q8_0)
- **Bits Per Weight (BPW):** 8.01

## Swallow-8B-it-v05-gguf-q8_0-mixed-v2

- **Quantization Type:** Mixed Precision (bf16, q5_K, q6_K, q8_0)
- **Bits Per Weight (BPW):** 9.31

## Swallow-8B-it-v05-gguf-q8_0-mixed-v3

- **Quantization Type:** Mixed Precision (bf16, q6_K, q8_0)
- **Bits Per Weight (BPW):** 11.44

## Swallow-8B-it-v05-gguf-q8_0-mixed-v4

- **Quantization Type:** Mixed Precision (bf16, q8_0)
- **Bits Per Weight (BPW):** 13.38
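As a rule of thumb, the BPW figures above translate to on-disk weight size as parameters × BPW / 8 bytes. A minimal sketch of that estimate (the 8.0e9 parameter count is an assumption for this 8B model, and real GGUF files carry extra metadata, so actual sizes will differ slightly):

```python
def estimate_size_gib(n_params: float, bpw: float) -> float:
    """Approximate weight size in GiB: n_params * bpw bits -> bytes -> GiB."""
    return n_params * bpw / 8 / 2**30

# BPW values taken from the variant list above;
# ~8.0e9 parameters is an assumed round figure for an 8B model.
variants = {
    "q6_k-mixed-v1": 7.13,
    "q6_k-mixed-v2": 7.50,
    "q8_0-mixed-v1": 8.01,
    "q8_0-mixed-v2": 9.31,
    "q8_0-mixed-v3": 11.44,
    "q8_0-mixed-v4": 13.38,
}

for name, bpw in variants.items():
    print(f"{name}: ~{estimate_size_gib(8.0e9, bpw):.1f} GiB")
```

Pick the largest variant whose estimated size (plus KV-cache headroom) fits your available RAM/VRAM.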