gpt-oss-20B Chess Analysis (GGUF)

  • Quantization: q8_0
  • Converted with: python llama.cpp/convert_hf_to_gguf.py gpt-oss-20b-hf --outfile gpt-oss-20b-q8_0.gguf --outtype q8_0
  • Intended for chess analysis workloads with llama.cpp-compatible runtimes.
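Since the card does not specify a prompt format, a minimal sketch of how a chess position might be fed to a llama.cpp-compatible runtime is shown below. The helper name and prompt wording are illustrative assumptions, not part of the model card.

```python
# Minimal sketch: wrap a chess position (FEN) in an analysis prompt.
# The prompt wording and helper name are assumptions; the model card
# does not document a required prompt format.

def build_analysis_prompt(fen: str, question: str = "Evaluate this position.") -> str:
    """Wrap a FEN string in a simple instruction for the model."""
    if len(fen.split()) != 6:
        raise ValueError("expected a full 6-field FEN string")
    return (
        "You are a chess analysis engine.\n"
        f"Position (FEN): {fen}\n"
        f"Task: {question}\n"
        "Answer with the best move in SAN and a short justification."
    )

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
prompt = build_analysis_prompt(start)
```

The resulting string could then be passed to a runtime such as `llama-cli -m gpt-oss-20b-q8_0.gguf -p "$PROMPT"`.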
Model details

  • Format: GGUF
  • Model size: 20.9B params
  • Architecture: gpt-oss
  • Downloads last month: 623

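A rough memory requirement can be estimated from the parameter count on the card. In llama.cpp, q8_0 packs weights in blocks of 32 int8 values plus one fp16 scale, i.e. 34 bytes per 32 weights. The sketch below is a back-of-envelope estimate under that assumption; actual usage adds KV cache and activation memory.

```python
# Back-of-envelope size estimate for the q8_0 weights.
# Assumes llama.cpp's q8_0 block layout: 32 int8 weights + 1 fp16 scale
# per block (34 bytes per 32 weights, i.e. 8.5 bits/weight).
params = 20.9e9                     # parameter count from the model card
bytes_per_weight = 34 / 32          # q8_0 block layout
gib = params * bytes_per_weight / 2**30
print(f"~{gib:.1f} GiB for weights alone")   # ≈ 20.7 GiB
```

This lines up with the roughly 21 GB file size one would expect for an 8-bit 20.9B-parameter GGUF.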


Dataset used to train HillPhelmuth/gpt-oss-20B-chess-analysis-GGUF