GPT-OSS-ZhTW-Thinking-MXFP4-MOE-GGUF

A specialized language model optimized for thinking in Traditional Chinese (Taiwanese Mandarin).

This is a quantized GGUF version of the GPT-OSS-ZhTW-Thinking model.

Converted with llama.cpp (release b6316).
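
To fetch a specific quantization file programmatically, the huggingface_hub library can download individual GGUF files. A minimal sketch, assuming `pip install huggingface_hub`; the repo id and filename below are placeholders, not values confirmed by this card:

```python
# Minimal download sketch using huggingface_hub.
# repo_id and filename are placeholders; substitute the actual values
# listed in this repository's file browser.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="FreeSEED-AI/GPT-OSS-ZhTW-Thinking-MXFP4-MOE-GGUF",  # placeholder repo id
    filename="gpt-oss-zhtw-thinking-mxfp4.gguf",                 # placeholder filename
)
print(f"GGUF file saved to: {local_path}")
```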

🌟 Key Features

  • Native Taiwanese Mandarin Thinking: Reasoning traces default to Traditional Chinese, with thinking patterns optimized for the language
  • Enhanced Cultural Understanding: Deep comprehension of Taiwanese cultural contexts, idioms, and social nuances
  • GPT-based Architecture: Standard GPT-OSS transformer architecture fine-tuned for zh-TW applications

📊 Model Specifications

  • Model Size: 117B total parameters (120B-class MoE)
  • Architecture: GPT-OSS Mixture-of-Experts transformer
  • Quantization: MXFP4 (MoE weights), packaged as GGUF
  • Base Model: hydaitw/gpt-oss-120b-mandarin-thinking
  • Training: Fine-tuned for Traditional Chinese (zh-TW)

🚀 Usage

This model can be served with vLLM or SGLang, both of which expose OpenAI-compatible HTTP endpoints; minimal sketches for querying a server and for local inference follow.
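
Once a vLLM or SGLang server is up, any OpenAI-compatible client can query it. A minimal client sketch, assuming the server listens on `localhost:8000` and that the served model name below matches your deployment; both are assumptions, not values from this card:

```python
# Minimal OpenAI-compatible client sketch; base_url and model name are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # local server, no auth

resp = client.chat.completions.create(
    model="gpt-oss-zhtw-thinking",  # placeholder served-model name
    messages=[{"role": "user", "content": "請用繁體中文解釋量子糾纏。"}],
)
print(resp.choices[0].message.content)
```

For quick local testing without a server, llama-cpp-python (Python bindings for llama.cpp, the tool used for this conversion) can load the GGUF file directly. The filename is again a placeholder:

```python
# Local inference sketch via llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="gpt-oss-zhtw-thinking-mxfp4.gguf",  # placeholder filename
    n_ctx=4096,       # context window; raise if your hardware allows
    n_gpu_layers=-1,  # offload all layers to GPU when available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "請用繁體中文介紹台灣夜市文化。"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```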

πŸ“ License

This model is released under the Apache 2.0 License.

🤝 Contributing

We welcome contributions and feedback! Please open an issue or submit a pull request if you have suggestions for improvements.


Made with ❤️ by FreeSEED-AI
