GPT-OSS-ZhTW-Thinking-MXFP4-MOE-GGUF
A specialized language model optimized for thinking in Traditional Chinese (Taiwanese Mandarin).
This is a quantized GGUF version of the GPT-OSS-ZhTW-Thinking model, converted with llama.cpp release b6316.
Key Features
- Native Taiwanese Mandarin Thinking: Reasoning and chain-of-thought output defaults to Traditional Chinese
- Enhanced Cultural Understanding: Deep comprehension of Taiwanese cultural contexts, idioms, and social nuances
- GPT-based Architecture: Standard GPT-OSS transformer architecture fine-tuned for zh-TW applications
Model Specifications
- Model Size: 120B parameters
- Architecture: GPT-based MoE transformer
- Base Model: openai/gpt-oss-120b
- Quantization: MXFP4 MoE (GGUF)
- Training: Fine-tuned for Traditional Chinese (zh-TW)
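To check these details against the file itself, the gguf Python package (maintained in the llama.cpp repository) can read GGUF headers without loading the weights. A small sketch, using a placeholder filename for the actual GGUF file in this repository:

```python
# pip install gguf
from gguf import GGUFReader

# Placeholder filename: point this at the GGUF file downloaded from this repo.
reader = GGUFReader("gpt-oss-120b-mandarin-thinking-mxfp4.gguf")

# Metadata keys such as general.architecture and the expert count live here.
print(sorted(reader.fields.keys()))

# Per-tensor quantization types show how the MoE weights are stored.
for tensor in reader.tensors[:10]:
    print(tensor.name, tensor.tensor_type.name, list(tensor.shape))
```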
Usage
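A minimal sketch using the llama-cpp-python bindings; the GGUF filename is a placeholder for the actual file in this repository, and n_ctx / n_gpu_layers should be tuned to your hardware:

```python
from llama_cpp import Llama

# Placeholder filename: use the actual GGUF file from this repo.
llm = Llama(
    model_path="gpt-oss-120b-mandarin-thinking-mxfp4.gguf",
    n_ctx=4096,       # context window; raise if memory allows
    n_gpu_layers=-1,  # offload all layers to GPU; set to 0 for CPU-only
)

response = llm.create_chat_completion(
    messages=[
        # "Please introduce Taiwan's night market culture in Traditional Chinese."
        {"role": "user", "content": "請用繁體中文介紹台灣的夜市文化。"}
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```

The file also works directly with llama-cli or llama-server from llama.cpp (build b6316 or newer); the latter exposes an OpenAI-compatible HTTP API.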
License
This model is released under the Apache 2.0 License.
Contributing
We welcome contributions and feedback! Please open an issue or submit a pull request if you have suggestions for improvements.
Made with ❤️ by FreeSEED-AI