A MoE merge of Sao10K models, just for testing. Have fun.

Update: OK, it's a good model, with all the features of Sao10K's L3 RP models. I hope you enjoy it.

My GGUF repo (only Q4_K_M, I'm lazy): https://huggingface.co/Alsebay/SaoRPM-2x8B-beta-GGUF

Thanks to mradermacher for the GGUF quants: https://huggingface.co/mradermacher/SaoRPM-2x8B-GGUF

Imatrix version: https://huggingface.co/mradermacher/SaoRPM-2x8B-i1-GGUF
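If you want to run one of the GGUF quants locally, here is a minimal sketch using llama-cpp-python. The file name passed in is an assumption based on the usual quant naming; check the actual file listing in the GGUF repos above before downloading.

```python
def run_saorpm(model_path: str, prompt: str, max_tokens: int = 128) -> str:
    """Generate a completion from a local SaoRPM GGUF file.

    model_path is a hypothetical local path to a downloaded quant,
    e.g. something like "SaoRPM-2x8B.Q4_K_M.gguf" (name assumed).
    """
    # Import inside the function so the sketch stays optional if
    # llama-cpp-python isn't installed.
    from llama_cpp import Llama

    # Load the quantized model; n_ctx sets the context window.
    llm = Llama(model_path=model_path, n_ctx=4096)
    out = llm(prompt, max_tokens=max_tokens)
    return out["choices"][0]["text"]
```

Call it with the path to whichever quant you downloaded; for RP use you'll likely want your frontend's usual Llama 3 prompt template rather than a bare string.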

Model size: 13.7B params (BF16, Safetensors)