This is just one of the additional fun experiments we're running on the path to Shisa V2.1.

This model is the DPO of shisa-ai/030-swallow-8b-0.5-base-v2new-sft.
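For those curious what the DPO step looks like in practice, here is a minimal sketch using TRL's `DPOTrainer` on top of the 030 SFT checkpoint. The dataset name, beta, and hyperparameters below are illustrative assumptions, not our actual training recipe.

```python
# Hedged sketch of a DPO run with TRL; hyperparameters and the preference
# dataset are placeholders, not the settings used for this model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "shisa-ai/030-swallow-8b-0.5-base-v2new-sft"  # the SFT checkpoint being tuned
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained(base)

# A preference dataset with "prompt", "chosen", "rejected" columns (hypothetical name).
dataset = load_dataset("your-org/your-preference-data", split="train")

config = DPOConfig(
    output_dir="031-swallow-8b-0.5-base-v2new-dpo405b",
    beta=0.1,                        # assumed DPO temperature
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=5e-7,
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```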

As expected, while 030 got its half-day in the sun, 031 is clearly better, and we actually see slightly higher-than-expected gains from the DPO. Based on these preliminary results, 031 appears to be the strongest 8B-class Japanese model yet. Here are the GPT-4.1-judged Shaberi results of our latest tune:

| Model | Average | ELYZA 100 | JA-MT | Rakuda | Tengu |
|---|---|---|---|---|---|
| 031-swallow-8b-0.5-base-v2new-dpo405b | 8.18 | 8.22 | 8.15 | 9.03 | 7.33 |
| 030-swallow-8b-0.5-base-v2new-sft | 8.00 | 8.00 | 7.82 | 8.98 | 7.19 |
| tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.5 | 7.96 | 7.78 | 8.03 | 8.95 | 7.06 |
| 025-qwen3-8b-v2new-dpo405b | 7.87 | 8.22 | 8.35 | 8.05 | 6.87 |
| 021-qwen3-8b-v2new-sft | 7.79 | 7.98 | 8.10 | 8.05 | 7.04 |
| 024-llama3.1-8b-v2new-dpo405b | 7.60 | 7.58 | 7.57 | 8.25 | 7.01 |
| 022-llama3.1-8b-v2new-sft | 7.48 | 7.62 | 7.53 | 7.90 | 6.88 |
| Qwen/Qwen3-8B | 7.47 | 7.58 | 8.05 | 7.65 | 6.60 |
| meta-llama/Llama-3.1-8B-Instruct | 5.89 | 6.96 | 5.68 | 5.20 | 5.73 |
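If you want to try the model yourself, here is a minimal inference sketch using the standard `transformers` chat API. The prompt and sampling parameters are illustrative assumptions, not the settings used for the evaluations above.

```python
# Minimal sketch: load the model in BF16 and run a single Japanese chat turn.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shisa-ai/031-swallow-8b-0.5-base-v2new-dpo405b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

messages = [{"role": "user", "content": "日本の四季について簡単に説明してください。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```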

Note: the Swallow team released the Swallow 0.5 base model under the Llama 3.3 Community License and the Gemma Terms of Use, and as a derived work this model inherits those licenses. Please carefully read and respect the licenses if they apply to you.

Compute sponsored by Hot Aisle and AMD.

