vllm (pretrained=/root/autodl-tmp/output,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto

|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value|   |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.608|±  |0.0309|
|     |       |strict-match    |     5|exact_match|↑  |0.812|±  |0.0248|

vllm (pretrained=/root/autodl-tmp/Qwen2.5-14B-Gutenberg-1e-Delta,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto

|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value|   |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.564|±  |0.0314|
|     |       |strict-match    |     5|exact_match|↑  |0.836|±  |0.0235|
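The result blocks above are output from EleutherAI's lm-evaluation-harness using its vLLM backend. As a hedged reproduction sketch (the exact invocation is not shown in the card; the model paths and settings are taken verbatim from the headers above), the standard `lm_eval` CLI call matching these settings would look roughly like:

```shell
# Sketch of the lm-evaluation-harness invocation implied by the header line:
# 5-shot GSM8K, 250-sample limit, bfloat16, 2-way tensor parallelism.
# Requires GPUs, the lm_eval package, and a local model at the given path.
lm_eval --model vllm \
  --model_args pretrained=/root/autodl-tmp/output,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16 \
  --tasks gsm8k \
  --num_fewshot 5 \
  --limit 250 \
  --batch_size auto
```

Swapping `pretrained=` for `/root/autodl-tmp/Qwen2.5-14B-Gutenberg-1e-Delta` yields the second result block.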
Safetensors · Model size: 14.8B params · Tensor types: BF16, I8

Model tree for noneUsername/Qwen2.5-14B-Gutenberg-1e-Delta-W8A8-Dynamic-Per-Token: base model Qwen/Qwen2.5-14B, quantized (this model is one of 14 quantized derivatives).
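Since the checkpoint ships INT8 weights with BF16 activations, vLLM can load it directly with the same settings used for the evaluations above. A minimal sketch, assuming the repo id from the model tree and a machine with 2 GPUs (the prompt string is an arbitrary placeholder):

```python
# Hedged sketch: load the W8A8 quantized checkpoint with vLLM and run
# greedy decoding. Requires GPUs and the `vllm` package installed.
from vllm import LLM, SamplingParams

llm = LLM(
    model="noneUsername/Qwen2.5-14B-Gutenberg-1e-Delta-W8A8-Dynamic-Per-Token",
    dtype="bfloat16",          # matches the eval settings above
    tensor_parallel_size=2,    # matches tensor_parallel_size=2 above
    max_model_len=2048,        # matches max_model_len=2048 above
)

params = SamplingParams(temperature=0.0, max_tokens=256)  # greedy decoding
outputs = llm.generate(["Solve: 17 * 23 = ?"], params)
print(outputs[0].outputs[0].text)
```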