
Uploaded Model

Overview

This Qwen2.5 model has been finetuned using Unsloth together with Hugging Face's TRL (Transformer Reinforcement Learning) library. Unsloth's optimized kernels made finetuning roughly 2x faster than a standard Transformers training setup.

Features

  • Optimized for text generation and inference tasks.
  • Lightweight with 4-bit quantization for efficient performance.
  • Compatible with various NLP and code-generation applications.
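The 4-bit quantization mentioned above has a concrete memory consequence for a 14.8B-parameter model. As a rough sketch (weights only, ignoring KV cache and activations; the helper name is illustrative, not part of any library):

```python
def model_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate weight-only memory footprint in gigabytes."""
    return n_params * bits_per_param / 8 / 1e9

N = 14.8e9  # parameter count reported on this model card

bf16 = model_memory_gb(N, 16)  # full-precision BF16 weights
int4 = model_memory_gb(N, 4)   # 4-bit quantized weights
print(f"BF16: {bf16:.1f} GB, 4-bit: {int4:.1f} GB")
# → BF16: 29.6 GB, 4-bit: 7.4 GB
```

This is why the 4-bit variant fits on a single consumer GPU while the BF16 weights alone need roughly 30 GB.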

Acknowledgments

This model leverages Unsloth’s advanced optimization techniques to ensure faster training and inference.

Open LLM Leaderboard Evaluation Results

Detailed and summarized results are available on the Open LLM Leaderboard.

| Metric              | Value (%) |
|---------------------|-----------|
| Average             | 30.81     |
| IFEval (0-shot)     | 66.37     |
| BBH (3-shot)        | 46.48     |
| MATH Lvl 5 (4-shot) | 20.77     |
| GPQA (0-shot)       | 8.84      |
| MuSR (0-shot)       | 9.07      |
| MMLU-PRO (5-shot)   | 33.33     |
Model size: 14.8B parameters (Safetensors, BF16)
Inference Providers

This model is not currently available via any of the supported third-party inference providers, and it is not deployed on the HF Inference API.
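Since no hosted inference provider serves this model, it has to be run locally. Qwen2.5 models use the ChatML prompt format; in practice the tokenizer's `apply_chat_template` handles this, but a minimal hand-rolled sketch of the format looks like this (the helper name is illustrative):

```python
def build_chatml_prompt(messages: list[dict]) -> str:
    """Render a list of {role, content} turns in the ChatML format
    used by Qwen2.5-family models."""
    parts = []
    for m in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Open an assistant turn so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

The resulting string can be tokenized and passed to any local runtime that loads the Safetensors or quantized weights.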

Model tree for Daemontatox/CogitoZ14

  • Base model: Qwen/Qwen2.5-14B
  • Finetunes of the base model: 11 (including this model)
  • Quantizations of this model: 1
