license: apache-2.0
language:
- en
base_model: prithivMLmods/QwQ-LCoT2-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- LCoT
- Qwen
- v2
- llama-cpp
- gguf-my-repo
datasets:
- PowerInfer/QWQ-LONGCOT-500K
- AI-MO/NuminaMath-CoT
- prithivMLmods/Math-Solve
- amphora/QwQ-LongCoT-130K
- prithivMLmods/Deepthink-Reasoning
model-index:
- name: QwQ-LCoT2-7B-Instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: wis-k/instruction-following-eval
split: train
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 55.76
name: averaged accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwQ-LCoT2-7B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: SaylorTwift/bbh
split: test
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 34.37
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwQ-LCoT2-7B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: lighteval/MATH-Hard
split: test
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 22.21
name: exact match
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwQ-LCoT2-7B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
split: train
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 6.38
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwQ-LCoT2-7B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 15.75
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwQ-LCoT2-7B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 37.13
name: accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwQ-LCoT2-7B-Instruct
name: Open LLM Leaderboard
Triangle104/QwQ-LCoT2-7B-Instruct-Q4_K_S-GGUF
This model was converted to GGUF format from prithivMLmods/QwQ-LCoT2-7B-Instruct
using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
Model details:
The QwQ-LCoT2-7B-Instruct is a fine-tuned language model designed for advanced reasoning and instruction-following tasks. It builds on the Qwen2.5-7B base model and has been fine-tuned on chain-of-thought (CoT) reasoning datasets. The model is optimized for tasks requiring logical reasoning, detailed explanations, and multi-step problem-solving, making it well suited to applications such as instruction following, text generation, and complex reasoning.
Quickstart with Transformers
The following code snippet shows how to load the tokenizer and model and how to generate content using apply_chat_template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/QwQ-LCoT2-7B-Instruct"

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build the chat prompt
prompt = "How many r in strawberry."
messages = [
    {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens from the output
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
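Chain-of-thought outputs can run long, so a larger token budget and sampled decoding are often useful. A minimal variation on the generate call above, with illustrative parameter values (assumptions for demonstration, not settings published by the model authors):

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,  # CoT traces often exceed 512 tokens
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.7,      # illustrative value, not author-recommended
    top_p=0.9,            # illustrative value, not author-recommended
)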
Intended Use
The QwQ-LCoT2-7B-Instruct model is designed for advanced reasoning and instruction-following tasks, with specific applications including:
Instruction Following: Providing detailed and step-by-step guidance for a wide range of user queries.
Logical Reasoning: Solving problems that require multi-step thought processes, such as math problems or complex logic-based scenarios.
Text Generation: Crafting coherent, contextually relevant, and well-structured text in response to prompts.
Problem-Solving: Analyzing and addressing tasks that require chain-of-thought (CoT) reasoning, making it well suited to education, tutoring, and technical support.
Knowledge Enhancement: Leveraging reasoning datasets to offer deeper insights and explanations across a wide variety of topics.
Limitations
Data Bias: Because the model is fine-tuned on specific datasets, its outputs may reflect biases inherent in the training data.
Context Limitation: Performance may degrade on tasks that require knowledge or reasoning significantly beyond the model's pretraining or fine-tuning context.
Complexity Ceiling: Although optimized for multi-step reasoning, exceedingly complex or abstract problems may yield incomplete or incorrect outputs.
Dependency on Prompt Quality: The quality and specificity of the user prompt heavily influence the model's responses.
Non-Factual Outputs: Despite being fine-tuned for reasoning, the model can still generate hallucinated or factually inaccurate content, particularly for niche or unverified topics.
Computational Requirements: Running the model effectively requires significant computational resources, particularly when generating long sequences or handling high-concurrency workloads.
Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
brew install llama.cpp
Invoke the llama.cpp server or the CLI.
CLI:
llama-cli --hf-repo Triangle104/QwQ-LCoT2-7B-Instruct-Q4_K_S-GGUF --hf-file qwq-lcot2-7b-instruct-q4_k_s.gguf -p "The meaning to life and the universe is"
Server:
llama-server --hf-repo Triangle104/QwQ-LCoT2-7B-Instruct-Q4_K_S-GGUF --hf-file qwq-lcot2-7b-instruct-q4_k_s.gguf -c 2048
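Once running, llama-server exposes an OpenAI-compatible chat endpoint. A minimal Python sketch for querying it, assuming the default address (localhost:8080) and the requests package (both assumptions, not specified by this repo):

import requests

# Query the llama-server OpenAI-compatible chat endpoint
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # default llama-server address (assumption)
    json={
        "messages": [
            {"role": "user", "content": "How many r in strawberry?"}
        ],
        "max_tokens": 512,
    },
)
print(resp.json()["choices"][0]["message"]["content"])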
Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
git clone https://github.com/ggerganov/llama.cpp
Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag, along with any other hardware-specific flags (for example, LLAMA_CUDA=1 for NVIDIA GPUs on Linux).
cd llama.cpp && LLAMA_CURL=1 make
Step 3: Run inference through the main binary.
./llama-cli --hf-repo Triangle104/QwQ-LCoT2-7B-Instruct-Q4_K_S-GGUF --hf-file qwq-lcot2-7b-instruct-q4_k_s.gguf -p "The meaning to life and the universe is"
or
./llama-server --hf-repo Triangle104/QwQ-LCoT2-7B-Instruct-Q4_K_S-GGUF --hf-file qwq-lcot2-7b-instruct-q4_k_s.gguf -c 2048
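The GGUF file can also be loaded from Python via the llama-cpp-python bindings. A minimal sketch, assuming llama-cpp-python and huggingface-hub are installed (the parameter values shown are illustrative assumptions, not part of this repo):

from llama_cpp import Llama

# Download the quantized GGUF from the Hub and load it
llm = Llama.from_pretrained(
    repo_id="Triangle104/QwQ-LCoT2-7B-Instruct-Q4_K_S-GGUF",
    filename="qwq-lcot2-7b-instruct-q4_k_s.gguf",
    n_ctx=2048,  # context size, matching the server example above
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How many r in strawberry?"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])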