QWQ R1 [Reasoning] Distill 1.5B CoT

QWQ R1 [Reasoning] Distill 1.5B CoT is a fine-tuned language model designed for advanced reasoning and instruction-following tasks. It builds on DeepSeek's R1 distillation of the Qwen2.5 base model (DeepSeek-R1-Distill-Qwen-1.5B) and has been fine-tuned on chain-of-thought (CoT) reasoning datasets. The model is optimized for tasks requiring logical reasoning, detailed explanations, and multi-step problem-solving, making it well suited to applications such as instruction-following, text generation, and complex reasoning.

Quickstart with Transformers

The following code snippet uses apply_chat_template to show you how to load the tokenizer and model and how to generate content.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/QwQ-R1-Distill-1.5B-CoT"

# Load the model in its native precision (BF16) and let accelerate place it
# on the available device(s)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r in strawberry."
messages = [
    {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
    {"role": "user", "content": prompt}
]
# Apply the chat template and append the generation prompt so the model
# answers as the assistant
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
# Tokenize the formatted prompt and move it to the model's device
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate up to 512 new tokens (raise this for longer chains of thought)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Keep only the newly generated tokens, dropping the echoed prompt
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
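
Generation with this model can produce a long chain of thought before the final answer. To watch tokens as they are produced instead of waiting for the full completion, you can pass transformers' TextStreamer utility to generate. This is a generic library feature rather than anything specific to this checkpoint; the sketch below reuses the model, tokenizer, and model_inputs from the snippet above.

from transformers import TextStreamer

# Print decoded tokens to stdout as they are generated; skip_prompt=True
# avoids echoing the input prompt, and skip_special_tokens is forwarded
# to the decoder
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer
)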

Intended Use

QWQ R1 [Reasoning] Distill 1.5B CoT is specifically designed for tasks requiring advanced reasoning, structured thinking, and detailed explanations. Its intended applications include:

  1. Instruction-Following Tasks: Performing step-by-step tasks based on user instructions (a reusable wrapper for this pattern is sketched after this list).
  2. Logical Reasoning: Solving problems that demand multi-step logical processing and inference.
  3. Text Generation: Crafting coherent and contextually appropriate text for various domains.
  4. Educational Tools: Assisting in learning environments, explaining complex topics, or guiding learners through reasoning exercises.
  5. Problem-Solving: Addressing computational or real-world problems requiring chain-of-thought reasoning.
  6. AI-Assisted Decision-Making: Supporting users in making informed decisions with logical analysis.
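
Most of these applications reduce to the same loop: wrap an instruction in the chat template, generate, and decode the new tokens. As a convenience, the quickstart above can be folded into a small helper. This is only a sketch; the ask function, its system prompt, and its defaults are illustrative rather than part of the model's documented interface.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/QwQ-R1-Distill-1.5B-CoT"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

def ask(instruction: str, max_new_tokens: int = 512) -> str:
    """Send a single instruction through the chat template and return the reply."""
    messages = [
        {"role": "system", "content": "You are a helpful assistant. You should think step-by-step."},
        {"role": "user", "content": instruction},
    ]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the tokens generated after the prompt
    return tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)

print(ask("A train travels 120 km in 1.5 hours. What is its average speed?"))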

Limitations

While the model excels in reasoning and explanation tasks, it has certain constraints:

  1. Context Length: Limited ability to process or generate outputs for inputs exceeding its maximum token limit (a simple length guard is sketched after this list).
  2. Domain Knowledge: It may lack detailed expertise in niche domains not covered during training.
  3. Dependence on Training Data: Performance can be influenced by biases or gaps in the datasets it was fine-tuned on.
  4. Real-Time Reasoning: Struggles with tasks requiring dynamic understanding of real-time data or rapidly changing contexts.
  5. Mathematical Precision: May produce errors in calculations or fail to interpret ambiguous mathematical problems.
  6. Factual Accuracy: Occasionally generates incorrect or outdated information when dealing with facts.
  7. Language Nuances: Subtle linguistic or cultural nuances might be misunderstood or misrepresented.
  8. Complex CoT Chains: For extremely lengthy or convoluted reasoning chains, the model may lose track of earlier context or steps.
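
One way to stay inside limitation 1 is to check prompt length before calling generate. The sketch below reuses the tokenizer and the formatted text from the quickstart and assumes tokenizer.model_max_length accurately reflects this checkpoint's context window (for some tokenizers it is a placeholder value, so verify it against the model config).

def fits_context(prompt_text: str, reserve_for_output: int = 512) -> bool:
    """Return True if the prompt leaves room for the requested completion."""
    n_prompt_tokens = len(tokenizer(prompt_text).input_ids)
    return n_prompt_tokens + reserve_for_output <= tokenizer.model_max_length

if not fits_context(text):
    raise ValueError("Prompt is too long for the model's context window.")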
Model Details

Parameter count: 1.78B (Safetensors, BF16)