Model Card for mudit23/Socratic-Qwen2.5-7B-v2

Model Details

Model Description

This model is a 7-billion-parameter causal transformer fine-tuned from Qwen/Qwen2.5-7B-Instruct using Direct Preference Optimization (DPO) and Low-Rank Adaptation (LoRA). It is aligned to generate multi-turn Socratic dialogues rather than direct factual answers, with the explicit aim of fostering higher-order thinking in K-12 learners. A sketch of the training setup follows the details below.

  • Developed by: Mudit Jain
  • Model type: Causal transformer (instruction-tuned)
  • Language(s): English
  • License: MIT
  • Finetuned from: Qwen/Qwen2.5-7B-Instruct
  • Hosted on: Hugging Face (https://huggingface.co/mudit23/Socratic-Qwen2.5-7B-v2)
  • Dataset used: mudit23/class7-socratic-dpo
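
The exact training recipe is not published with this card. The following is a minimal sketch of a DPO-plus-LoRA setup of the kind described above, assuming a recent trl/peft stack and that mudit23/class7-socratic-dpo follows the conventional prompt/chosen/rejected DPO schema; all hyperparameters are illustrative, not the values used for this checkpoint.

from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")

# Assumes the dataset exposes a "train" split with prompt/chosen/rejected columns.
dataset = load_dataset("mudit23/class7-socratic-dpo", split="train")

# Illustrative LoRA configuration: low-rank adapters on the attention projections.
peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(
        output_dir="socratic-dpo",
        beta=0.1,  # illustrative DPO temperature, not the published value
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
    ),
    train_dataset=dataset,
    processing_class=tokenizer,
    peft_config=peft_config,  # trains LoRA adapters instead of full weights
)
trainer.train()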

Uses

Direct Use

  • Generate Socratic prompts to guide students through inquiry-based learning.
  • Integrate into chatbots or tutoring platforms to encourage critical thinking (see the multi-turn sketch below).
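
Because the base model is chat-tuned, a tutoring integration would typically keep a running message history and render it with the tokenizer's chat template. A minimal sketch, assuming the model inherits Qwen2.5's standard chat template; the system prompt is an illustrative placeholder:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mudit23/Socratic-Qwen2.5-7B-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Running dialogue state; the system prompt below is a placeholder, not shipped with the model.
messages = [{"role": "system", "content": "You are a Socratic tutor. Guide with questions, not answers."}]

def tutor_turn(student_message: str) -> str:
    messages.append({"role": "user", "content": student_message})
    # Render the full history with the chat template and generate the next tutor turn.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=150)
    # Decode only the newly generated tokens, then append them to the history.
    reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
    messages.append({"role": "assistant", "content": reply})
    return reply

print(tutor_turn("Why does the wind blow?"))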

Out-of-Scope Use

  • Do not use for tasks requiring direct factual answers or high-stakes decision making.
  • Not suitable for medical, legal, or safety-critical applications.

Bias, Risks, and Limitations

  • Hallucinations: May produce plausible-sounding but incorrect guidance.
  • Bias: Trained on NCERT Science material and general web data; may reflect cultural or curricular biases.
  • Engagement: Designed for educational contexts; performance may degrade outside Class 7 Science or similar domains.

Recommendations

  • Always supervise student interactions; verify any factual claims.
  • Restrict use to guided learning environments with educator oversight.

How to Get Started with the Model

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the fine-tuned checkpoint; device_map="auto" places weights on GPU when available.
tokenizer = AutoTokenizer.from_pretrained("mudit23/Socratic-Qwen2.5-7B-v2")
model = AutoModelForCausalLM.from_pretrained(
    "mudit23/Socratic-Qwen2.5-7B-v2", torch_dtype="auto", device_map="auto"
)
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# The model is aligned to reply with guiding questions rather than a direct explanation.
prompt = "Teacher: What do you think causes wind to blow?"
print(generator(prompt, max_new_tokens=100)[0]["generated_text"])
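
For multi-turn tutoring sessions, the chat-template loop sketched under Direct Use is likely a better fit than raw text completion, since the instruct base model expects chat-formatted input.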

Citation

BibTeX:

@misc{jain2025socraticdpo,
  title        = {Enhancing K-12 Critical Thinking through a DPO-Fine-Tuned Socratic Multi-Agent AI Tutor},
  author       = {Jain, Mudit},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/mudit23/Socratic-Qwen2.5-7B-v2}}
}
