---
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen3-14B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- code
- math
- moe
datasets:
- open-r1/OpenR1-Math-220k
- deepmind/math_dataset
- burtenshaw/tulu-3-sft-personas-code-no-prompt
---

# Ophiuchi-Qwen3-14B-Instruct

> Ophiuchi-Qwen3-14B-Instruct is built on the Qwen3-14B architecture with the Qwen3ForCausalLM backbone. It is instruction-tuned to strengthen mathematical reasoning, code generation, and factual accuracy. Leveraging high-quality datasets and a long-context architecture, the model targets complex reasoning tasks and accurate, structured content generation across multiple domains.

## Key Features

1. Mathematical and Logical Reasoning
   Fine-tuned for step-by-step reasoning, symbolic logic, and advanced mathematics, supporting educational and technical use cases.

2. Code Generation and Understanding
   Optimized for writing, interpreting, and debugging code across various programming languages, including Python, JavaScript, and C++.

3. Factual Integrity and Precision
   Trained on curated, aligned datasets to enhance accuracy and reduce hallucination in fact-based tasks.

4. Long-Context Support
   Handles inputs of up to 128K tokens and generates outputs of up to 8K tokens, enabling detailed, comprehensive responses over extended sequences (see the configuration sketch after this list).

5. Instruction-Tuned Alignment
   Follows multi-step instructions, maintains conversation context, and produces structured outputs across sessions.

6. Multilingual Proficiency
   Supports over 29 languages, including English, Chinese, French, Spanish, Arabic, Russian, Japanese, and Korean, enabling global communication and translation tasks.
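
The 128K-token figure above relies on rotary-embedding scaling in the style of YaRN (see reference 2 below). As a minimal sketch only, assuming this checkpoint follows the base Qwen3-14B convention of a 32,768-token native window extended 4x with YaRN, the override could be passed at load time as shown here; verify the exact values against the checkpoint's `config.json`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Ophiuchi-Qwen3-14B-Instruct"

# Illustrative YaRN rope-scaling override (values assumed from the base
# Qwen3-14B setup, not confirmed for this checkpoint): a 32,768-token
# native window scaled 4x to roughly 131K tokens of context.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    rope_scaling={
        "rope_type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
    },
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Writing the same dictionary into the checkpoint's `config.json` is equivalent. Static rope scaling applies to every input, so it is usually enabled only when long contexts are actually needed.
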
## Quickstart with Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Ophiuchi-Qwen3-14B-Instruct"

# Load the model and tokenizer; device_map="auto" places weights on the available device(s)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the principles of alignment in large language models."

messages = [
    {"role": "system", "content": "You are a highly capable assistant focused on reasoning, coding, and factual precision."},
    {"role": "user", "content": prompt}
]

# Build the chat-formatted prompt and tokenize it
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens from each returned sequence
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
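
For interactive use, the same setup can stream tokens as they are produced. The snippet below is a small sketch that reuses the `model`, `tokenizer`, and `model_inputs` objects from the quickstart above and routes generation through `transformers.TextStreamer`.

```python
from transformers import TextStreamer

# Reuses `model`, `tokenizer`, and `model_inputs` from the quickstart above.
# TextStreamer prints each decoded chunk to stdout as soon as it is generated.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer,
)
```
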
## Intended Use

* Mathematical and symbolic problem solving
* Code generation and explanation
* Structured response generation in JSON, Markdown, or table formats (see the sketch below)
* Long-form technical writing and documentation
* Factual question answering and fact-checking
* Educational assistance across STEM domains
* Multilingual conversation and translation tasks
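
As an illustration of the structured-output use case, the hypothetical snippet below asks for a small JSON object and validates the reply with `json.loads`. It reuses the `model` and `tokenizer` from the quickstart; the prompt wording and schema are placeholders, not a prescribed format.

```python
import json

# Hypothetical structured-output request, reusing `model` and `tokenizer`
# from the quickstart above.
messages = [
    {"role": "system", "content": "Reply with a single JSON object and nothing else."},
    {"role": "user", "content": 'List three prime numbers as {"primes": [...]}.'},
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
reply = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)

try:
    print(json.loads(reply))   # parsed dict if the model complied
except json.JSONDecodeError:
    print("Reply was not valid JSON:\n" + reply)
```
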
## Limitations

* High computational requirements (A100/H100-class GPUs recommended)
* May still produce hallucinated facts on edge cases or adversarial inputs
* Sensitive to poorly structured or ambiguous prompts
* Errors early in a long generation can propagate through the rest of the output
* Less suitable for creative fiction or subjective narrative tasks

## References

1. Analysing Mathematical Reasoning Abilities of Neural Models. arXiv:1904.01557. [https://arxiv.org/pdf/1904.01557](https://arxiv.org/pdf/1904.01557)

2. YaRN: Efficient Context Window Extension of Large Language Models. arXiv:2309.00071. [https://arxiv.org/pdf/2309.00071](https://arxiv.org/pdf/2309.00071)