---
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen3-14B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- code
- math
- moe
datasets:
- open-r1/OpenR1-Math-220k
- deepmind/math_dataset
- burtenshaw/tulu-3-sft-personas-code-no-prompt
---

![Asd.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/ELHH6JVzFc9UDreFJS79D.png)

# Ophiuchi-Qwen3-14B-Instruct

> Ophiuchi-Qwen3-14B-Instruct is built on the Qwen3-14B architecture and uses the Qwen3ForCausalLM backbone. It is instruction-tuned to strengthen mathematical reasoning, code generation, and factual accuracy. By combining high-quality datasets with a long-context architecture, the model is designed to solve complex reasoning tasks and generate accurate, structured content across multiple domains.

## Key Features

1. Mathematical and Logical Reasoning
   Fine-tuned to perform step-by-step reasoning, symbolic logic, and advanced mathematics, supporting educational and technical use cases.

2. Code Generation and Understanding
   Optimized for writing, interpreting, and debugging code across various programming languages, including Python, JavaScript, and C++.

3. Factual Integrity and Precision
   Trained on curated and aligned datasets to enhance accuracy and reduce hallucination in fact-based tasks.

4. Long-Context Support
   Handles input contexts of up to 128K tokens and generates outputs of up to 8K tokens, enabling detailed, comprehensive responses over extended sequences.

5. Instruction-Tuned Alignment
   Demonstrates a strong ability to follow multi-step instructions, maintain conversation context, and produce structured outputs across sessions.

6. Multilingual Proficiency
   Supports more than 29 languages, including English, Chinese, French, Spanish, Arabic, Russian, Japanese, and Korean, enabling multilingual conversation and translation tasks.

## Quickstart with Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Ophiuchi-Qwen3-14B-Instruct"

# Load the model and tokenizer; device_map="auto" places weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the principles of alignment in large language models."

messages = [
    {"role": "system", "content": "You are a highly capable assistant focused on reasoning, coding, and factual precision."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated completion remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
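
To exercise the conversation-context capability noted under Key Features, the Quickstart can be extended into a multi-turn exchange by appending the assistant's reply and a new user turn before re-applying the chat template. The snippet below is a minimal sketch that reuses the `model`, `tokenizer`, `messages`, and `response` objects defined above; the follow-up question is illustrative.

```python
# Continue the conversation: append the previous reply and a new user turn.
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "Summarize that explanation as three bullet points."})

# Re-render the full conversation and generate the next turn.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=512)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
follow_up = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(follow_up)
```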

## Intended Use

* Mathematical and symbolic problem solving
* Code generation and explanation
* Structured response generation in JSON, Markdown, or table formats (see the sketch after this list)
* Long-form technical writing and documentation
* Factual question answering and fact-checking
* Educational assistance across STEM domains
* Multilingual conversation and translation tasks
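
For the structured-output use case above, one common pattern is to request a specific JSON schema in the prompt and parse the reply. The sketch below is illustrative only (the schema, prompts, and handling are not part of this model card) and reuses the `model` and `tokenizer` loaded in the Quickstart.

```python
import json

# Illustrative only: ask for a fixed JSON schema, then parse the reply.
# Reuses `model` and `tokenizer` from the Quickstart above.
messages = [
    {"role": "system", "content": "Reply with valid JSON only, no prose."},
    {"role": "user", "content": 'Describe Python in the schema {"name": str, "paradigms": [str], "first_released": int}.'}
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=256)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
raw = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

# Note: if the chat template emits a reasoning block (e.g. <think>...</think>),
# strip it before parsing.
try:
    data = json.loads(raw)  # succeeds when the model followed the schema
    print(data["paradigms"])
except json.JSONDecodeError:
    print("Model output was not valid JSON:", raw)
```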

## Limitations

* High computational requirements (A100/H100-class GPUs recommended)
* May still hallucinate facts in edge cases or on adversarial inputs
* Sensitive to poorly structured or ambiguous prompts
* Errors early in a long output may propagate through the rest of the response
* Less suitable for creative fiction or subjective narrative tasks

## References

1. Analysing Mathematical Reasoning Abilities of Neural Models. arXiv:1904.01557. [https://arxiv.org/pdf/1904.01557](https://arxiv.org/pdf/1904.01557)

2. YaRN: Efficient Context Window Extension of Large Language Models. arXiv:2309.00071. [https://arxiv.org/pdf/2309.00071](https://arxiv.org/pdf/2309.00071)