---
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- Math
- Code
- Qwen
language:
- en
base_model:
- prithivMLmods/Sombrero-Opus-14B-Sm4
---

# **Monocerotis-V838-14B**

> Monocerotis-V838-14B is based on the Qwen 2.5 14B architecture and is designed to enhance the reasoning capabilities of 14B-parameter models. The model is optimized for general-purpose reasoning and question answering, excelling in contextual understanding, logical deduction, and multi-step problem-solving. It has been fine-tuned using a long chain-of-thought reasoning model and specialized datasets to improve comprehension, structured responses, and conversational intelligence.
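Since the card declares `library_name: transformers`, a minimal usage sketch with the `transformers` AutoClasses follows. The repo id `prithivMLmods/Monocerotis-V838-14B` is assumed from the model name and is not confirmed by this card; verify the hosted id before use.

```python
# Minimal quickstart sketch for a Qwen-2.5-style causal LM via transformers.
# NOTE: the repo id below is assumed from the model name; verify before use.
MODEL_ID = "prithivMLmods/Monocerotis-V838-14B"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a completion for a single user prompt."""
    # Imports are kept inside the function so the module can be inspected
    # without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Qwen 2.5 checkpoints ship a chat template; use it for instruction prompts.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```

A 14B checkpoint needs substantial GPU memory at full precision; quantized loading (e.g. via bitsandbytes) is a common alternative for smaller cards.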