Create README.md
---
license: apache-2.0
base_model:
- open-r1/OlympicCoder-7B
- Qwen/Qwen2.5-Coder-7B-Instruct
- microsoft/NextCoder-7B
language:
- en
pipeline_tag: text-generation
tags:
- merge
- programming
- code generation
- code
- codeqwen
- moe
- coding
- coder
- qwen2
- chat
- qwen
- qwen-coder
- mixture of experts
- qwen2moe
- 3X7B Shared.
library_name: transformers
---

(uploading, quants to follow)

<h2>Qwen2.5-3X7B-CoderInstruct-OlympicCoder-MS-Next-Coder-24B-v1</h2>

A coder MOE that combines three top coder models in a Mixture of Experts configuration.

All three models work together to code.

The default config activates 2 of the 3 experts.
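A minimal loading sketch with transformers is below. The repo id is a placeholder (use wherever this merge is hosted), and the `num_experts_per_tok` override assumes the merge exposes the standard Qwen2-MoE config field for how many experts are active per token.

```python
# Minimal loading sketch (assumptions noted below).
# REPO_ID is a placeholder -- swap in the actual repo id for this merge.
# num_experts_per_tok assumes the standard Qwen2-MoE config field controls
# how many of the 3 experts are active per token (card default: 2).
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

REPO_ID = "your-namespace/Qwen2.5-3X7B-CoderInstruct-OlympicCoder-MS-Next-Coder-24B-v1"  # placeholder

config = AutoConfig.from_pretrained(REPO_ID)
config.num_experts_per_tok = 2  # 2 of 3 experts active (card default); set to 3 to activate all

tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(
    REPO_ID,
    config=config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```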

SETTINGS:
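Specific sampler settings are not listed yet. In the meantime, this is a generic chat-template generation call that continues from the loading sketch above; the sampler values are illustrative placeholders, not recommendations from this card.

```python
# Generic generation sketch, continuing from the loading sketch above.
# Sampler values are illustrative placeholders, not settings from this card.
messages = [
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,  # placeholder
    top_p=0.9,        # placeholder
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```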
MODELS in THIS MOE:

https://huggingface.co/open-r1/OlympicCoder-7B

https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct

https://huggingface.co/microsoft/NextCoder-7B

[ more to come ]