Upload README.md

README.md (CHANGED)

@@ -6,7 +6,7 @@ license_link: LICENSE
 license_name: deepseek
 model_creator: DeepSeek
 model_name: Deepseek Coder 33B Instruct
-model_type:
+model_type: deepseek
 prompt_template: 'You are an AI programming assistant, utilizing the Deepseek Coder
   model, developed by Deepseek Company, and you only answer questions related to computer
   science. For politically sensitive questions, security and privacy issues, and other
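The `prompt_template` string above is the metadata that downstream tooling reads to format requests; the hunk cuts it off mid-sentence, and the truncation is left as-is. As a minimal sketch of how a client might apply such a template, assuming the usual `### Instruction:` / `### Response:` turns documented for Deepseek Coder Instruct (the tail is an assumption, not shown in this hunk):

```python
# Sketch: applying the model card's prompt template to a user question.
# The SYSTEM text is taken from the diff above; the Instruction/Response
# framing is assumed from Deepseek Coder Instruct's usual format.
SYSTEM = (
    "You are an AI programming assistant, utilizing the Deepseek Coder model, "
    "developed by Deepseek Company, and you only answer questions related to "
    "computer science."
)

def build_prompt(user_question: str) -> str:
    """Wrap a user question in the instruct template (assumed layout)."""
    return f"{SYSTEM}\n### Instruction:\n{user_question}\n### Response:\n"

print(build_prompt("Write a function that checks whether a string is a palindrome."))
```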
@@ -84,15 +84,8 @@ You are an AI programming assistant, utilizing the Deepseek Coder model, develop
 ```
 
 <!-- prompt-template end -->
-<!-- licensing start -->
-## Licensing
-
-The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
-
-As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
-
-In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [DeepSeek's Deepseek Coder 33B Instruct](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct).
-<!-- licensing end -->
+
+
 <!-- README_AWQ.md-provided-files start -->
 ## Provided files, and AWQ parameters
 
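The `## Provided files, and AWQ parameters` heading opens the section that lists the quantized files; its body falls outside this hunk. For orientation only, a hedged sketch of loading an AWQ checkpoint with `transformers` (the repo id mirrors this model card but is an assumption here; generation settings are placeholders, and `autoawq` must be installed):

```python
# Sketch: loading an AWQ-quantized checkpoint via transformers + autoawq.
# Repo id and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/deepseek-coder-33B-instruct-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```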
@@ -362,9 +355,9 @@ And thank you again to a16z for their generous grant.
 
 ### 1. Introduction of Deepseek Coder
 
-Deepseek Coder
+Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a project-level code corpus with a window size of 16K and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.
 
-- **Massive Training Data**: Trained on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese languages.
+- **Massive Training Data**: Trained from scratch on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese languages.
 
 - **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements.
 
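The "fill-in-the-blank" pre-training task mentioned in the added paragraph is what enables infilling: the model conditions on the code before and after a gap and predicts the missing span. A rough sketch of assembling such a request, with sentinel strings taken from DeepSeek's base-model examples (treat them as assumptions and verify against the tokenizer of the checkpoint you actually load):

```python
# Sketch of fill-in-the-middle (FIM) prompt construction. The sentinel
# strings below follow DeepSeek Coder's published base-model examples;
# check them against your tokenizer's special tokens before relying on them.
FIM_BEGIN = "<｜fim▁begin｜>"
FIM_HOLE = "<｜fim▁hole｜>"
FIM_END = "<｜fim▁end｜>"

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Ask the model to generate the code that belongs between prefix and suffix."""
    return f"{FIM_BEGIN}{prefix}{FIM_HOLE}{suffix}{FIM_END}"

prompt = build_fim_prompt(
    prefix="def quick_sort(arr):\n    if len(arr) <= 1:\n        return arr\n",
    suffix="\n    return quick_sort(left) + [pivot] + quick_sort(right)\n",
)
```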