BenevolenceMessiah committed • 792f838
1 Parent(s): d3f5fac
Update README.md

README.md CHANGED
@@ -52,3 +52,140 @@ or
```
./llama-server --hf-repo BenevolenceMessiah/Yi-Coder-9B-Chat-Instruct-TIES-Q8_0-GGUF --hf-file yi-coder-9b-chat-instruct-ties-q8_0.gguf -c 2048
```
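Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API. The sketch below is an added illustration (not part of the original card): it assumes the server was started with the command above and is listening on llama-server's default port 8080; the prompt and token limit are arbitrary choices.

```python
# Minimal sketch: query a locally running llama-server instance through its
# OpenAI-compatible /v1/chat/completions endpoint (default port 8080 assumed).
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "system", "content": "You are a helpful coding assistant."},
            {"role": "user", "content": "Write a quick sort algorithm in Python."},
        ],
        "max_tokens": 512,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```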
---
base_model:
- 01-ai/Yi-Coder-9B-Chat
- 01-ai/Yi-Coder-9B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [01-ai/Yi-Coder-9B](https://huggingface.co/01-ai/Yi-Coder-9B) as the base.

### Models Merged

The following models were included in the merge:
* [01-ai/Yi-Coder-9B-Chat](https://huggingface.co/01-ai/Yi-Coder-9B-Chat)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: 01-ai/Yi-Coder-9B
    parameters:
      density: 0.5
      weight: 0.5
  - model: 01-ai/Yi-Coder-9B-Chat
    parameters:
      density: 0.5
      weight: 0.5

merge_method: ties
base_model: 01-ai/Yi-Coder-9B
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
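A merge like this can be reproduced by saving the configuration above to a file and running mergekit on it. The snippet below is a hedged sketch added for illustration (not from the original card): it assumes mergekit is installed (`pip install mergekit`) and drives its `mergekit-yaml` command-line tool from Python; the config file name and output directory are arbitrary choices.

```python
# Illustrative sketch: run mergekit on the TIES configuration shown above,
# saved locally as ties-config.yaml.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",                      # CLI entry point installed with mergekit
        "ties-config.yaml",                   # the YAML configuration above, saved to a file
        "./Yi-Coder-9B-Chat-Instruct-TIES",   # output directory for the merged model
        "--cuda",                             # optional: use a GPU for the merge
    ],
    check=True,
)
```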
<picture>
  <img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="120px">
</picture>

<p align="center">
  <a href="https://github.com/01-ai">GitHub</a> •
  <a href="https://discord.gg/hYUwWddeAu">Discord</a> •
  <a href="https://twitter.com/01ai_yi">Twitter</a> •
  <a href="https://github.com/01-ai/Yi-1.5/issues/2">WeChat</a>
  <br/>
  <a href="https://arxiv.org/abs/2403.04652">Paper</a> •
  <a href="https://01-ai.github.io/">Tech Blog</a> •
  <a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">FAQ</a> •
  <a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">Learning Hub</a>
</p>
118 |
+
|
119 |
+
# Intro
|
120 |
+
|
121 |
+
Yi-Coder is a series of open-source code language models that delivers state-of-the-art coding performance with fewer than 10 billion parameters.
|
122 |
+
|
123 |
+
Key features:
|
124 |
+
- Excelling in long-context understanding with a maximum context length of 128K tokens.
|
125 |
+
- Supporting 52 major programming languages:
|
126 |
+
```bash
|
127 |
+
'java', 'markdown', 'python', 'php', 'javascript', 'c++', 'c#', 'c', 'typescript', 'html', 'go', 'java_server_pages', 'dart', 'objective-c', 'kotlin', 'tex', 'swift', 'ruby', 'sql', 'rust', 'css', 'yaml', 'matlab', 'lua', 'json', 'shell', 'visual_basic', 'scala', 'rmarkdown', 'pascal', 'fortran', 'haskell', 'assembly', 'perl', 'julia', 'cmake', 'groovy', 'ocaml', 'powershell', 'elixir', 'clojure', 'makefile', 'coffeescript', 'erlang', 'lisp', 'toml', 'batchfile', 'cobol', 'dockerfile', 'r', 'prolog', 'verilog'
|
128 |
+
```
|
129 |
+
|
130 |
+
For model details and benchmarks, see [Yi-Coder blog](https://01-ai.github.io/) and [Yi-Coder README](https://github.com/01-ai/Yi-Coder).
|
131 |
+
|
132 |
+
<p align="left">
|
133 |
+
<img src="https://github.com/01-ai/Yi/blob/main/assets/img/coder/yi-coder-calculator-demo.gif?raw=true" alt="demo1" width="500"/>
|
134 |
+
</p>
|
135 |
+
|
136 |
+
# Models

| Name               | Type | Context Length | Download |
|--------------------|------|----------------|----------|
| Yi-Coder-9B-Chat   | Chat | 128K | [Hugging Face](https://huggingface.co/01-ai/Yi-Coder-9B-Chat) • [ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-9B-Chat) • [wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-9B-Chat) |
| Yi-Coder-1.5B-Chat | Chat | 128K | [Hugging Face](https://huggingface.co/01-ai/Yi-Coder-1.5B-Chat) • [ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-1.5B-Chat) • [wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-1.5B-Chat) |
| Yi-Coder-9B        | Base | 128K | [Hugging Face](https://huggingface.co/01-ai/Yi-Coder-9B) • [ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-9B) • [wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-9B) |
| Yi-Coder-1.5B      | Base | 128K | [Hugging Face](https://huggingface.co/01-ai/Yi-Coder-1.5B) • [ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-1.5B) • [wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-1.5B) |
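Any of these checkpoints can also be fetched programmatically. The snippet below is an added illustration (not part of the original card) using the `huggingface_hub` library; the chosen repo id is just one example from the table above.

```python
# Illustrative sketch: download one of the checkpoints listed above from Hugging Face.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="01-ai/Yi-Coder-9B-Chat",  # any repo id from the table above
)
print(f"Model files downloaded to: {local_dir}")
```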
# Benchmarks

As illustrated in the figure below, Yi-Coder-9B-Chat achieved an impressive 23% pass rate in LiveCodeBench, making it the only model with under 10B parameters to surpass 20%. It also outperforms DeepSeekCoder-33B-Ins at 22.3%, CodeGeex4-9B-all at 17.8%, CodeLLama-34B-Ins at 13.3%, and CodeQwen1.5-7B-Chat at 12%.

<p align="left">
  <img src="https://github.com/01-ai/Yi/blob/main/assets/img/coder/bench1.webp?raw=true" alt="bench1" width="1000"/>
</p>
# Quick Start

You can use transformers to run inference with Yi-Coder models (both chat and base versions) as follows:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda"  # the device to load the model onto
model_path = "01-ai/Yi-Coder-9B-Chat"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto").eval()

prompt = "Write a quick sort algorithm."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]

# Render the chat messages with the model's chat template, then tokenize.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=1024,
    eos_token_id=tokenizer.eos_token_id
)
# Strip the prompt tokens so only the newly generated completion remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
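The example above uses the chat checkpoint. As an added illustration (not from the original card), a base checkpoint such as 01-ai/Yi-Coder-9B can be prompted for plain completion without a chat template; the prompt and generation settings below are arbitrary choices.

```python
# Illustrative sketch: plain text completion with the base (non-chat) model.
from transformers import AutoTokenizer, AutoModelForCausalLM

base_path = "01-ai/Yi-Coder-9B"  # base checkpoint; no chat template needed

tokenizer = AutoTokenizer.from_pretrained(base_path)
model = AutoModelForCausalLM.from_pretrained(base_path, device_map="auto").eval()

prompt = "# Python function that checks whether a number is prime\ndef is_prime(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```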
To get up and running with the Yi-Coder series quickly, see the [Yi-Coder README](https://github.com/01-ai/Yi-Coder).