---
language:
- multilingual
tags:
- code-generation
- transformers
license: mit
---

<div align="center">
  <img src="https://raw.githubusercontent.com/Anditty/OASIS/refs/heads/main/Group.svg" width="60%" alt="Kwaipilot" />
</div>

<hr>

# Kwaipilot KwaiCoder-23B-A4B-v1

## 1. Model Details

**Introduction**

KwaiCoder-23B-A4B-v1 is the latest open-source code-completion model developed in-house by the Kwaipilot team at Kuaishou. It was trained with an efficient pipeline proposed by the team that combines model pruning, knowledge distillation, and fine-grained model merging, bringing the cost of training this 23B-parameter (4B active) wide-MoE code-completion model down to roughly 1/30 of that of conventional training while setting new SOTA results across multiple code-related benchmark datasets.
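
The card does not detail the training recipe, but to make the distillation component concrete, here is a minimal sketch of a standard soft-label distillation loss. It is illustrative only; the function name, temperature value, and exact loss form are assumptions, not the team's actual implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student
    token distributions; both logit tensors are (batch, seq_len, vocab)."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T**2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2
```

In practice a term like this is typically mixed with the ordinary next-token cross-entropy on the ground-truth data.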

**Performance**

<div align="center">
  <img src="https://raw.githubusercontent.com/binglinchengxiash0514/Megatron-LM/refs/heads/main/images/WX20250124-114002%402x.png"/>
</div>

<hr>

## 2. Usage

**Code Completion**

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kwaipilot/KwaiCoder-23B-A4B-v1"

# trust_remote_code=True is needed to load the model's custom code.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

text = "#write a quick sort algorithm"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)

# Decode the full sequence and print only the newly generated completion.
print(tokenizer.decode(outputs[0], skip_special_tokens=True)[len(text):])
```
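
Unless the model ships a generation config that overrides it, `generate` decodes greedily here; for more varied completions you can pass sampling options such as `do_sample=True` and `temperature`.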

**Code Insertion**

The prompt below uses fill-in-the-middle (FIM) tags: the text between `<|fim▁begin|>` and `<|fim▁hole|>` is the prefix, the text between `<|fim▁hole|>` and `<|fim▁end|>` is the suffix, and the model generates the missing middle.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kwaipilot/KwaiCoder-23B-A4B-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

# FIM prompt: the model fills in the code that belongs at <|fim▁hole|>.
text = """<|fim▁begin|>def find_longest_substring(s):
    seen = {}
    max_length = 0
    start = 0
<|fim▁hole|>
        if char in seen and seen[char] >= start:
            start = seen[char] + 1
        seen[char] = end
        max_length = max(max_length, end - start + 1)
    return max_length<|fim▁end|>"""

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)

# Print only the generated middle section.
print(tokenizer.decode(outputs[0], skip_special_tokens=True)[len(text):])
```
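
For this prompt the missing middle is the loop header, so a well-behaved completion should resemble `for end, char in enumerate(s):`; the exact output may vary.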

## 3. License

This code repository is licensed under the MIT License.

## 4. BibTeX

```bibtex
@misc{kwaicoder,
  title  = {KwaiCoder: Comprehensive Improvement of Code and Mathematical Abilities},
  author = {Kwaipilot team},
  year   = {2024},
}
```