---
license: other
license_name: codegeex4
license_link: https://huggingface.co/THUDM/codegeex4-all-9b/blob/main/LICENSE
language:
- zh
- en
tags:
- glm
- codegeex
- thudm
inference: false
pipeline_tag: text-generation
base_model: THUDM/codegeex4-all-9b
---

# QuantFactory/codegeex4-all-9b-GGUF

This is a quantized version of [THUDM/codegeex4-all-9b](https://huggingface.co/THUDM/codegeex4-all-9b), created using llama.cpp.
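
Because this repository ships GGUF quantizations, the model can also be run without `transformers`, for example through [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (Python bindings for llama.cpp). Below is a minimal sketch; the quant filename is a placeholder, so substitute whichever `.gguf` file you actually download from this repo:

```python
# Minimal sketch: run a GGUF quant of codegeex4-all-9b via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="codegeex4-all-9b.Q4_K_M.gguf",  # placeholder filename -- use your downloaded quant
    n_ctx=8192,       # context window; the base model supports up to 128K
    n_gpu_layers=-1,  # offload all layers to GPU when built with GPU support; 0 = CPU only
)

# llama-cpp-python applies the chat template stored in the GGUF metadata, if present.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "write a quick sort"}],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```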

# Model Description
## CodeGeeX4: Open Multilingual Code Generation Model

[中文 (Chinese README)](./README_zh.md)

We introduce CodeGeeX4-ALL-9B, the open-source version of the latest CodeGeeX4 model series. It is a multilingual code generation model continually trained from [GLM-4-9B](https://github.com/THUDM/GLM-4), significantly enhancing its code generation capabilities. A single CodeGeeX4-ALL-9B model supports comprehensive functions such as code completion and generation, a code interpreter, web search, function calling, and repository-level code Q&A, covering a wide range of software development scenarios. CodeGeeX4-ALL-9B has achieved highly competitive performance on public benchmarks such as [BigCodeBench](https://huggingface.co/datasets/bigcode/bigcodebench) and [NaturalCodeBench](https://github.com/THUDM/NaturalCodeBench). It is currently the most powerful code generation model with fewer than 10B parameters, even surpassing much larger general-purpose models, and achieves the best balance between inference speed and model performance.

## Get Started

Use `4.39.0 <= transformers <= 4.40.2` to quickly launch [codegeex4-all-9b](https://huggingface.co/THUDM/codegeex4-all-9b):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("THUDM/codegeex4-all-9b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/codegeex4-all-9b",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True
).to(device).eval()

# Build the chat prompt with the model's template and tokenize it in one step.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "write a quick sort"}],
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    return_dict=True,
).to(device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=512)  # generation budget; adjust as needed
    # Strip the prompt tokens so only the newly generated completion is decoded.
    outputs = outputs[:, inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
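
For interactive use you may prefer streaming output instead of waiting for the full completion. A small variant of the snippet above (reusing the `model`, `tokenizer`, and `inputs` defined there) with `transformers`' `TextStreamer`:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
with torch.no_grad():
    model.generate(**inputs, streamer=streamer, max_new_tokens=512)
```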

## Evaluation

| **Model** | **Seq Length** | **HumanEval** | **MBPP** | **NCB** | **LCB** | **HumanEvalFIM** | **CRUXEval-O** |
|-----------------------------|----------------|---------------|----------|---------|---------|------------------|----------------|
| Llama3-70B-instruct         | 8K             | 77.4          | 82.3     | 37.0    | 27.4    | -                | -              |
| DeepSeek Coder 33B Instruct | 16K            | 81.1          | 80.4     | 39.3    | 29.3    | 78.2             | 49.9           |
| Codestral-22B               | 32K            | 81.1          | 78.2     | 46.0    | 35.3    | 91.6             | 51.3           |
| CodeGeeX4-All-9B            | 128K           | 82.3          | 75.7     | 40.4    | 28.5    | 85.0             | 47.1           |

NCB = NaturalCodeBench, LCB = LiveCodeBench.

## Model License

The model weights are licensed under the following [License](./LICENSE).

## Model Citation

If you find our work helpful, please feel free to cite the following paper:

```
@inproceedings{zheng2023codegeex,
  title={CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Benchmarking on HumanEval-X},
  author={Qinkai Zheng and Xiao Xia and Xu Zou and Yuxiao Dong and Shan Wang and Yufei Xue and Zihan Wang and Lei Shen and Andi Wang and Yang Li and Teng Su and Zhilin Yang and Jie Tang},
  booktitle={Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
  pages={5673--5684},
  year={2023}
}
```