apepkuss79 committed
Commit 6cee3a3
1 Parent(s): 36eeaa6

Update README.md

Files changed (1):
  1. README.md +11 -25
README.md CHANGED
@@ -33,35 +33,21 @@ tags:
 
 ## Run with LlamaEdge
 
-- LlamaEdge version: coming soon
-
-<!-- - LlamaEdge version: [v0.14.3](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.14.3) -->
+- LlamaEdge version: [v0.14.3](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.14.3)
 
 - Prompt template
 
-  - Prompt type: `qwen-2.5-coder`
+  - Prompt type: `chatml`
 
   - Prompt string
 
-    - File-Level Code Completion (Fill in the middle)
-
-      ```text
-      <|fim_prefix|>{prefix_code}<|fim_suffix|>{suffix_code}<|fim_middle|>
-      ```
-
-      *Reference: https://github.com/QwenLM/Qwen2.5-Coder?tab=readme-ov-file#3-file-level-code-completion-fill-in-the-middle*
-
-    - Repository-Level Code Completion
-
-      ```text
-      <|repo_name|>{repo_name}
-      <|file_sep|>{file_path1}
-      {file_content1}
-      <|file_sep|>{file_path2}
-      {file_content2}
-      ```
-
-      *Reference: https://github.com/QwenLM/Qwen2.5-Coder?tab=readme-ov-file#4-repository-level-code-completion*
+    ```text
+    <|im_start|>system
+    {system_message}<|im_end|>
+    <|im_start|>user
+    {prompt}<|im_end|>
+    <|im_start|>assistant
+    ```
 
 - Context size: `128000`
 
@@ -71,7 +57,7 @@ tags:
   wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen2.5-Coder-7B-Instruct-Q5_K_M.gguf \
     llama-api-server.wasm \
     --model-name Qwen2.5-Coder-7B-Instruct \
-    --prompt-template qwen-2.5-coder \
+    --prompt-template chatml \
     --ctx-size 128000
   ```
 
@@ -80,7 +66,7 @@ tags:
   ```bash
   wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen2.5-Coder-7B-Instruct-Q5_K_M.gguf \
     llama-chat.wasm \
-    --prompt-template qwen-2.5-coder \
+    --prompt-template chatml \
    --ctx-size 128000
   ```
 
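The `chatml` prompt string that this commit documents is a plain template with two placeholders. A minimal sketch of rendering it, assuming nothing beyond the template text itself (the helper name and sample messages below are illustrative, not part of LlamaEdge):

```python
# Minimal sketch: fill the chatml prompt template from the README.
# `render_chatml` and the example messages are illustrative only.

CHATML_TEMPLATE = (
    "<|im_start|>system\n"
    "{system_message}<|im_end|>\n"
    "<|im_start|>user\n"
    "{prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)


def render_chatml(system_message: str, prompt: str) -> str:
    """Substitute the template's two placeholders and return the full prompt."""
    return CHATML_TEMPLATE.format(system_message=system_message, prompt=prompt)


rendered = render_chatml(
    "You are a helpful coding assistant.",
    "Write a function that reverses a string.",
)
print(rendered)
```

The rendered string ends after the `<|im_start|>assistant` marker, leaving the model to generate the assistant turn.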
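For reference, the file-level fill-in-the-middle prompt that this commit removes from the README can be rendered the same way. A sketch under the same assumptions (the helper name and code snippet are illustrative):

```python
# Minimal sketch: the file-level fill-in-the-middle prompt string that
# this commit drops from the README. Helper name and snippet are
# illustrative only.

FIM_TEMPLATE = "<|fim_prefix|>{prefix_code}<|fim_suffix|>{suffix_code}<|fim_middle|>"


def render_fim(prefix_code: str, suffix_code: str) -> str:
    """Place the code before and after the gap into the FIM template."""
    return FIM_TEMPLATE.format(prefix_code=prefix_code, suffix_code=suffix_code)


fim_prompt = render_fim("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
print(fim_prompt)
```

The prompt ends with `<|fim_middle|>`, so the model's completion is the code that belongs in the gap between prefix and suffix.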