apepkuss79 committed
Commit 76c7081 · verified · 1 parent: 9ccd8f9

Update README.md

Files changed (1): README.md (+4 −4)
README.md CHANGED

````diff
@@ -27,7 +27,7 @@ pipeline_tag: image-text-to-text
 
 - LlamaEdge version: coming soon
 
-<!-- - LlamaEdge version: [v0.13.0](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.13.0) and above
+<!-- - LlamaEdge version: [v0.13.0](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.13.0) and above -->
 
 - Prompt template
 
@@ -40,11 +40,11 @@ pipeline_tag: image-text-to-text
   {user_message}<end_of_turn>
   <start_of_turn>model
   {model_message}<end_of_turn>model
-  ``` -->
+  ```
 
 - Context size: `128000`
 
-<!-- - Run as LlamaEdge service
+- Run as LlamaEdge service
 
   ```bash
   wasmedge --dir .:. --nn-preload default:GGML:AUTO:gemma-3-4b-it-Q5_K_M.gguf \
@@ -62,7 +62,7 @@ pipeline_tag: image-text-to-text
     llama-chat.wasm \
     --prompt-template gemma-instruct \
     --ctx-size 128000
-  ``` -->
+  ```
 
 ## Quantized GGUF Models
````
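For reference, the command fragments visible across the hunks above assemble into roughly the following invocation. This is a sketch built only from the lines the diff shows: the README lines elided between hunks (51–61) may carry additional flags, so treat it as illustrative rather than the canonical command.

```shell
# Sketch assembled from the visible diff fragments only; the elided README
# lines between hunks may add further flags. Skip cleanly if wasmedge is
# not installed in this environment.
command -v wasmedge >/dev/null 2>&1 || { echo "wasmedge not found; skipping"; exit 0; }

wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:gemma-3-4b-it-Q5_K_M.gguf \
  llama-chat.wasm \
  --prompt-template gemma-instruct \
  --ctx-size 128000
```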