### Key Parameters

- `n_ctx`: Context window size in tokens (default: 2048)
- `n_threads`: Number of CPU threads to use (adjust to match your hardware)
- `temperature`: Controls sampling randomness; lower values give more deterministic output
- `top_p`: Nucleus sampling parameter; restricts sampling to the smallest set of tokens whose cumulative probability exceeds `top_p`
- `top_k`: Limits sampling to the `top_k` most likely tokens
- `repeat_penalty`: Penalizes recently generated tokens to reduce repetition
- `max_tokens`: Maximum length of the response in tokens (default: 128; increase for longer answers)
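These parameters are passed as keyword arguments when the model is invoked. A minimal sketch, assuming llama-cpp-python's `Llama` API; the model path, prompt, and parameter values below are illustrative placeholders, not recommendations:

```python
# Sampling parameters from the list above, collected as keyword arguments.
# The values here are illustrative starting points only.
sampling_params = {
    "max_tokens": 256,      # raised from the 128 default for longer answers
    "temperature": 0.7,     # lower = more deterministic
    "top_p": 0.95,          # nucleus sampling cutoff
    "top_k": 40,            # consider only the 40 most likely tokens
    "repeat_penalty": 1.1,  # mildly discourage repetition
}

# Requires llama-cpp-python and a local GGUF model file (path is a placeholder):
# from llama_cpp import Llama
# llm = Llama(model_path="./models/model.gguf", n_ctx=2048, n_threads=8)
# out = llm("Q: What is nucleus sampling? A:", **sampling_params)
# print(out["choices"][0]["text"])
```

Note that `n_ctx` and `n_threads` are set once when the model is loaded, while the sampling parameters can vary per call.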
### Example Usage