mostafaamiri committed · Commit 6242856 · Parent(s): daf97fa

Update README.md

README.md CHANGED
@@ -16,7 +16,7 @@ tags:
 
 
 ```
-./main -t 10 -ngl 32 -m persian_llama_7b.
+./main -t 10 -ngl 32 -m persian_llama_7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: یک شعر حماسی در مورد کوه دماوند بگو ### Input: ### Response:"
 ```
 Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
 

(The Persian prompt in the `-p` argument asks the model to recite an epic poem about Mount Damavand.)

@@ -53,7 +53,7 @@ callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
 
 # Make sure the model path is correct for your system!
 llm = LlamaCpp(
-    model_path="./persian_llama_7b.
+    model_path="./persian_llama_7b.Q4_K_M.gguf",
     n_gpu_layers=n_gpu_layers, n_batch=n_batch,
     callback_manager=callback_manager,
     verbose=True,
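The advice about matching `-t` to the physical core count can be sketched as a small helper that assembles the CLI invocation. This is a hypothetical utility, not part of the repo; note that `os.cpu_count()` reports *logical* CPUs, so halving it is only a rough heuristic for physical cores on hyper-threaded machines.

```python
import os
import shlex

def build_main_cmd(model_path, threads=None, ngl=32, ctx=2048,
                   temp=0.7, repeat_penalty=1.1, prompt=""):
    """Assemble a llama.cpp ./main invocation like the one in the README."""
    if threads is None:
        # Heuristic: assume 2 logical CPUs per physical core (assumption).
        threads = max(1, (os.cpu_count() or 2) // 2)
    args = ["./main", "-t", str(threads), "-ngl", str(ngl),
            "-m", model_path, "--color", "-c", str(ctx),
            "--temp", str(temp), "--repeat_penalty", str(repeat_penalty),
            "-n", "-1", "-p", prompt]
    # shlex.join quotes the prompt safely for a POSIX shell.
    return shlex.join(args)

print(build_main_cmd("persian_llama_7b.Q4_K_M.gguf", threads=10))
```

Keeping the flags in one place like this makes it easy to vary only `threads` per machine while the sampling settings stay fixed.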
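The `LlamaCpp(...)` call in the second hunk references `n_gpu_layers` and `n_batch` defined outside the hunk. A minimal sketch of the full parameter set is below; the concrete values for `n_gpu_layers`, `n_batch`, and `n_ctx` are assumptions chosen to mirror the CLI example, and the `langchain_community` import path assumes a recent LangChain split.

```python
# Hypothetical parameter set mirroring the README's LlamaCpp(...) call.
# n_gpu_layers / n_batch / n_ctx values are assumptions; tune per hardware.
llama_kwargs = {
    "model_path": "./persian_llama_7b.Q4_K_M.gguf",
    "n_gpu_layers": 32,   # GPU offload, like -ngl 32 in the CLI example
    "n_batch": 512,       # prompt-evaluation batch size (assumed value)
    "n_ctx": 2048,        # context window, like -c 2048
    "verbose": True,
}
# With langchain-community and llama-cpp-python installed, this becomes:
#   from langchain_community.llms import LlamaCpp
#   llm = LlamaCpp(callback_manager=callback_manager, **llama_kwargs)
```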