Triangle104 committed · Commit de438d9 · verified · 1 Parent(s): 4b93ba1

Update README.md

Files changed (1): README.md (+28 −0)
This model was converted to GGUF format from [`DavidAU/Llama-3.1-128k-Dark-Planet-Uncensored-8B`](https://huggingface.co/DavidAU/Llama-3.1-128k-Dark-Planet-Uncensored-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/DavidAU/Llama-3.1-128k-Dark-Planet-Uncensored-8B) for more details on the model.
---
This is a Llama 3.1 model with a maximum context of 128k, with additional de-censoring, additional steps to improve generation, and re-mastered source and GGUFs in float 32 (32-bit precision).

This model has been designed to be relatively bulletproof and operates with all parameters, including temperature settings from 0 to 5.

It is an extraordinarily compressed model, with a very low perplexity level (lower than Meta Llama3 Instruct).

It is suited to any writing, fiction, or roleplay activity.

It requires the Llama 3 template and/or the "Command-R" template.
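Most chat front-ends apply the template automatically, but as a reference, here is a hand-rolled sketch of the Llama 3 instruct prompt layout (special tokens per Meta's published Llama 3 chat format; the system/user strings are placeholders):

```python
def llama3_prompt(system: str, user: str) -> str:
    """Build a single-turn prompt string in the Llama 3 instruct format."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# The model's reply is generated after the final assistant header.
print(llama3_prompt("You are a fiction writer.", "Open a mystery novel."))
```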
A context window of at least 8k is suggested; 16k is better, as this model will generate long outputs unless you set a hard limit.
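For a rough sense of what a larger window costs, here is a back-of-the-envelope estimate of KV-cache memory, assuming typical Llama 3.1 8B geometry (32 layers, 8 KV heads, head dimension 128) and 16-bit cache entries — these numbers are illustrative assumptions, not measurements from this repo:

```python
def kv_cache_bytes(n_ctx, n_layers=32, n_kv_heads=8, head_dim=128, bytes_per_val=2):
    """Estimate KV-cache size in bytes for a given context length."""
    # Factor of 2 covers both the key and the value tensors.
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_val * n_ctx

for n_ctx in (8192, 16384):
    gib = kv_cache_bytes(n_ctx) / 2**30
    print(f"{n_ctx:6d} tokens ≈ {gib:.1f} GiB KV cache")
```

Under these assumptions an 8k window needs about 1 GiB of KV cache and a 16k window about 2 GiB, on top of the model weights themselves.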
Likewise, as this is an instruct model, the more instructions in your prompt and/or system prompt, the greater the output quality.

I.e., less "guessing" equals far higher quality.

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
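A minimal invocation sketch: the `--hf-repo` and `--hf-file` values below are placeholders, since the exact quant filenames depend on which GGUF files this repo ships — substitute the real names from the Files tab.

```shell
# Install llama.cpp (macOS/Linux)
brew install llama.cpp

# Run the model straight from the Hub; repo and .gguf filename are
# placeholders for whichever quant you download.
llama-cli --hf-repo Triangle104/Llama-3.1-128k-Dark-Planet-Uncensored-8B-GGUF \
  --hf-file llama-3.1-128k-dark-planet-uncensored-8b-q4_k_m.gguf \
  -c 16384 \
  -p "Write the opening of a noir short story."
```

The `-c 16384` flag sets the 16k context window suggested above.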