Text Generation · Transformers · GGUF · Japanese · llama · japanese-stablelm · causal-lm
TheBloke theamdara committed
Commit 3228f38 · 1 parent: 6475712

Update README.md (#1)


- Update README.md (19e779912e20f7a61b7d08a1bc3ff27df674e338)


Co-authored-by: theamdara <[email protected]>

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -56,7 +56,7 @@ These files were quantised using hardware kindly provided by [Massed Compute](ht
 
 GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
 
-Here is an incomplate list of clients and libraries that are known to support GGUF:
+Here is an incomplete list of clients and libraries that are known to support GGUF:
 
 * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
 * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
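
For context on the client list above, here is a minimal sketch of loading one of this repository's GGUF files through the llama-cpp-python bindings (a Python client built on llama.cpp). The package choice, the file name, and the prompt are illustrative assumptions and are not taken from this commit.

```python
# Minimal sketch, assuming the llama-cpp-python bindings are installed
# (e.g. `pip install llama-cpp-python`). The GGUF file name is hypothetical.
from llama_cpp import Llama

# Load a quantised GGUF model with a modest context window.
llm = Llama(
    model_path="japanese-stablelm-base-beta-7b.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=2048,      # context length in tokens
    n_gpu_layers=0,  # set > 0 to offload layers to the GPU if built with GPU support
)

# Run a short completion and print only the generated text.
output = llm("User: こんにちは\nAssistant:", max_tokens=64, stop=["User:"])
print(output["choices"][0]["text"])
```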