Triangle104 committed
Commit fe24d62 · verified · 1 Parent(s): 8c9cd9a

Update README.md

Files changed (1):
  1. README.md +8 -0

README.md CHANGED
@@ -54,6 +54,14 @@ license: apache-2.0
 This model was converted to GGUF format from [`ValiantLabs/DeepSeek-R1-0528-Qwen3-8B-Esper3`](https://huggingface.co/ValiantLabs/DeepSeek-R1-0528-Qwen3-8B-Esper3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/ValiantLabs/DeepSeek-R1-0528-Qwen3-8B-Esper3) for more details on the model.
 
+---
+Esper 3 is a coding, architecture, and DevOps reasoning specialist built on Qwen 3.
+
+- Finetuned on our DevOps, architecture, and code reasoning data generated with DeepSeek R1!
+- Improved general and creative reasoning to supplement problem-solving and general chat performance.
+- Small model sizes allow running on local desktop and mobile, plus super-fast server inference!
+
+---
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
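A minimal sketch of the brew route the README introduces above. The `--hf-repo` id and `--hf-file` quant filename below are hypothetical placeholders (this commit does not name them); substitute the actual GGUF repo and file for this model.

```shell
# Install llama.cpp via Homebrew (works on macOS and Linux)
brew install llama.cpp

# Run the model straight from the Hugging Face Hub.
# NOTE: repo id and filename are placeholders, not taken from this commit.
llama-cli \
  --hf-repo <user>/DeepSeek-R1-0528-Qwen3-8B-Esper3-GGUF \
  --hf-file <quant-filename>.gguf \
  -p "Write a Dockerfile for a small Flask app."

# Or serve an OpenAI-compatible HTTP endpoint instead:
llama-server \
  --hf-repo <user>/DeepSeek-R1-0528-Qwen3-8B-Esper3-GGUF \
  --hf-file <quant-filename>.gguf \
  -c 2048
```

Both `llama-cli` and `llama-server` ship with the brew formula; `-c` sets the context length.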