Upload README.md with huggingface_hub
README.md CHANGED
@@ -9,76 +9,49 @@ license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
---

- **Base model:** [allenai/OLMo-2-1124-7B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct)
- **License:** [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0)

Quantized with llama.cpp using [all-gguf-same-where](https://huggingface.co/spaces/matrixportal/all-gguf-same-where)
## ✅ Quantized Models Download List

### 🚀 Recommended Quantizations
- **✨ General CPU Use:** [`Q4_K_M`](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q4_k_m.gguf) (Best balance of speed/quality)
- **📱 ARM Devices:** [`Q4_0`](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q4_0.gguf) (Optimized for ARM CPUs)
- **🏆 Maximum Quality:** [`Q8_0`](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q8_0.gguf) (Near-original quality)

### 📦 Full Quantization Options

| 🚀 Download | 🔢 Type | 📝 Notes |
|:---------|:-----|:------|
| [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q2_k.gguf) | `Q2_K` | Basic quantization |
| [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q3_k_s.gguf) | `Q3_K_S` | Small size |
| [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q3_k_m.gguf) | `Q3_K_M` | Balanced quality |
| [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q3_k_l.gguf) | `Q3_K_L` | Better quality |
| [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q4_0.gguf) | `Q4_0` | Fast on ARM |
| [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q4_k_s.gguf) | `Q4_K_S` | Fast, recommended |
| [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q4_k_m.gguf) | `Q4_K_M` ⭐ | Best balance |
| [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q5_0.gguf) | `Q5_0` | Good quality |
| [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q5_k_s.gguf) | `Q5_K_S` | Balanced |
| [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q5_k_m.gguf) | `Q5_K_M` | High quality |
| [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q6_k.gguf) | `Q6_K` 🏆 | Very good quality |
| [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q8_0.gguf) | `Q8_0` ⚡ | Fast, best quality |
| [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-f16.gguf) | `F16` | Maximum accuracy |

💡 **Tip:** Use `F16` for maximum precision when quality is critical.
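To grab one of these files from the command line, here is a minimal sketch using the `huggingface-cli` tool that ships with `huggingface_hub` (assuming a recent version that provides the `download` command):

```bash
# Fetch a single quantized file from this repo into the current directory
# (install the CLI first: pip install -U "huggingface_hub[cli]")
huggingface-cli download ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF \
  olmo-2-1124-7b-instruct-q4_k_m.gguf --local-dir .
```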
---

# 🚀 Applications and Tools for Locally Quantized LLMs

## 🖥️ Desktop Applications

| Application | Description | Download Link |
|-----------------|----------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
| **Llama.cpp** | A fast and efficient inference engine for GGUF models. | [GitHub Repository](https://github.com/ggml-org/llama.cpp) |
| **Ollama** | A streamlined solution for running LLMs locally (see the example below this table). | [Website](https://ollama.com/) |
| **AnythingLLM** | An AI-powered knowledge management tool. | [GitHub Repository](https://github.com/Mintplex-Labs/anything-llm) |
| **Open WebUI** | A user-friendly web interface for running local LLMs. | [GitHub Repository](https://github.com/open-webui/open-webui) |
| **GPT4All** | A user-friendly desktop chat application supporting various LLMs, compatible with GGUF models for local, offline interactions. | [GitHub Repository](https://github.com/nomic-ai/gpt4all) |
| **LM Studio** | A desktop application designed to run and manage local LLMs, supporting the GGUF format. | [Website](https://lmstudio.ai/) |
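Of the tools above, Ollama can pull GGUF quants straight from the Hugging Face Hub; a minimal sketch, assuming a recent Ollama build with `hf.co/...` repo support (the `:Q4_K_M` tag selects the quantization from the table above):

```bash
# Pull and chat with a quant from this repo directly
ollama run hf.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF:Q4_K_M
```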
---
## 📱 Mobile Applications

| Application | Description | Download Link |
|-------------------|----------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
| **ChatterUI** | A simple and lightweight LLM app for mobile devices. | [GitHub Repository](https://github.com/Vali-98/ChatterUI) |
| **Maid** | Mobile Artificial Intelligence Distribution, for running AI models on mobile devices. | [GitHub Repository](https://github.com/Mobile-Artificial-Intelligence/maid) |
| **PocketPal AI** | A mobile AI assistant powered by local models. | [GitHub Repository](https://github.com/a-ghorbani/pocketpal-ai) |
| **Layla** | A flexible platform for running various AI models on mobile devices. | [Website](https://www.layla-network.ai/) |
---

## 🎨 Image Generation Applications

| Application | Description | Download Link |
|-------------------------------------|----------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
| **Stable Diffusion** | An open-source AI model for generating images from text. | [GitHub Repository](https://github.com/CompVis/stable-diffusion) |
| **Stable Diffusion WebUI** | A web application providing access to Stable Diffusion models via a browser interface. | [GitHub Repository](https://github.com/AUTOMATIC1111/stable-diffusion-webui) |
| **Local Dream** | Android Stable Diffusion with Snapdragon NPU acceleration; also supports CPU inference. | [GitHub Repository](https://github.com/xororz/local-dream) |
| **Stable-Diffusion-Android (SDAI)** | An open-source AI art application for Android devices, enabling digital art creation. | [GitHub Repository](https://github.com/ShiftHackZ/Stable-Diffusion-Android) |

---
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF
This model was converted to GGUF format from [`allenai/OLMo-2-1124-7B-Instruct`](https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct) for more details on the model.

## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF --hf-file olmo-2-1124-7b-instruct-q5_0.gguf -p "The meaning to life and the universe is"
```
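llama-cli also offers an interactive chat mode instead of one-shot completion; a sketch, assuming a build recent enough to support the `-cnv` (conversation) flag:

```bash
# Start an interactive chat session with the same quant
llama-cli --hf-repo ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF --hf-file olmo-2-1124-7b-instruct-q5_0.gguf -cnv
```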
### Server:
```bash
llama-server --hf-repo ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF --hf-file olmo-2-1124-7b-instruct-q5_0.gguf -c 2048
```
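Once running, llama-server exposes an OpenAI-compatible HTTP API (on port 8080 by default); a minimal request sketch, with a placeholder prompt and sampling settings:

```bash
# Send a chat request to the local llama-server instance
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "temperature": 0.7
  }'
```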
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
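Note that newer llama.cpp checkouts have deprecated the Makefile build in favor of CMake; an equivalent sketch for those versions (the `LLAMA_CURL` CMake option mirrors the Make flag):

```bash
# CMake-based build for recent llama.cpp checkouts
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```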
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF --hf-file olmo-2-1124-7b-instruct-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF --hf-file olmo-2-1124-7b-instruct-q5_0.gguf -c 2048
```