ysn-rfd committed on
Commit 1d64246 · verified · 1 Parent(s): c5095ac

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +44 -71
README.md CHANGED
@@ -9,76 +9,49 @@ license: apache-2.0
  pipeline_tag: text-generation
  tags:
  - llama-cpp
- - matrixportal
- ---
-
- - **Base model:** [allenai/OLMo-2-1124-7B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct)
- - **License:** [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0)
-
- Quantized with llama.cpp using [all-gguf-same-where](https://huggingface.co/spaces/matrixportal/all-gguf-same-where)
-
- ## ✅ Quantized Models Download List
-
- ### 🔍 Recommended Quantizations
- - **✨ General CPU Use:** [`Q4_K_M`](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q4_k_m.gguf) (Best balance of speed/quality)
- - **📱 ARM Devices:** [`Q4_0`](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q4_0.gguf) (Optimized for ARM CPUs)
- - **🏆 Maximum Quality:** [`Q8_0`](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q8_0.gguf) (Near-original quality)
-
- ### 📦 Full Quantization Options
- | 🚀 Download | 🔢 Type | 📝 Notes |
- |:---------|:-----|:------|
- | [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q2_k.gguf) | ![Q2_K](https://img.shields.io/badge/Q2_K-1A73E8) | Basic quantization |
- | [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q3_k_s.gguf) | ![Q3_K_S](https://img.shields.io/badge/Q3_K_S-34A853) | Small size |
- | [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q3_k_m.gguf) | ![Q3_K_M](https://img.shields.io/badge/Q3_K_M-FBBC05) | Balanced quality |
- | [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q3_k_l.gguf) | ![Q3_K_L](https://img.shields.io/badge/Q3_K_L-4285F4) | Better quality |
- | [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q4_0.gguf) | ![Q4_0](https://img.shields.io/badge/Q4_0-EA4335) | Fast on ARM |
- | [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q4_k_s.gguf) | ![Q4_K_S](https://img.shields.io/badge/Q4_K_S-673AB7) | Fast, recommended |
- | [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q4_k_m.gguf) | ![Q4_K_M](https://img.shields.io/badge/Q4_K_M-673AB7) ⭐ | Best balance |
- | [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q5_0.gguf) | ![Q5_0](https://img.shields.io/badge/Q5_0-FF6D01) | Good quality |
- | [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q5_k_s.gguf) | ![Q5_K_S](https://img.shields.io/badge/Q5_K_S-0F9D58) | Balanced |
- | [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q5_k_m.gguf) | ![Q5_K_M](https://img.shields.io/badge/Q5_K_M-0F9D58) | High quality |
- | [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q6_k.gguf) | ![Q6_K](https://img.shields.io/badge/Q6_K-4285F4) 🏆 | Very good quality |
- | [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-q8_0.gguf) | ![Q8_0](https://img.shields.io/badge/Q8_0-EA4335) ⚡ | Fast, best quality |
- | [Download](https://huggingface.co/ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF/resolve/main/olmo-2-1124-7b-instruct-f16.gguf) | ![F16](https://img.shields.io/badge/F16-000000) | Maximum accuracy |
-
- 💡 **Tip:** Use `F16` for maximum precision when quality is critical.
-
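The download links above all follow Hugging Face's standard `resolve` URL pattern, so a link for any quantization in the table can be assembled from the repo name and the lowercased type. A minimal shell sketch (the `curl` command at the end is illustrative only and is not executed here):

```shell
# Assemble the direct-download URL for any quantization in the table above.
REPO="ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF"
QUANT="q4_k_m"   # any type from the table, lowercased
FILE="olmo-2-1124-7b-instruct-${QUANT}.gguf"
URL="https://huggingface.co/${REPO}/resolve/main/${FILE}"
echo "$URL"
# To download: curl -L -O "$URL"
```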
-
- ---
- # 🚀 Applications and Tools for Locally Quantized LLMs
- ## 🖥️ Desktop Applications
-
- | Application | Description | Link |
- |-----------------|-------------------------------------------------------------------------------------------|------------------------------------------------------------------|
- | **Llama.cpp** | A fast and efficient inference engine for GGUF models. | [GitHub Repository](https://github.com/ggml-org/llama.cpp) |
- | **Ollama** | A streamlined solution for running LLMs locally. | [Website](https://ollama.com/) |
- | **AnythingLLM** | An AI-powered knowledge management tool. | [GitHub Repository](https://github.com/Mintplex-Labs/anything-llm) |
- | **Open WebUI** | A user-friendly web interface for running local LLMs. | [GitHub Repository](https://github.com/open-webui/open-webui) |
- | **GPT4All** | A user-friendly desktop application supporting various LLMs, compatible with GGUF models. | [GitHub Repository](https://github.com/nomic-ai/gpt4all) |
- | **LM Studio** | A desktop application designed to run and manage local LLMs, supporting GGUF format. | [Website](https://lmstudio.ai/) |
- | **GPT4All Chat**| A chat application compatible with GGUF models for local, offline interactions. | [GitHub Repository](https://github.com/nomic-ai/gpt4all) |
-
- ---
-
- ## 📱 Mobile Applications
-
- | Application | Description | Link |
- |-------------------|--------------------------------------------------------------------------------------|------------------------------------------------------------------------------|
- | **ChatterUI** | A simple and lightweight LLM app for mobile devices. | [GitHub Repository](https://github.com/Vali-98/ChatterUI) |
- | **Maid** | Mobile Artificial Intelligence Distribution for running AI models on mobile devices. | [GitHub Repository](https://github.com/Mobile-Artificial-Intelligence/maid) |
- | **PocketPal AI** | A mobile AI assistant powered by local models. | [GitHub Repository](https://github.com/a-ghorbani/pocketpal-ai) |
- | **Layla** | A flexible platform for running various AI models on mobile devices. | [Website](https://www.layla-network.ai/) |
-
- ---
-
- ## 🎨 Image Generation Applications
-
- | Application | Description | Link |
- |-------------------------------------|----------------------------------------------------------------------------------------|------------------------------------------------------------------------------|
- | **Stable Diffusion** | An open-source AI model for generating images from text. | [GitHub Repository](https://github.com/CompVis/stable-diffusion) |
- | **Stable Diffusion WebUI** | A web application providing access to Stable Diffusion models via a browser interface. | [GitHub Repository](https://github.com/AUTOMATIC1111/stable-diffusion-webui) |
- | **Local Dream** | Android Stable Diffusion with Snapdragon NPU acceleration. Also supports CPU inference. | [GitHub Repository](https://github.com/xororz/local-dream) |
- | **Stable-Diffusion-Android (SDAI)** | An open-source AI art application for Android devices, enabling digital art creation. | [GitHub Repository](https://github.com/ShiftHackZ/Stable-Diffusion-Android) |
-
  ---
  pipeline_tag: text-generation
  tags:
  - llama-cpp
+ - gguf-my-repo
  ---

+ # ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF
+ This model was converted to GGUF format from [`allenai/OLMo-2-1124-7B-Instruct`](https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+ Refer to the [original model card](https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct) for more details on the model.
+
19
+ ## Use with llama.cpp
+ Install llama.cpp through brew (works on Mac and Linux):
+
+ ```bash
+ brew install llama.cpp
+ ```
+ Invoke the llama.cpp server or the CLI.
+
+ ### CLI:
+ ```bash
+ llama-cli --hf-repo ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF --hf-file olmo-2-1124-7b-instruct-q5_0.gguf -p "The meaning to life and the universe is"
+ ```
+
+ ### Server:
+ ```bash
+ llama-server --hf-repo ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF --hf-file olmo-2-1124-7b-instruct-q5_0.gguf -c 2048
+ ```
+
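Once `llama-server` is running, it exposes an HTTP API (on `http://localhost:8080` by default). A minimal Python sketch of a client request using only the standard library; the prompt and `n_predict` value are arbitrary, and the final `urlopen` call is shown as a comment since it requires a live server:

```python
# Build a request against llama-server's /completion endpoint.
# Assumes the server started by the command above is listening on localhost:8080.
import json
import urllib.request

def build_request(prompt: str, n_predict: int = 64) -> urllib.request.Request:
    payload = {"prompt": prompt, "n_predict": n_predict}
    return urllib.request.Request(
        "http://localhost:8080/completion",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("The meaning to life and the universe is")
# Send with: json.load(urllib.request.urlopen(req))["content"]
print(req.get_full_url())
```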
38
+ Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
+
+ Step 1: Clone llama.cpp from GitHub.
+ ```bash
+ git clone https://github.com/ggerganov/llama.cpp
+ ```
+
+ Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
+ ```bash
+ cd llama.cpp && LLAMA_CURL=1 make
+ ```
+
+ Step 3: Run inference through the main binary.
+ ```bash
+ ./llama-cli --hf-repo ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF --hf-file olmo-2-1124-7b-instruct-q5_0.gguf -p "The meaning to life and the universe is"
+ ```
+ or
+ ```bash
+ ./llama-server --hf-repo ysn-rfd/OLMo-2-1124-7B-Instruct-GGUF --hf-file olmo-2-1124-7b-instruct-q5_0.gguf -c 2048
+ ```