openfree committed
Commit 5e0f863 · verified · 1 Parent(s): a93266c

Upload README.md with huggingface_hub

Files changed (1): README.md (+81 -0)
README.md ADDED

---
base_model: mistralai/Mistral-Small-3.1-24B-Instruct-2503
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
library_name: vllm
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- llama-cpp
- gguf-my-repo
inference: false
extra_gated_description: If you want to learn more about how we process your personal
  data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---

# openfree/Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M-GGUF
This model was converted to GGUF format from [`mistralai/Mistral-Small-3.1-24B-Instruct-2503`](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503) for more details on the model.
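
If you prefer to fetch the quantized file directly rather than letting llama.cpp download it, you can use the Hugging Face CLI. This is a minimal sketch, not part of the original card's instructions; it assumes the `huggingface_hub` package is available via pip.

```bash
# Install the Hugging Face CLI, then download only the Q4_K_M file into the current directory
pip install -U "huggingface_hub[cli]"
huggingface-cli download openfree/Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M-GGUF \
  mistral-small-3.1-24b-instruct-2503-q4_k_m.gguf --local-dir .
```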

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
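
To confirm the install, you can print the build info (`--version` is supported by recent llama.cpp builds; very old ones may lack it):

```bash
llama-cli --version
```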
50
+ Invoke the llama.cpp server or the CLI.
51
+
52
+ ### CLI:
53
+ ```bash
54
+ llama-cli --hf-repo openfree/Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M-GGUF --hf-file mistral-small-3.1-24b-instruct-2503-q4_k_m.gguf -p "The meaning to life and the universe is"
55
+ ```
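
The command above runs with default settings. A variant with a few commonly used llama.cpp flags, as a sketch (the values here are illustrative, not from the original card): `-c` sets the context size, `-n` caps the number of generated tokens, and `-ngl` offloads layers to the GPU when llama.cpp was built with GPU support.

```bash
llama-cli --hf-repo openfree/Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M-GGUF \
  --hf-file mistral-small-3.1-24b-instruct-2503-q4_k_m.gguf \
  -p "The meaning to life and the universe is" \
  -c 4096 -n 256 -ngl 99
```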

### Server:
```bash
llama-server --hf-repo openfree/Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M-GGUF --hf-file mistral-small-3.1-24b-instruct-2503-q4_k_m.gguf -c 2048
```
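
Once the server is up, it listens on port 8080 by default and exposes an OpenAI-compatible chat endpoint. A minimal sketch of querying it with `curl` (the endpoint path and default port reflect current llama.cpp behavior and may differ in older builds):

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "What is the meaning of life?"}
        ],
        "max_tokens": 128
      }'
```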

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
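
Note that newer llama.cpp checkouts have replaced the Makefile with a CMake build; if `make` fails, the rough CMake equivalent is sketched below (`-DLLAMA_CURL=ON` is the CMake counterpart of `LLAMA_CURL=1`; enable CUDA with `-DGGML_CUDA=ON` if needed):
```
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```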

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo openfree/Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M-GGUF --hf-file mistral-small-3.1-24b-instruct-2503-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo openfree/Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M-GGUF --hf-file mistral-small-3.1-24b-instruct-2503-q4_k_m.gguf -c 2048
```
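
When started this way, `llama-server` also serves a small built-in web UI, so with the command above you can open http://localhost:8080 in a browser and chat with the model directly, or query it over HTTP as shown in the `curl` example earlier.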