Update README.md
README.md
CHANGED
@@ -14,7 +14,7 @@ tags:
 
 # Foundation-Sec-8B-Q4_K_M-GGUF Model Card
 
-**This model was quantized from [fdtn-ai/Foundation-Sec-8B](https://huggingface.co/fdtn-ai/Foundation-Sec-8B) to an 4-bit (Q4_K_M) GGUF checkpoint using llama.cpp. It retains the cybersecurity specialization of the original 8-billion-parameter model while reducing the memory footprint from approximately 16GB (BF16) to around
+**This model was quantized from [fdtn-ai/Foundation-Sec-8B](https://huggingface.co/fdtn-ai/Foundation-Sec-8B) to a 4-bit (Q4_K_M) GGUF checkpoint using llama.cpp. It retains the cybersecurity specialization of the original 8-billion-parameter model while reducing the memory footprint from approximately 16GB (BF16) to around 4.92GB (Q4_K_M) for inference.**
 
 ## Model Description
 
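
For reference, a minimal inference sketch using the llama-cpp-python bindings is shown below. The local GGUF filename (`foundation-sec-8b-q4_k_m.gguf`), context size, and prompt are illustrative assumptions, not values taken from this repository.

```python
# Minimal sketch: load the Q4_K_M GGUF checkpoint and run one chat completion.
# Assumes llama-cpp-python is installed (pip install llama-cpp-python) and that
# the quantized file has been downloaded locally under the name used below.
from llama_cpp import Llama

llm = Llama(
    model_path="foundation-sec-8b-q4_k_m.gguf",  # assumed local filename
    n_ctx=4096,        # context window; adjust to available memory
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize the impact of CVE-2021-44228."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

Since the Q4_K_M weights occupy roughly 4.92GB, this should fit on most consumer GPUs or run CPU-only, though actual memory use also depends on the chosen context size.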