Mistral-7b-02 base model was fine-tuned using the RealWorldQA dataset
* This model lacks the conversational prose of Excalibur-7b-DPO and is much "drier" in tone

<b>Requires an additional mmproj file. You have two options for vision functionality (available inside this repo):</b>

1. [Quantized - Limited VRAM Option (197mb)](https://huggingface.co/InferenceIllusionist/Mistral-RealworldQA-v0.2-7b-SFT/resolve/main/mistral-7b-mmproj-v1.5-Q4_1.gguf?download=true)
2. [Unquantized - Premium Option / Best Quality (596mb)](https://huggingface.co/InferenceIllusionist/Mistral-RealworldQA-v0.2-7b-SFT/resolve/main/mmproj-model-f16.gguf?download=true)

Select the gguf file of your choice in [Koboldcpp](https://github.com/LostRuins/koboldcpp/releases/) as usual, then make sure to choose the mmproj file above in the LLaVA mmproj field of the model submenu:

<img src="https://i.imgur.com/x8vqH29.png" width="425"/>
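If you prefer launching Koboldcpp from the command line rather than the GUI, the same model/mmproj pairing can be sketched as below. The gguf filenames are placeholders for whichever quant you downloaded, and a koboldcpp build that accepts the `--mmproj` flag is assumed:

```shell
# Launch koboldcpp with the main model plus the multimodal projector.
# Filenames are placeholders -- substitute the files you actually downloaded.
python koboldcpp.py \
  --model Mistral-RealworldQA-v0.2-7b-SFT-Q4_K_M.gguf \
  --mmproj mistral-7b-mmproj-v1.5-Q4_1.gguf
```

Passing the projector at launch is equivalent to filling in the LLaVA mmproj field shown in the screenshot above.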