Update README.md
README.md (changed)
@@ -21,7 +21,7 @@ inference: true
 library_name: transformers
 ---
 # 🧠 Dolphin-Mistral-24B-Venice-Edition - Fine-tuned by Daemontatox 🐬
-, an instruction-tuned large language model based on the Mistral 24B architecture. The fine-tuning was conducted by **Daemontatox**, leveraging the [Unsloth](https://github.com/unslothai/unsloth) framework for accelerated training and memory efficiency.
+
 ## 📌 Overview
 
 This model is a fine-tuned version of [cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition), an instruction-tuned large language model based on the Mistral 24B architecture. The fine-tuning was conducted by **Daemontatox**, leveraging the [Unsloth](https://github.com/unslothai/unsloth) framework for accelerated training and memory efficiency.
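Since the card declares `library_name: transformers`, the README implies the model loads through the standard Transformers API. Below is a minimal loading sketch for illustration only; the repository id `Daemontatox/Dolphin-Mistral-24B-Venice-Edition`, the bfloat16 precision, and the chat-template usage are assumptions not confirmed by this commit.

```python
# Minimal sketch (assumptions noted in comments): load the fine-tuned model
# with the Transformers API and run a single chat-style generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Daemontatox/Dolphin-Mistral-24B-Venice-Edition"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed precision; a 24B model needs ample GPU memory or quantization
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what instruction tuning does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```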