Darkhn committed
Commit f527299 · verified · 1 Parent(s): 8977d84

Update README.md

Files changed (1)
  1. README.md +53 -51
README.md CHANGED
@@ -1,51 +1,53 @@
- ---
- base_model: mistralai/Mistral-Small-3.1-24B-Instruct-2503
- tags:
- - text-generation-inference
- - transformers
- - unsloth
- - mistral
- - trl
- license: apache-2.0
- language:
- - en
- ---
-
- ![Eurydice 24b Banner](https://cdn-uploads.huggingface.co/production/uploads/652c2a63d78452c4742cd3d3/Hm_tg4s0D6yWmtrTHII32.png)
-
- # Eurydice 24b v2 🧙‍♂️
-
-
- Eurydice 24b v2 is designed to be the perfect companion for multi-role conversations. It demonstrates exceptional contextual understanding and excels in creativity, natural conversation, and storytelling. Built on Mistral 3.1, this model has been trained on a custom dataset specifically crafted to enhance its capabilities.
-
- ## Model Details 📊
-
- - **Developed by:** Aixon Lab
- - **Model type:** Causal Language Model
- - **Language(s):** English (primarily), may support other languages
- - **License:** Apache 2.0
- - **Repository:** https://huggingface.co/aixonlab/Eurydice-24b-v2
-
- ## Quantization
- - **GGUF:** Coming Soon!
-
- ## Model Architecture 🏗️
-
- - **Base model:** mistralai/Mistral-Small-3.1-24B-Instruct-2503
- - **Parameter count:** ~24 billion
- - **Architecture specifics:** Transformer-based language model
-
- ## Intended Use 🎯
- An advanced language model for various natural language processing tasks, including but not limited to text generation (it excels in chat), question-answering, and analysis.
-
- ## Ethical Considerations 🤔
- As a model based on multiple sources, Eurydice 24b v2 may inherit biases and limitations from its constituent models. Users should be aware of potential biases in generated content and use the model responsibly.
-
- ## Performance and Evaluation
- Performance metrics and evaluation results for Eurydice 24b v2 are yet to be determined. Users are encouraged to contribute their findings and benchmarks.
-
- ## Limitations and Biases
- The model may exhibit biases present in its training data and constituent models. It's crucial to critically evaluate the model's outputs and use them in conjunction with human judgment.
-
- ## Additional Information
- For more details on the base model and constituent models, please refer to their respective model cards and documentation.
 
 
 
+ ---
+ base_model_relation: quantized
+ base_model:
+ - aixonlab/Eurydice-24b-v2
+ tags:
+ - text-generation-inference
+ - transformers
+ - unsloth
+ - mistral
+ - trl
+ license: apache-2.0
+ language:
+ - en
+ ---
+
+ ![Eurydice 24b Banner](https://cdn-uploads.huggingface.co/production/uploads/652c2a63d78452c4742cd3d3/Hm_tg4s0D6yWmtrTHII32.png)
+
+ # Eurydice 24b v2 🧙‍♂️
+
+
+ Eurydice 24b v2 is designed to be the perfect companion for multi-role conversations. It demonstrates exceptional contextual understanding and excels in creativity, natural conversation, and storytelling. Built on Mistral 3.1, this model has been trained on a custom dataset specifically crafted to enhance its capabilities.
+
+ ## Model Details 📊
+
+ - **Developed by:** Aixon Lab
+ - **Model type:** Causal Language Model
+ - **Language(s):** English (primarily), may support other languages
+ - **License:** Apache 2.0
+ - **Repository:** https://huggingface.co/aixonlab/Eurydice-24b-v2
+
+ ## Quantization
+ - **GGUF:** Coming Soon!
+
+ ## Model Architecture 🏗️
+
+ - **Base model:** mistralai/Mistral-Small-3.1-24B-Instruct-2503
+ - **Parameter count:** ~24 billion
+ - **Architecture specifics:** Transformer-based language model
+
+ ## Intended Use 🎯
+ An advanced language model for various natural language processing tasks, including but not limited to text generation (it excels in chat), question-answering, and analysis.
+
+ ## Ethical Considerations 🤔
+ As a model based on multiple sources, Eurydice 24b v2 may inherit biases and limitations from its constituent models. Users should be aware of potential biases in generated content and use the model responsibly.
+
+ ## Performance and Evaluation
+ Performance metrics and evaluation results for Eurydice 24b v2 are yet to be determined. Users are encouraged to contribute their findings and benchmarks.
+
+ ## Limitations and Biases
+ The model may exhibit biases present in its training data and constituent models. It's crucial to critically evaluate the model's outputs and use them in conjunction with human judgment.
+
+ ## Additional Information
+ For more details on the base model and constituent models, please refer to their respective model cards and documentation.
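
The updated front matter marks this repository as a quantized derivative of aixonlab/Eurydice-24b-v2. Since the card's Intended Use highlights chat-style text generation, here is a minimal, untested sketch of loading the full-precision upstream checkpoint with the standard transformers API. It assumes the model behaves as a plain text-only causal LM; a Mistral 3.1 derivative may require a recent transformers release, and the generation settings shown are illustrative, not from the commit.

```python
# Minimal sketch (not part of this commit): load aixonlab/Eurydice-24b-v2
# as a standard causal LM and run one chat-formatted generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aixonlab/Eurydice-24b-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~24B parameters: roughly 48 GB in bf16
    device_map="auto",           # shard layers across available devices
)

# The card emphasizes chat, so format the prompt with the chat template.
messages = [{"role": "user", "content": "Tell me a short story."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Once the GGUF files promised under "Quantization" are published, the same model should also be loadable through llama.cpp-compatible runtimes instead.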