Text Generation · Transformers · Safetensors · Japanese · qwen2 · conversational · text-generation-inference
lbourdois committed (verified) · commit c0cf78b · 1 parent: f639177

Improve language tag


Hi! Since the model is multilingual, this PR adds languages other than English to the language tag to improve discoverability. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13 languages.
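For reference, the 13 codes added to the `language` field are ISO 639-3 identifiers; a minimal sketch listing them (the English names in the mapping are my own annotation, not part of the PR):

```python
# The 13 ISO 639-3 codes added to the `language` field in this PR,
# annotated with their English names for reference.
ADDED_LANGUAGES = {
    "zho": "Chinese",
    "eng": "English",
    "fra": "French",
    "spa": "Spanish",
    "por": "Portuguese",
    "deu": "German",
    "ita": "Italian",
    "rus": "Russian",
    "jpn": "Japanese",
    "kor": "Korean",
    "vie": "Vietnamese",
    "tha": "Thai",
    "ara": "Arabic",
}

# Sanity check: each entry is a three-letter lowercase code, the
# ISO 639-3 shape expected by the Hub's language metadata.
assert len(ADDED_LANGUAGES) == 13
assert all(c.isalpha() and c.islower() and len(c) == 3 for c in ADDED_LANGUAGES)
```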

Files changed (1)
  1. README.md +41 −29
README.md CHANGED
@@ -1,30 +1,42 @@
- ---
- library_name: transformers
- license: apache-2.0
- datasets:
- - Manual-Dataset-Creation-Project/Malum-230
- - llm-jp/oasst2-33k-ja
- language:
- - ja
- base_model:
- - Qwen/Qwen2.5-7B
- inference: false
- ---
- 
- # Matsu-7B
- 
- ## Description
- Matsu-7B is a model that was instruction-tuned on the oasst2 and Malum-230, using Qwen2.5-7B as its base model.
- 
- ## Series
- | Variant | Link |
- | --- | --- |
- | Malum-230 | [Manual-Dataset-Creation-Project/Malum-230](https://huggingface.co/datasets/Manual-Dataset-Creation-Project/Malum-230) |
- | Take-7B | [Manual-Dataset-Creation-Project/Take-7B](https://huggingface.co/Manual-Dataset-Creation-Project/Take-7B) |
- 
- ## Contributors
- - [Sudy](https://huggingface.co/sudy-super)
- - [ほーりーふぉっくす](https://huggingface.co/Holy-fox)
- 
- ## Acknowledgments
  We would like to express our gratitude to [VOLTMIND](https://voltmind.jp/) for providing the computational resources used to train this model.
 
+ ---
+ library_name: transformers
+ license: apache-2.0
+ datasets:
+ - Manual-Dataset-Creation-Project/Malum-230
+ - llm-jp/oasst2-33k-ja
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ base_model:
+ - Qwen/Qwen2.5-7B
+ inference: false
+ ---
+ 
+ # Matsu-7B
+ 
+ ## Description
+ Matsu-7B is a model that was instruction-tuned on the oasst2 and Malum-230, using Qwen2.5-7B as its base model.
+ 
+ ## Series
+ | Variant | Link |
+ | --- | --- |
+ | Malum-230 | [Manual-Dataset-Creation-Project/Malum-230](https://huggingface.co/datasets/Manual-Dataset-Creation-Project/Malum-230) |
+ | Take-7B | [Manual-Dataset-Creation-Project/Take-7B](https://huggingface.co/Manual-Dataset-Creation-Project/Take-7B) |
+ 
+ ## Contributors
+ - [Sudy](https://huggingface.co/sudy-super)
+ - [ほーりーふぉっくす](https://huggingface.co/Holy-fox)
+ 
+ ## Acknowledgments
  We would like to express our gratitude to [VOLTMIND](https://voltmind.jp/) for providing the computational resources used to train this model.