lbourdois committed
Commit 2ce8739 · verified · 1 Parent(s): 483acbe

Improve language tag
Hi! As the model is multilingual, this PR adds languages other than English to the language tag to improve discoverability. Note that the README announces 29 languages, but only 13 are explicitly listed, so only those 13 could be added.
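For reference, the 13 tags added in the diff below are ISO 639-3 codes. A minimal sketch of the name-to-code mapping (standard ISO 639-3 assignments; the mapping itself is not part of the original PR):

```python
# ISO 639-3 codes for the 13 languages explicitly listed in the README,
# matching the entries added to the `language:` tag in the diff below.
LANGUAGE_TAGS = {
    "Chinese": "zho",
    "English": "eng",
    "French": "fra",
    "Spanish": "spa",
    "Portuguese": "por",
    "German": "deu",
    "Italian": "ita",
    "Russian": "rus",
    "Japanese": "jpn",
    "Korean": "kor",
    "Vietnamese": "vie",
    "Thai": "tha",
    "Arabic": "ara",
}

assert len(LANGUAGE_TAGS) == 13  # only the explicitly listed languages are tagged
```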

Files changed (1)

README.md +49 -37
README.md CHANGED
@@ -1,38 +1,50 @@
- ---
- base_model: Qwen/Qwen2.5-14B-Instruct
- tags:
- - fluently-lm
- - fluently-sets
- - demo
- - reasoning
- - text-generation-inference
- - transformers
- - unsloth
- - qwen2
- - trl
- - sft
- license: apache-2.0
- language:
- - en
- datasets:
- - fluently-sets/reasoning-1-1k
- pipeline_tag: text-generation
- ---
-
- # Reasoning-1 1K Demo (Finetune of Qwen2.5-14B-IT on Reasoning-1-1k dataset)
-
- ***Q4_K_M GGUF-quant available [here](https://huggingface.co/fluently-sets/reasoning-1-1k-demo-Q4_K_M-GGUF)***
-
- This is SFT-finetune Qwen2.5-14B-IT on Reasoning-1-1K dataset. This is far from a perfect model, its main purpose is to show an example of using the dataset.
-
- - **Base model**: [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
- - **Model type**: [Qwen2ForCausalLM](https://huggingface.co/models?other=qwen2)
- - **Number of parameters**: 14.8B
- - **Precision**: FP16
- - **Training method**: SFT
- - **Training dataset**: [fluently-sets/reasoning-1-1k](https://huggingface.co/datasets/fluently-sets/reasoning-1-1k)
- - **Languages**: English (mostly)
-
- *Trained by Fluently Team ([@ehristoforu](https://huggingface.co/ehristoforu)) with [Unsloth AI](https://github.com/unslothai/unsloth) with love🥰*
-
+ ---
+ base_model: Qwen/Qwen2.5-14B-Instruct
+ tags:
+ - fluently-lm
+ - fluently-sets
+ - demo
+ - reasoning
+ - text-generation-inference
+ - transformers
+ - unsloth
+ - qwen2
+ - trl
+ - sft
+ license: apache-2.0
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ datasets:
+ - fluently-sets/reasoning-1-1k
+ pipeline_tag: text-generation
+ ---
+
+ # Reasoning-1 1K Demo (Finetune of Qwen2.5-14B-IT on Reasoning-1-1k dataset)
+
+ ***Q4_K_M GGUF-quant available [here](https://huggingface.co/fluently-sets/reasoning-1-1k-demo-Q4_K_M-GGUF)***
+
+ This is SFT-finetune Qwen2.5-14B-IT on Reasoning-1-1K dataset. This is far from a perfect model, its main purpose is to show an example of using the dataset.
+
+ - **Base model**: [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
+ - **Model type**: [Qwen2ForCausalLM](https://huggingface.co/models?other=qwen2)
+ - **Number of parameters**: 14.8B
+ - **Precision**: FP16
+ - **Training method**: SFT
+ - **Training dataset**: [fluently-sets/reasoning-1-1k](https://huggingface.co/datasets/fluently-sets/reasoning-1-1k)
+ - **Languages**: English (mostly)
+
+ *Trained by Fluently Team ([@ehristoforu](https://huggingface.co/ehristoforu)) with [Unsloth AI](https://github.com/unslothai/unsloth) with love🥰*
+
  [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
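Since the card tags `transformers` and lists FP16 precision, a minimal usage sketch follows. It assumes the model repo id `fluently-sets/reasoning-1-1k-demo`, inferred from the GGUF link in the README; adjust if the actual id differs.

```python
# Minimal sketch: load the FP16 finetune and run one chat turn with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fluently-sets/reasoning-1-1k-demo"  # assumed repo id, inferred from the GGUF link

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the card lists FP16 precision
    device_map="auto",
)

# Qwen2.5-style chat formatting via the tokenizer's chat template.
messages = [{"role": "user", "content": "Reason step by step: what is 17 * 24?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```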