lbourdois committed
Commit f35a45d · verified · 1 Parent(s): acc9243

Improve language tag


Hi! Since the model is multilingual, this PR adds languages other than English to the `language` tag to improve referencing. Note that the README announces 29 languages, but only 13 are explicitly listed, so I was only able to add those 13.
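For reviewers who want to check the result programmatically: a minimal sketch, assuming the standard `huggingface_hub` model-card API; the repo id below is a hypothetical placeholder, not the actual repository this PR targets.

```python
# Minimal sketch: read the `language` field back out of the card's YAML
# front matter with huggingface_hub.
from huggingface_hub import ModelCard

card = ModelCard.load("zjudai/flowertune-example")  # hypothetical repo id
print(card.data.language)  # e.g. ['zho', 'eng', 'fra', ...] once this PR is merged
```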

Files changed (1)
  1. README.md +40 -26
README.md CHANGED
@@ -1,26 +1,40 @@
- ---
- base_model: Qwen/Qwen2.5-1.5B-Instruct
- tags:
- - peft
- - lora
- - federated-learning
- - flower
- datasets:
- - vicgalle/alpaca-gpt4
- ---
-
- # FlowerTune LoRA Model
-
- This is a LoRA adapter for Qwen/Qwen2.5-1.5B-Instruct fine-tuned with the Flower federated learning framework on a general NLP dataset.
-
- ## Training Details
-
- - Dataset: vicgalle/alpaca-gpt4
- - Training method: Federated LoRA fine-tuning with FlowerTune
- - Framework: Flower
-
- This model is a LoRA adapter fine-tuned on Qwen/Qwen2.5-1.5B-Instruct using the Flower federated learning framework. It was trained on a general NLP dataset (vicgalle/alpaca-gpt4) through distributed learning to improve performance.
-
- ## Links
- - FlowerTune Homepage: [https://huggingface.co/zjudai/FlowerTune](https://huggingface.co/zjudai/FlowerTune)
- - FlowerTune Collection: [https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439](https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439)
+ ---
+ base_model: Qwen/Qwen2.5-1.5B-Instruct
+ tags:
+ - peft
+ - lora
+ - federated-learning
+ - flower
+ datasets:
+ - vicgalle/alpaca-gpt4
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ ---
+
+ # FlowerTune LoRA Model
+
+ This is a LoRA adapter for Qwen/Qwen2.5-1.5B-Instruct fine-tuned with the Flower federated learning framework on a general NLP dataset.
+
+ ## Training Details
+
+ - Dataset: vicgalle/alpaca-gpt4
+ - Training method: Federated LoRA fine-tuning with FlowerTune
+ - Framework: Flower
+
+ This model is a LoRA adapter fine-tuned on Qwen/Qwen2.5-1.5B-Instruct using the Flower federated learning framework. It was trained on a general NLP dataset (vicgalle/alpaca-gpt4) through distributed learning to improve performance.
+
+ ## Links
+ - FlowerTune Homepage: [https://huggingface.co/zjudai/FlowerTune](https://huggingface.co/zjudai/FlowerTune)
+ - FlowerTune Collection: [https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439](https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439)
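Since the card describes a PEFT/LoRA adapter on top of Qwen/Qwen2.5-1.5B-Instruct, here is a minimal sketch of how one might load it, assuming the standard `transformers` + `peft` APIs; the adapter repo id is a hypothetical placeholder for the real repository.

```python
# Minimal sketch: attach the LoRA adapter to its base model and generate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "Qwen/Qwen2.5-1.5B-Instruct"    # base model named in the card
ADAPTER = "zjudai/flowertune-example"  # hypothetical: use the real adapter repo id

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base_model, ADAPTER)  # loads the LoRA weights

inputs = tokenizer("Explain federated learning in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Loading the adapter this way keeps the base weights untouched, which is the usual reason LoRA checkpoints are published as separate repos rather than merged models.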