jmprcp committed · Commit 2ee8a04 · verified · Parent: a6a343c

Update README.md

Files changed (1): README.md (+2 −2)
```diff
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
 ---
-base_model: Unbabel/Tower-Plus-72B
+base_model: Qwen/Qwen2.5-72B
 license: cc-by-nc-sa-4.0
 language:
 - de
@@ -29,7 +29,7 @@ library_name: transformers
 
 # Model Description:
 
-**Tower + 72B** is build on top of Qwen 2.5 72B. The model goes through the Continuous Pretraining (CPT), Instruction Tuning (IT) and Weighted Preference Optimization (WPO). During all these stages we include parallel and multilingual data (covering 22 languages).
+**Tower+ 72B** is build on top of Qwen 2.5 72B. The model goes through the Continuous Pretraining (CPT), Instruction Tuning (IT) and Weighted Preference Optimization (WPO). During all these stages we include parallel and multilingual data (covering 22 languages).
 
 - **Developed by:** Unbabel
 - **Model type:** A 72B parameter model fine-tuned on a mix of _translation-related tasks_ as well as _general instruction-following_ datasets that include reasoning, code instructions, etc.
```