---
base_model: Qwen/Qwen2.5-72B
license: cc-by-nc-sa-4.0
language:
- de
- nl
- is
- es
- fr
- pt
- uk
- hi
- zh
- ru
- cs
- ko
- ja
- it
- en
- da
- pl
- hu
- sv
- 'no'
- ro
- fi
library_name: transformers
---
# Model Description:
**Tower+ 72B** is built on top of Qwen 2.5 72B. The model goes through three training stages: Continuous Pretraining (CPT), Instruction Tuning (IT), and Weighted Preference Optimization (WPO). All of these stages include parallel and multilingual data covering 22 languages.
- **Developed by:** Unbabel
- **Model type:** A 72B parameter model fine-tuned on a mix of _translation-related tasks_ as well as _general instruction-following_ datasets that include reasoning, code instructions, etc.
- **Languages:** German, Spanish, French, Italian, Korean, Dutch, Russian, English, Portuguese (Portugal), Portuguese (Brazilian), Spanish (Latin America), Chinese (Simplified), Chinese (Traditional), Czech, Ukrainian, Hindi, Icelandic, Japanese, Polish, Swedish, Hungarian, Romanian, Danish, Norwegian (Nynorsk), Norwegian (Bokmål), Finnish
- **License:** CC-BY-NC-SA-4.0
- **Context Size:** 131,072 tokens (recommended maximum generation length: 8,192 tokens)
# Intended uses & limitations
Tower is intended for multilingual tasks and is especially strong on translation-related tasks.
Another use case where Tower works well is creating multilingual synthetic data (for the languages it covers). You can do this either by translating instructions and their respective answers, or by asking the model to create an instruction given a document as seed data.
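As an illustration, the hypothetical prompts below sketch both synthetic-data recipes; the exact wording of the prompts is ours, not from the model's documentation, so adapt it to your data. The messages can be fed to either inference setup shown in the Usage section below.
```python
# Illustrative prompts only (hypothetical wording, not official).

# 1) Translate an existing instruction/answer pair into another covered language.
translate_pair = [{
    "role": "user",
    "content": (
        "Translate the following instruction and answer to German.\n"
        "Instruction: Summarize the paragraph below in one sentence.\n"
        "Answer: The paragraph argues that renewable energy adoption is accelerating."
    ),
}]

# 2) Ask the model to write a new instruction (and answer) grounded in a seed document.
seed_document = "Lisbon is the capital and largest city of Portugal."
create_instruction = [{
    "role": "user",
    "content": (
        "Given the document below, write one instruction in Spanish that a user might ask "
        "about it, followed by the answer.\n"
        f"Document: {seed_document}"
    ),
}]
```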
# Usage:
When using the model, make sure your prompt is formatted correctly!
Also, we recommend using vLLM rather than Hugging Face Transformers for inference.
## Prompt Format:
```python
chat_template = "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
with_system_prompt = "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nTranslate: Hello, world! into Portuguese.<|im_end|>\n"
# System prompts are optional.
without_system_prompt = "<|im_start|>user\nTranslate: Hello, world! into Portuguese.<|im_end|>\n"
```
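The same format can also be produced programmatically from the tokenizer that ships with the model, instead of writing the special tokens by hand. A minimal sketch, assuming the tokenizer carries the chat template shown above:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Unbabel/Tower-Plus-72B")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Translate: Hello, world! into Portuguese."},
]

# Render the messages with the model's chat template and append the assistant
# header so the model starts generating the reply.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
# Should match the <|im_start|>/<|im_end|> format above, ending with "<|im_start|>assistant\n".
```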
### Using with vLLM:
```python
# pip install vllm
from vllm import LLM, SamplingParams
sampling_params = SamplingParams(
    best_of=1,
    temperature=0,      # greedy decoding for deterministic translations
    max_tokens=8192,    # recommended maximum generation length
)
llm = LLM(model="Unbabel/Tower-Plus-72B", tensor_parallel_size=4)
messages = [{"role": "user", "content": "Translate the following English source text to Portuguese (Portugal):\nEnglish: Hello world!\nPortuguese (Portugal): "}]
# llm.chat applies the model's chat template to the messages before generating
outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)
# > Olá, mundo!
```
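For online serving instead of offline batch inference, vLLM also provides an OpenAI-compatible server. A minimal sketch, assuming the default host/port and the `openai` Python client; the server command and flags are standard vLLM, but check your installed version:
```python
# Start the server in a separate shell (assumes 4 GPUs, as above):
#   vllm serve Unbabel/Tower-Plus-72B --tensor-parallel-size 4
# Then query it with any OpenAI-compatible client:
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="Unbabel/Tower-Plus-72B",
    messages=[{
        "role": "user",
        "content": (
            "Translate the following English source text to Portuguese (Portugal):\n"
            "English: Hello world!\nPortuguese (Portugal): "
        ),
    }],
    temperature=0,
    max_tokens=8192,
)
print(response.choices[0].message.content)
```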
### Using with Transformers:
```python
# pip install transformers
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="Unbabel/Tower-Plus-72B", device_map="auto")
# The pipeline applies the tokenizer's chat template automatically - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [{"role": "user", "content": "Translate the following English source text to Portuguese (Portugal):\nEnglish: Hello world!\nPortuguese (Portugal): "}]
outputs = pipe(messages, max_new_tokens=256, do_sample=False)
# With chat-style input, "generated_text" holds the full conversation;
# the last message is the assistant's reply.
print(outputs[0]["generated_text"][-1]["content"])
```