Model Card for xTower13B

Model Details

xTower13B is a language model that results from fine-tuning TowerBase for explaining and correcting translation errors.

xTower was fine-tuned on a dataset that includes explanations generated by GPT-4 (prompted with and without references), along with machine translation data from TowerBlocks. We combined all available data to train a single, multilingual model, employing a mixed prompt setting (zero-shot, few-shot) during training. As a result, xTower can handle both referenceless and reference-based k-shot prompts.

Our training hyperparameters and configuration follow those used to train TowerInstruct.
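Since xTower accepts both zero-shot and k-shot prompts, the k-shot case can be expressed as in-context examples prepended as alternating user/assistant turns. The exact few-shot layout used during training is an assumption here, and `build_messages` is an illustrative helper, not part of the released code:

```python
# Sketch: assembling zero-shot vs. k-shot chat messages for xTower.
# The few-shot layout (alternating user/assistant demonstration turns)
# is an assumption; see the "Prompt Format" section for the actual
# instruction text.

def build_messages(query, examples=None):
    """Return a chat-style message list.

    examples: optional list of (user_prompt, assistant_reply) pairs
    prepended as in-context demonstrations (k-shot).
    """
    messages = []
    for user_turn, assistant_turn in (examples or []):
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": query})
    return messages

# Zero-shot: a single user turn.
zero_shot = build_messages("...referenceless prompt...")

# One-shot: one worked example followed by the new query.
one_shot = build_messages(
    "...reference-based prompt...",
    examples=[("...example prompt...", "...example explanation...")],
)
```

The resulting list can be passed to `tokenizer.apply_chat_template` as in the Usage section below.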

  • Developed by: Unbabel, Instituto Superior Técnico, CentraleSupélec (Université Paris-Saclay)
  • Model type: A 13B parameter model fine-tuned on a mix of publicly available, synthetic datasets on translation-related tasks, as well as conversational datasets and code instructions.
  • Language(s) (NLP): English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian
  • License: CC-BY-NC-4.0; Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
  • Finetuned from model: TowerBase

Intended uses & limitations

The intended use of the model is two-fold:

  • Explain translation errors in natural language
  • Correct translation errors by providing a translation suggestion

The model was trained on explanations for the en-de, en-ru, and zh-en language pairs. Although xTower is built on a multilingual base, its performance on unseen language pairs is not guaranteed.

Usage

For a minimal example, check out this notebook, which prompts xTower with vLLM.

Additionally, here's how you can run the model using the pipeline() function from 🤗 Transformers:

# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="sardinelab/xTower13B", device_map="auto")

# Build a ChatML-formatted prompt from a single user message.
messages = [
    {
      "role": "user",
      "content": "your_prompt"
    },
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Greedy decoding; the output contains the prompt followed by the model's reply.
outputs = pipe(prompt, max_new_tokens=1024, do_sample=False)
print(outputs[0]["generated_text"])
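By default the text-generation pipeline returns the prompt together with the completion (alternatively, pass `return_full_text=False` to the pipeline call to receive only the completion). A minimal sketch for recovering just the model's reply; `extract_reply` is an illustrative helper, not part of the released code:

```python
# Sketch: strip the echoed prompt from a text-generation pipeline output.

def extract_reply(generated_text: str, prompt: str) -> str:
    """Return only the portion of the generation that follows the prompt."""
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):]
    return generated_text

# e.g. reply = extract_reply(outputs[0]["generated_text"], prompt)
```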

Bias, Risks, and Limitations

xTower has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).

Prompt Format

xTower was trained using the ChatML prompt template without any system prompt. An example follows:

<|im_start|>user
You are provided with a Source, Translation, Translation quality analysis, and Translation quality score (weak, moderate, good, excellent, best). The Translation quality analysis contain a translation with marked error spans with different levels of severity (minor or major). Additionally, we may provide a **reference translation**. Given this information, generate an explanation for each error and a fully correct translation.

English source: All were wearing avalanche beacons.
German translation: Alle trugen Lawinenschilder.
Translation quality analysis: Alle trugen <error1 severity="major">Lawinenschilder</error1>.
Translation quality score: weak
<|im_end|>
<|im_start|>assistant

After that, the model should generate something like this:

Explanation for error1: The term “Lawinenschilder” translates to “avalanche signs”, which is a significant mistranslation of “avalanche beacons”. Avalanche beacons are specific devices used to locate people trapped under snow after an avalanche, not signs. This error changes the meaning of the sentence and introduces confusion about the context.
Translation correction: Alle trugen Lawinensuchgeräte.
<|im_end|>

That is, the output consists of a list of explanations, one per error, followed by a final translation correction.
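For downstream use, the two parts can be separated programmatically. A minimal sketch, assuming the reply follows the format shown above; `parse_xtower_output` is an illustrative helper, not part of the released code:

```python
import re

def parse_xtower_output(text: str):
    """Split an xTower reply into per-error explanations and the
    corrected translation, assuming the format shown above."""
    # "Explanation for errorN: ..." lines, keyed by the error id.
    explanations = dict(re.findall(r"Explanation for (error\d+):\s*(.*)", text))
    # The final "Translation correction: ..." line.
    match = re.search(r"Translation correction:\s*(.*)", text)
    correction = match.group(1).strip() if match else None
    return explanations, correction

sample = (
    "Explanation for error1: Mistranslation of 'avalanche beacons'.\n"
    "Translation correction: Alle trugen Lawinensuchgeräte."
)
expl, corr = parse_xtower_output(sample)
# expl -> {"error1": "Mistranslation of 'avalanche beacons'."}
# corr -> "Alle trugen Lawinensuchgeräte."
```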

Citation

@article{treviso2024xtower,
  title={xTower: A Multilingual LLM for Explaining and Correcting Translation Errors},
  author={Treviso, Marcos and Guerreiro, Nuno M and Agrawal, Sweta and Rei, Ricardo and Pombal, Jos{\'e} and Vaz, Tania and Wu, Helena and Silva, Beatriz and van Stigt, Daan and Martins, Andr{\'e} FT},
  journal={arXiv preprint arXiv:2406.19482},
  year={2024}
}
