Model Description

The Mistral-Nemo-Instruct-Uz model has been continually pre-trained and instruction-tuned on a mix of publicly available and synthetically constructed Uzbek and English data, preserving its original knowledge while enhancing its capabilities. The model is designed to support a range of natural language processing tasks in Uzbek, such as machine translation, summarization, and dialogue systems, with robust performance across these applications. For performance metrics compared to the base model, see this post.

📊 Performance Comparison:

| Model Name | BLEU Uz-En (One-shot) | BLEU En-Uz (One-shot) | COMET (Uz-En) | COMET (En-Uz) | Uzbek Sentiment Analysis | Uzbek News Classification | MMLU (English) (5-shot) |
|---|---|---|---|---|---|---|---|
| Llama-3.1 8B Instruct | 23.74 | 6.72 | 84.30 | 82.70 | 68.96 | 55.41 | 65.77 |
| Llama-3.1 8B Instruct Uz | 27.42 | 11.58 | 85.63 | 86.53 | 82.42 | 60.84 | 62.78 |
| Mistral 7B Instruct | 7.47 | 0.67 | 68.14 | 45.58 | 62.02 | 47.52 | 61.07 |
| Mistral 7B Instruct Uz | 29.39 | 16.77 | 86.91 | 88.75 | 79.13 | 59.38 | 55.72 |
| Mistral Nemo Instruct | 25.68 | 9.79 | 85.56 | 85.04 | 72.47 | 49.24 | 67.62 |
| Mistral Nemo Instruct Uz | 30.49 | 15.52 | 87.04 | 88.01 | 82.05 | 58.20 | 67.36 |
| Google Translate | 41.18 | 22.98 | 89.16 | 90.67 | — | — | — |

The results show that the Uzbek-optimized models consistently outperform their base counterparts on the FLORES+ Uz-En / En-Uz translation benchmarks (BLEU and COMET), as well as on Uzbek sentiment analysis and news classification. On the MMLU benchmark, which measures general language understanding across multiple English-language tasks, the fine-tuned models show no significant decline. (The base Llama model's MMLU score differs from the officially reported one because of our evaluation method; refer to the links below for evaluation details.)

Looking ahead, these models are early versions. We are actively improving our data curation and fine-tuning methods to deliver even better results in the near future. We also plan to scale up the datasets for both continual pre-training and instruction tuning, and to adapt other strong open-source LLMs to Uzbek. We're eager to see how the Uzbek 🇺🇿 community uses these models and look forward to continuing this work. 🚀

Usage

The model can be used with the following frameworks:

Mistral Inference

Install

It is recommended to use behbudiy/Mistral-Nemo-Instruct-Uz with mistral-inference. For HF transformers code snippets, please keep scrolling.

pip install mistral_inference

Download

from huggingface_hub import snapshot_download
from pathlib import Path

# Local directory where the model files will be stored
mistral_models_path = Path.home().joinpath('mistral_models', 'Nemo-Instruct-Uz')
mistral_models_path.mkdir(parents=True, exist_ok=True)

# Download only the files mistral-inference needs
snapshot_download(repo_id="behbudiy/Mistral-Nemo-Instruct-Uz", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)

Chat

After installing mistral_inference, a mistral-chat CLI command should be available in your environment. You can chat with the model using:

mistral-chat $HOME/mistral_models/Nemo-Instruct-Uz --instruct --max_tokens 256 --temperature 0.35

For example, try:

O'zbek tilida kichik bir she'r yozib bera olasanmi? ("Can you write me a short poem in Uzbek?")

Instruction Following

from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

# Load the tokenizer and model weights downloaded above
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)

# "Can you write me a short poem in Uzbek?"
prompt = "O'zbek tilida kichik bir she'r yozib bera olasanmi?"

completion_request = ChatCompletionRequest(messages=[UserMessage(content=prompt)])

# Apply the chat template and tokenize the request
tokens = tokenizer.encode_chat_completion(completion_request).tokens

# Generate up to 64 new tokens at low temperature
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.35, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

print(result)
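
Transformers

The snippet below is a minimal sketch of equivalent usage with Hugging Face transformers (assuming a recent transformers release with Mistral-Nemo support, plus torch and accelerate installed); the sampling settings mirror the CLI example above rather than anything prescribed by the model card.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "behbudiy/Mistral-Nemo-Instruct-Uz"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# "Can you write me a short poem in Uzbek?"
messages = [{"role": "user", "content": "O'zbek tilida kichik bir she'r yozib bera olasanmi?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Sample up to 256 new tokens at the same low temperature as the CLI example
output = model.generate(input_ids, max_new_tokens=256, temperature=0.35, do_sample=True)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))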

Information on Evaluation Method

To evaluate the translation task, we used the FLORES+ Uz-En / En-Uz datasets, merging the dev and test sets to create a larger evaluation set for each direction. We used the following prompt for one-shot Uz-En evaluation of both the base model and the Uzbek-optimized model (for En-Uz evaluation, we swapped the words "English" and "Uzbek").

  prompt = f'''You are a professional Uzbek-English translator. Your task is to accurately translate the given Uzbek text into English.

  Instructions:
  1. Translate the text from Uzbek to English.
  2. Maintain the original meaning and tone.
  3. Use appropriate English grammar and vocabulary.
  4. If you encounter an ambiguous or unfamiliar word, provide the most likely translation based on context.
  5. Output only the English translation, without any additional comments.

  Example:
  Uzbek: "Bugun ob-havo juda yaxshi, quyosh charaqlab turibdi."
  English: "The weather is very nice today, the sun is shining brightly."

  Now, please translate the following Uzbek text into English:
  "{sentence}"
    '''
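
The card does not include the scoring code itself; the sketch below shows one way the reported BLEU and COMET numbers could be computed from collected translations, assuming the sacrebleu and unbabel-comet packages. The Unbabel/wmt22-comet-da checkpoint is an assumption, since the card does not name the COMET model used.

import sacrebleu
from comet import download_model, load_from_checkpoint

# Toy segments standing in for the merged FLORES+ dev+test data
sources = ["Bugun ob-havo juda yaxshi, quyosh charaqlab turibdi."]
references = ["The weather is very nice today, the sun is shining brightly."]
hypotheses = ["The weather is great today and the sun is shining."]  # model outputs

# Corpus-level BLEU; sacrebleu takes a list of reference streams
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}")

# COMET scores (source, hypothesis, reference) triplets; set gpus=1 if a GPU is available
comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
data = [{"src": s, "mt": h, "ref": r} for s, h, r in zip(sources, hypotheses, references)]
print(f"COMET: {comet_model.predict(data, batch_size=8, gpus=0).system_score:.4f}")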

To assess the model's ability in Uzbek sentiment analysis, we used the risqaliyevds/uzbek-sentiment-analysis dataset, for which we created binary labels (0: Negative, 1: Positive) using the GPT-4o API (see the behbudiy/uzbek-sentiment-analysis dataset). We used the following prompt for the evaluation:

prompt = f'''Given the following text, determine the sentiment as either 'Positive' or 'Negative.' Respond with only the word 'Positive' or 'Negative' without any additional text or explanation.

Text: {text}"
'''
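
The scoring code is likewise not shown; one plausible way to map the model's constrained replies onto the binary labels and compute accuracy (parse_sentiment is a hypothetical helper, and scikit-learn is assumed):

from sklearn.metrics import accuracy_score

def parse_sentiment(response: str) -> int:
    # Map a 'Positive'/'Negative' reply onto the dataset's 1/0 labels
    return 1 if "positive" in response.strip().lower() else 0

# Toy replies standing in for real model outputs
responses = ["Positive", "Negative", "positive."]
gold_labels = [1, 0, 1]
predictions = [parse_sentiment(r) for r in responses]
print(f"Accuracy: {accuracy_score(gold_labels, predictions):.4f}")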

For Uzbek news classification, we used the risqaliyevds/uzbek-zero-shot-classification dataset and asked the model to predict the news category using the following prompt:

prompt = f'''Classify the given Uzbek news article into one of the following categories. Provide only the category number as the answer.

Categories:
0 - Politics (Siyosat)
1 - Economy (Iqtisodiyot)
2 - Technology (Texnologiya)
3 - Sports (Sport)
4 - Culture (Madaniyat)
5 - Health (Salomatlik)
6 - Family and Society (Oila va Jamiyat)
7 - Education (Ta'lim)
8 - Ecology (Ekologiya)
9 - Foreign News (Xorijiy Yangiliklar)

Now classify this article:
"{text}"

Answer (number only):"
'''
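
As with the other tasks, the answer-parsing step is not shown on the card; a small sketch that extracts the predicted category number from each reply (parse_category is a hypothetical helper):

import re

def parse_category(response: str) -> int:
    # Pull the first digit out of the reply; -1 marks an unparseable answer
    match = re.search(r"\d", response)
    return int(match.group()) if match else -1

responses = ["3", "Answer: 7", "Sport"]
print([parse_category(r) for r in responses])  # -> [3, 7, -1]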

MMLU

We used this script.

More

For more details and examples, refer to the base model: https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407
