Trained for 2 epochs on NilanE/ParallelFiction-Ja_En-100k using QLoRA. A CPO tune is in progress.

Input should be 500-1000 tokens long. For deterministic outputs, set 'do_sample = False' when using HF transformers for inference, or otherwise set the temperature to 0.

Prompt format:

Translate this from Japanese to English:
### JAPANESE:
{source_text}
### ENGLISH:
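The settings above can be sketched as a minimal transformers script. The prompt template and the do_sample = False recommendation come from this card; the max_new_tokens value and the build_prompt/translate helper names are illustrative choices, not part of the model's API.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "NilanE/tinyllama-en_ja-translation-v3"

def build_prompt(source_text: str) -> str:
    # Prompt template from this model card.
    return (
        "Translate this from Japanese to English:\n"
        "### JAPANESE:\n"
        f"{source_text}\n"
        "### ENGLISH:\n"
    )

def translate(source_text: str) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(build_prompt(source_text), return_tensors="pt")
    # do_sample=False selects greedy decoding, so outputs are deterministic.
    # max_new_tokens=512 is an assumed cap, not a value from the card.
    outputs = model.generate(**inputs, do_sample=False, max_new_tokens=512)
    # Decode only the tokens generated after the prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Usage: translate("ここに日本語の原文を入れます。")
```

Keeping the prompt construction in its own function makes it easy to verify the template matches the format above before spending time on generation.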

Footnote:

This is an independently developed project. If anyone is interested in sponsoring further research, please contact [email protected]. Questions about model usage can be asked in the discussion tab.

Model size: 1.1B parameters, BF16 (Safetensors)

Model: NilanE/tinyllama-en_ja-translation-v3