How to use `kuleshov-group/e2d2-wmt` with 🤗 Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="kuleshov-group/e2d2-wmt", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("kuleshov-group/e2d2-wmt", trust_remote_code=True, dtype="auto")
```
To use this model, follow the snippet below:

```python
from transformers import AutoModelForMaskedLM

# model_config_overrides = {}  # Use this to optionally override config parameters
model = AutoModelForMaskedLM.from_pretrained(
    "kuleshov-group/e2d2-wmt",
    trust_remote_code=True,
    # **model_config_overrides,
)
```
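Once loaded, the model can be exercised with a quick forward pass. This is a minimal sketch assuming the model's custom remote code follows the standard `AutoModelForMaskedLM` interface (tokenizer, `logits` output); the sample sentence is illustrative, and for actual diffusion-based decoding you should follow the project's own inference code rather than this snippet:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "kuleshov-group/e2d2-wmt"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)
model.eval()

# Encode a sample source sentence (WMT14 is an English-German translation benchmark)
inputs = tokenizer("Hello, world!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Per-token vocabulary logits, shaped (batch, sequence_length, vocab_size)
print(outputs.logits.shape)
```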
See the project site for more details and a link to the paper and code: https://m-arriola.com/e2d2/
```bibtex
@inproceedings{arriola2025e2d2,
  title={Encoder-Decoder Diffusion Language Models for Efficient Training and Inference},
  author={Marianne Arriola and Yair Schiff and Hao Phung and Aaron Gokaslan and Volodymyr Kuleshov},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year={2025},
  url={https://arxiv.org/abs/2510.22852}
}
```