This repository contains the model described in the paper A Systematic Investigation of Distilling Large Language Models into Cross-Encoders for Passage Re-ranking.
The code for training and evaluation can be found at https://github.com/webis-de/msmarco-llm-distillation.
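
For quick scoring, the sketch below is a minimal example, assuming the checkpoint can be loaded as a standard Hugging Face sequence-classification cross-encoder that emits a single relevance logit per query-passage pair; the lightning-ir library (which the linked repository builds on) provides the intended training and inference interface. The query, passages, and variable names are illustrative placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the checkpoint loads as a standard sequence-classification
# cross-encoder producing one relevance logit per (query, passage) pair.
model_name = "webis/monoelectra-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

query = "what is passage re-ranking?"
passages = [
    "Passage re-ranking reorders a candidate list produced by a first-stage retriever.",
    "The weather in Paris is mild in spring.",
]

# Cross-encoders score each query-passage pair jointly.
inputs = tokenizer(
    [query] * len(passages),
    passages,
    padding=True,
    truncation=True,
    return_tensors="pt",
)

with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)

# Higher score means more relevant; sort passages by descending score.
for passage, score in sorted(zip(passages, scores.tolist()), key=lambda x: x[1], reverse=True):
    print(f"{score:.3f}\t{passage}")
```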
This is a text-ranking model for the lightning-ir library.

Base model: google/electra-large-discriminator