ConTEB evaluation datasets
Evaluation datasets of the ConTEB benchmark. Use the "test" split where available, otherwise "validation", otherwise "train".
- illuin-conteb/covid-qa (updated Jun 2)
- illuin-conteb/geography (updated May 30)
- illuin-conteb/esg-reports (updated May 30)
- illuin-conteb/insurance (updated May 30)
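The split-selection rule above ("test" where available, otherwise "validation", otherwise "train") can be sketched as a small helper. This is an illustrative assumption, not code from the benchmark: the function name `pick_eval_split` is hypothetical, and the commented-out lines assume the Hugging Face `datasets` library.

```python
def pick_eval_split(available_splits):
    """Return the first split present, in ConTEB's order of preference
    ("test", then "validation", then "train"). Illustrative helper only."""
    for split in ("test", "validation", "train"):
        if split in available_splits:
            return split
    raise ValueError(f"no usable split among {available_splits!r}")

# Example usage (assumes the `datasets` library is installed):
# from datasets import get_dataset_split_names
# splits = get_dataset_split_names("illuin-conteb/covid-qa")
# eval_split = pick_eval_split(splits)
```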
ConTEB training datasets
Training data for the InSeNT method.
- illuin-conteb/narrative-qa (updated Jun 2)
- illuin-conteb/squad-conteb-train (updated Jun 2)
- illuin-conteb/mldr-conteb-train (updated Jun 2)
ConTEB models
Models trained with the InSeNT approach; these are the checkpoints used to run the evaluations reported in our paper.
- illuin-conteb/modern-colbert-insent (feature extraction, 0.1B parameters, updated Jun 2)
- illuin-conteb/modernbert-large-insent (sentence similarity, 0.4B parameters, updated Jun 2)