Question about MTEB benchmark settings: 'max_seq_length' 😭
#25 opened 14 days ago by george31
Different tokenizer silently being loaded based on `trust_remote_code`
#24 opened about 1 month ago by DarkLight1337
Is it possible to get the sparse embedding?
3
#23 opened about 1 month ago by weiminw
How to change the embedding dimension?
1
#19 opened 2 months ago by storm2008
The MTEB scores computed with eval_mteb.py differ greatly from those shown on the Leaderboard; unclear why?
1
#16 opened 3 months ago by YangGuang30
Customized further fine-tuning by users
#15 opened 3 months ago by fwj
Model keeps cache of generation in Transformers (fixed using torch.no_grad())
1
#14 opened 3 months ago by Pietroferr
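Issue #14 above reports that autograd state accumulated across inference calls until the forward pass was wrapped in `torch.no_grad()`. A minimal sketch of that fix, using a stand-in `nn.Linear` rather than the actual gte-Qwen2 model (an assumption for brevity):

```python
import torch
import torch.nn as nn

# Stand-in for the real embedding model; the same pattern applies to any
# torch.nn.Module loaded via Transformers.
model = nn.Linear(8, 4)
model.eval()

inputs = torch.randn(2, 8)

# torch.no_grad() disables graph construction, so activations are not
# retained for backward and memory does not grow across inference calls.
with torch.no_grad():
    embeddings = model(inputs)

assert embeddings.grad_fn is None  # no autograd graph was built
```

`torch.inference_mode()` is a stricter alternative with the same effect for pure inference.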
gte-Qwen2-1.5B-instruct outputs NaN during half-precision inference
2
#13 opened 3 months ago by Erin
Qwen 2.5 1.5B retrain?
4
#12 opened 3 months ago by tomaarsen
MTEB evaluation speed issue
2
#10 opened 4 months ago by xiaopli11
Support for xFormers and FlashAttention
1
#9 opened 5 months ago by le723z
ONNX.data
#8 opened 5 months ago by Saugatkafley
Fine-tuning
#5 opened 5 months ago by deleted
Sequence classification
1
#3 opened 6 months ago by prudant
MTEB score on French tasks
3
#2 opened 6 months ago by abhamadi
"Bidirectional attention"
2
#1 opened 6 months ago by olivierdehaene